Agile, DevOps & Enterprise Architecture – War or Peace?

I recently caught up with John Willis (co-author of The DevOps Handbook). We were both on a call with a client, and afterwards we hosted the CNCF & DevOps Meetup in Wellington with our good friend BMK. On that day the good old question “Do organisations need Enterprise Architecture (EA) in times of Agile & DevOps?” came up again.

The timing for this question was perfect: I have spent many years in Enterprise Architecture and Agile, while John’s background is obviously DevOps and Agile. Yet we both hold the same view: yes, absolutely! And here’s why.

What is Enterprise Architecture?

This question is fundamental, because once you understand what Enterprise Architecture is, the answer follows. The problem with the original question is really homemade: many organisations call some of their architects Enterprise Architects when they are really not, and that causes confusion. If you look after an enterprise-wide virtualisation platform, for example, you are NOT an Enterprise Architect in the Enterprise Architecture sense. I’ve written a more detailed piece on the different types of architects here.

Simply put, an Enterprise Architect helps define mission, vision, goals and the strategies to accomplish those goals. Mr & Mrs EA then help determine which capabilities (people, process & technology) the organisation needs to build in order to be able to execute the strategies that help accomplish the goals. A great framework depicting how this fits together is the means-to-end framework:

The ‘Tactical’ layer defines the programs, projects, products and services (i.e. initiatives) that build or uplift the desired capabilities. You can of course also find more scientific definitions of EA on the internet, but this is a simple and practical explanation of what EA is which works well for me.
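The means-to-end chain described above can be sketched as a tiny data model. This is purely illustrative: all class names, capability names and the `capability_gaps` helper below are my own assumptions, not part of any EA framework.

```python
from dataclasses import dataclass, field

# Hypothetical, minimal model of the means-to-end chain:
# goals are accomplished via strategies, strategies require
# capabilities, and capability gaps become tactical initiatives.

@dataclass
class Capability:
    name: str  # e.g. "Online self-service"
    dimensions: tuple = ("people", "process", "technology")

@dataclass
class Strategy:
    name: str
    required_capabilities: list = field(default_factory=list)

@dataclass
class Goal:
    name: str
    strategies: list = field(default_factory=list)

    def capability_gaps(self, existing: set) -> set:
        """Capabilities the strategies need but the organisation lacks."""
        needed = {c.name for s in self.strategies for c in s.required_capabilities}
        return needed - existing

# Example: one goal, one strategy, two required capabilities
goal = Goal("Grow digital revenue", [
    Strategy("Launch self-service portal", [
        Capability("Online self-service"),
        Capability("API platform"),
    ]),
])
print(goal.capability_gaps(existing={"API platform"}))
# Each remaining gap becomes an initiative in the 'Tactical' layer.
```

The point of the sketch is the direction of derivation: initiatives are never chosen first; they fall out of the capability gaps the strategies expose.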

And now I am handing over to Agile and DevOps

And that’s the moment where Agile and DevOps come into play.
DevOps principles and the Agile way of working are methodology options (just like manual & waterfall approaches) for how you prioritise, define, build, test and deliver capabilities or capability uplifts. You could also call Agile and DevOps ‘enabling capabilities’ if you like.
And that’s really it. Simples.

Now, it’s easy to see how things get confusing when people who write Java Enterprise Edition (J2EE) based software get referred to as Enterprise Architects. But the problem is not with Enterprise Architecture, the problem is that those people are simply not Enterprise Architects.

If such a confusion exists in your organisation or with your customers, it might be worth running a workshop and getting the categorisation and scope of all the architects in your organisation examined and clarified. Examples of different types of architects are: systems, software, technical, infrastructure, operations, presales, delivery, business and enterprise architects.

Get them in a room to land on a common definition and understanding. It might get heated, but that’s OK. Terminal niceness is far worse than passionate, respectful exchange.

Keen to get your thoughts,
Andreas

Progressive Transformation – A Reference Architecture

IDC defines Progressive Transformation as the modernisation of systems through gradual replacement of technology through a prioritised catalogue of business functionality. It leverages open APIs, agile development and cloud architectures.

I think IDC’s definition is a good start, but it needs to go further. It needs to extend into capabilities not just functionality. Successful transformation considers the evolution of people (incl. skills & culture) as well as the process dimension. Let me share what I’ve witnessed over the past few years across different industries.

This reference architecture brings together several concepts:

Context – Why does the world need this?

Over the last 4 years, while working across different industries in the ‘digital’ space, I have witnessed the necessity to compete at speed while retaining a high level of quality. The learnings and observations are compiled into this reference architecture. It addresses the need for incumbent organisations to step up as nimble digital disrupters enter industries aiming to compete with specific products or services in the incumbent’s value chain. Being small, nimble, fast and unencumbered by slow-to-change legacy environments, those disrupters are slowly but surely eroding the customer value proposition and share of wallet of existing players.
This reference architecture helps established industry players compete against those new, nimble entrants.

Intro – A Historic View

‘Digitally Infused Business Transformation’ has been going on for quite a while. Progressive means continual and ongoing, not big bang. Interestingly, 10 to 15 years ago we weren’t really talking about ‘Digital Transformation’ that much – just projects really. But whenever I see technology used to alter (and hopefully improve) customer or employee experience – meaning it changes people, process AND technology – I deem it Digital Transformation.

What has changed the most, though, is
a) of course the technologies used, but also
b) the expected outcomes in terms of user experience and speed of delivery of new features.
This is important because it directly alters solution architectures and designs and impacts the way delivery teams work. People are less and less willing to wait months for an important new feature.

A quick history detour

Around the year 2000 we developed web portals in ASP/PHP with a database running on clustered servers, while business applications were written in J2EE using EJBs/JPA and Hibernate. In 2005, I developed mobile applications using the .NET Compact Framework. Even though high cohesion and loose coupling have always been design principles, back then we didn’t really talk about APIs, distributed integration, and microservices. Gantt charts and project plans were a must. The Agile Manifesto was born in 2001, Domain-Driven Design came to life around 2003, and Linux Containers around 2008. In 2004, my team implemented ‘daily build and smoke tests’ for the first time, which I consider the predecessor of today’s CI/CD.
All those individual evolutions in combination with recognising organisation culture as a key enabler to create high performing organisations have led to a perfect storm which today drives Digital Transformation.

Enough reminiscing.

Conceptual Reference Architecture

I am now introducing a conceptual reference architecture to enable IDC’s Progressive Transformation which my colleagues and I have applied and used as a starting point within successful customer engagements. This reference architecture has even found its way into strategy documents in our client base.

Before I start I’d like to recognise different architectural layers. From top to bottom these are:

  • Contextual Architecture
    Describes the wider context of a system/solution for example an industry, geography or regulatory requirements
  • Conceptual Architecture
    Captures specific architectural concepts that establish the guardrails for the logical and physical architecture as well as the lower level designs
  • Logical Architecture
    Details further logical components of the conceptual architecture, for example internal and external APIs
  • Physical Architecture
    Maps the logical components to the physical infrastructure components, for example the components are running on multiple Kubernetes clusters or servers in different regions.
Reference Architecture for Progressive Transformation

I will leave the topics of Day 2 operational concerns, Agile DevOps teams and Service Reliability Engineering for core platforms and systems out of this post, but they are nevertheless important. That’s essentially what the yellow box on the right is for.

The 3 red bars are part of the fast-cadence, fast-rate-of-change paradigm, where Agile- and DevOps-enabled teams work with customers to drive the desired features and functions through fast feedback loops. Those feature teams are responsible for both developing and running those services (DevOps). There is no throwing code over the fence on Friday afternoon. Organisations often prefer a microservices architecture, although it’s not mandatory.

The Mode 2 architecture layers

  • Experience Layer
    APIs that define the services endpoints across all channels, e.g. for web or mobile applications. A BFF shim would be part of such a layer.
  • Business Logic Layer
    The business logic layer encapsulates organisational business logic, implemented either directly in code (Java/PHP/C++/.NET applications) or through business process or business rules engines.
  • The ‘Smart’ Data Services Layer
    This one I call smart, because quite some thinking needs to go into how you engineer this layer. Macquarie Bank, for example, has put a Cassandra database into this layer. That approach often leads to physical data replication and additional data synchronisation effort. That’s why the reference architecture recommends a virtualised data layer backed by a clustered in-memory data grid for speed and de-duplication of physical data. This is mostly for READ operations, while transactions are still processed through the backends. A Kafka-backed event stream can ensure the backends are not overloaded.
  • Semantic integration between those layers is handled through distributed, lightweight, containerised integrations instead of a monolithic ESB appliance. Service dependencies are handled through a Service Mesh such as Istio.
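The read/write split in the ‘smart’ data layer can be sketched without any infrastructure. In the sketch below, a plain dict stands in for the in-memory data grid and a `queue.Queue` stands in for a Kafka topic; the account data and function names are made up for illustration.

```python
import queue

# Broker-free simulation of the read/write split:
# READs come from an in-memory cache, WRITEs are appended
# to an event stream the backend drains at its own pace.

read_cache = {"acct-1": {"balance": 100}}   # stands in for the data grid
event_stream = queue.Queue()                # stands in for a Kafka topic

def read_balance(acct: str) -> int:
    # READ path: served from the cache, never hits the backend
    return read_cache[acct]["balance"]

def submit_transaction(acct: str, amount: int) -> None:
    # WRITE path: append an event; a burst of writes cannot
    # overload the core system because the backend consumes later
    event_stream.put({"acct": acct, "amount": amount})

def backend_consumer() -> None:
    # Backend drains the stream; the cache is refreshed afterwards
    while not event_stream.empty():
        ev = event_stream.get()
        read_cache[ev["acct"]]["balance"] += ev["amount"]

submit_transaction("acct-1", 25)
submit_transaction("acct-1", -10)
backend_consumer()
print(read_balance("acct-1"))  # 115
```

The design choice worth noting: the backend sets its own consumption pace, which is exactly how the event stream protects slow Mode 1 systems from fast Mode 2 traffic.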

Most importantly, those different architectural components are hosted on a platform which abstracts away low-level concerns such as storage, networking, compute and the O/S, so that feature teams can focus on customer requirements with a starting point as close as possible to the customer, i.e. as high up the value chain as possible.

Mode 1

Underneath we have the Mode 1 layer. This is made up of mostly existing monolithic middleware components and core systems which feed the Mode 2 layer above. That said, it can also be your Salesforce CRM system that contains important data that service/product teams need to draw upon. Those systems are generally maintained by traditional ops teams. Upgrades and migrations are often executed in a traditional waterfall plan-do fashion. Both systems and ops teams are commissioned to keep the lights on, not for speed of change. Business-critical information assets need to be backed up and restored, and batch processes and maintenance routines run (processes).

This Mode 1 paradigm is also important so that not all people have to change the way they work at once. Agile is less frequent in those teams, although in progressive organisations I see Mode 1 teams generate upward pressure on their managers, as they too want to use new tools, technologies and ways of working. This is where automation and a road map towards service reliability engineering (SRE) can become important to keep growth-mindset staff engaged and progressing.

To summarise, these concepts marry modern application design (microservices, Domain-Driven Design) and modern ways of working (DevOps & Agile) with existing legacy systems and Mode 1 operations. This combination allows incumbent industry players to compete with digital disrupters in their own or even adjacent industries.

Keen to get your thoughts,
Andreas

SAP, HANA and how your ERP migration onto the right operating system leads to a competitive advantage

If you are only interested in the hard facts on SAP, HANA and why Red Hat Enterprise Linux is the most sensible choice scroll down to “SAP & The Good News”. Otherwise, keep on reading.

I do like karate and boxing. I train weekly and often get beaten up, but I keep going back. It’s hard but rewarding. I also worked on a large-scale Enterprise Resource Planning (ERP) system migration, but I won’t go back and do it again unless I have to.

Below is why and some lessons learned.

A little bit of history

The ERP migration project of doom was done in stages.
Stage 1 included General Ledger, Fixed Assets, AR, AP and Cash Management. We created a dedicated integration hub based on Camel/Fuse to synchronise the financial transactions daily between the old and the new (Oracle E-Business Suite) ERP. We had to hire a full-time employee just to maintain transactional integrity on a daily basis. I won’t go into details with regards to month-end, quarter-end or end-of-FY processing, but you get the idea.
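The kind of daily reconciliation that full-time role performed can be sketched in a few lines. The transaction format and field names below are hypothetical; a real Camel/Fuse hub would pull these records from both ERPs.

```python
# Toy daily reconciliation between two ERP transaction extracts:
# flag ids missing on either side plus amount mismatches.

def reconcile(old_erp_txns, new_erp_txns):
    """Compare two transaction lists keyed by id."""
    old = {t["id"]: t["amount"] for t in old_erp_txns}
    new = {t["id"]: t["amount"] for t in new_erp_txns}
    return {
        "missing_in_new": sorted(old.keys() - new.keys()),
        "missing_in_old": sorted(new.keys() - old.keys()),
        "amount_mismatch": sorted(
            tid for tid in old.keys() & new.keys() if old[tid] != new[tid]
        ),
    }

report = reconcile(
    [{"id": "T1", "amount": 100}, {"id": "T2", "amount": 50}],
    [{"id": "T1", "amount": 100}, {"id": "T3", "amount": 75}],
)
print(report)
# {'missing_in_new': ['T2'], 'missing_in_old': ['T3'], 'amount_mismatch': []}
```

Every non-empty entry in that report is something a human had to investigate and fix, every single day, which is why the role was full-time.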

Stage 2 was originally planned to include everything else. All process domains including Order to Cash (O2C – warehousing, logistics, COGS accounting, deferred COGS etc), Call to Service (onsite and remote support processes across APAC), Implement to Deliver (installation of hardware equipment to deliver annuity revenue), Procure to Pay, Accounting to Reporting, Forecast to Plan, Project Accounting, etc. The scope was BIG. And because it touches every process domain, ERP migrations are often referred to as open heart surgery. Another rather interesting fact was that the ‘leadership’ team had already pre-determined the completion date before the initial analysis (in good old waterfall style) was completed.

But hold on, it gets better. Because it pretty soon became pretty clear that the timelines were unrealistic, the scope was cut down, which sounds like a reasonable thing to do. Unfortunately, ERP systems aren’t really designed for integrating with other ERP systems. They prefer to be THE BOSS in da house.

A simple example within your O2C domain: your inventory master might not live in the same ERP system that your front-end order portal connects to. This means you need additional integration and business logic to make sure customer expectations are met with regards to stock levels, back orders and delivery timeframes. This then spills over into billing processes, COGS accounting, revenue recognition, and reporting. And that’s just a simple order example. I am sure you can appreciate the level of complexity created by the idea of ‘reducing scope’ – cutting the scope by 50% created an estimated 1,000 times higher complexity. This in turn increases risk and cost and can even nullify the original business case for the entire program of work.
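To illustrate, here is a toy available-to-promise check for when the inventory master and the order portal live in different systems. All function names, SKUs and quantities are made up; the point is only that this logic has to live somewhere between the two ERPs.

```python
# Hypothetical glue logic between an order portal and a separate
# inventory master: reserve stock or trigger a back-order.

def available_to_promise(inventory_master: dict, open_orders: list, sku: str) -> int:
    """On-hand stock minus quantities already reserved by open orders."""
    on_hand = inventory_master.get(sku, 0)
    reserved = sum(o["qty"] for o in open_orders if o["sku"] == sku)
    return on_hand - reserved

def accept_order(inventory_master, open_orders, sku, qty) -> str:
    atp = available_to_promise(inventory_master, open_orders, sku)
    if qty <= atp:
        open_orders.append({"sku": sku, "qty": qty})
        return "confirmed"
    # a back-order ripples into billing, COGS and revenue recognition
    return "back-order"

stock = {"WIDGET-1": 10}
orders = [{"sku": "WIDGET-1", "qty": 6}]
print(accept_order(stock, orders, "WIDGET-1", 3))  # confirmed
print(accept_order(stock, orders, "WIDGET-1", 5))  # back-order
```

Ten lines of glue for one simple question – and every answer it gives has downstream accounting consequences, which is where the hidden complexity of a ‘reduced scope’ lives.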

A video summary on this is available here.
[youtube https://www.youtube.com/watch?v=5p8wTOr8AbU]

The toll such mismanagement takes on people (and unfortunately mainly on those who care) is tremendous. We had people who didn’t have a weekend in 3 months!

Things you might want to consider before you move onto SAP HANA 

Now, one ‘thing’ that caused complexity and cost was that all the business logic was hardcoded in the old ERP system. On top of that, the new ERP system wasn’t a greenfield implementation either, because other regional OpCos were already on it. The legacy ERP systems integrator (SI) charged us $300 just for estimating the cost of a change request each time we tried to make the migration and integration easier. I see this as an indicator that procurement departments might not be best equipped to help draft business-transformation-related sourcing contracts, but that’s part of a different discussion.

But even without that, having to transport business logic from one system into another is hard, and if you can avoid it, I recommend you do so.

SAP & The Good News!

I attended the SAP FKOM earlier this year. Upon my inquiry, I found out that SAP now endorses the “hug the monolith” or “strangle the monolith” pattern. It’s the same thing; which name you prefer probably depends on how affectionate you are towards your fellow humans, or whether you’ve got a handle on your anger management issues (which can easily become relevant when you work 3 months straight without a weekend).

It basically means: “Don’t put customisations into your core SAP system, but rather around it” – for example into microservices or smaller units of code built on open source technologies such as Java or Python (any technology whose associated skillsets are easy to find in your market is good!) – and use them to drive a DevOps culture. If you use SAP HANA in conjunction with Red Hat OpenShift, then you have enabled what Gartner calls bi-modal IT.
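As a hedged sketch of the idea: below, a made-up tier-discount customisation lives outside the core, and `fetch_standard_price` merely mocks a call into the untouched SAP system. It is not a real SAP API; the shape of the pattern is the point.

```python
# "Strangle the monolith" in miniature: the core system stays vanilla
# and is only consulted through a stable interface, while the
# customisation (tiered pricing) lives in a separate microservice.

def fetch_standard_price(sku: str) -> float:
    # Stand-in for a call to the untouched core ERP
    return {"WIDGET-1": 100.0}.get(sku, 0.0)

def quote_price(sku: str, customer_tier: str) -> float:
    """Customisation kept OUTSIDE the core: tiered discounts."""
    base = fetch_standard_price(sku)
    discount = {"gold": 0.20, "silver": 0.10}.get(customer_tier, 0.0)
    return round(base * (1 - discount), 2)

print(quote_price("WIDGET-1", "gold"))    # 80.0
print(quote_price("WIDGET-1", "bronze"))  # 100.0
```

Because the discount logic never touches the core, the next SAP upgrade only has to re-verify one stable interface rather than untangle embedded customisations.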

The corresponding architecture would look similar to below:

Conceptual Architecture of SAP HANA within a Microservices, Kubernetes and DevOps enabled environment

The main capabilities of your platform need to be

  • All architectural layers can be hosted on the platform
    • Experience layer – API, Web/Mobile UI
    • Business Logic – Business rules, business process management, Java, Python, AI/ML, FaaS etc
    • Smart data services – a virtualised in-memory data services engine that lets you slice & dice the data you need to fuel your microservices.
    • Agile integration – distributed, container-based, scalable integration that supports backend and front-end connectivity
  • The core components of your platform are backed by large open source communities like the ones behind Kubernetes, Docker, CRI-O, OVS, Ceph
  • There is a stable, profit-generating vendor supporting your platform that provides enterprise-ready support for prod and other important environments
  • Your operating system is certified across all major hybrid and multi-cloud players and can run both your container platform and SAP/SAP HANA.

Another benefit of this architecture is that it allows for an evolution towards a multi-cloud-ready, platform-based operating model, which IDC and others publicly promote as the way forward. The benefits of this operating model and the importance of open source as part of it are summarised in a short 8-page whitepaper I co-authored with Accenture.

Next Steps

When upgrading or migrating their ERP system onto the newer SAP HANA version, organisations can be smart about the customisations they need and put them into containers and microservices on an enterprise-ready multi-cloud container platform.

This then translates into

  • Cost savings and lower risk via an easier upgrade path for non-customised ERP systems in the future; 
  • Increased revenue and business agility, as well as reduced cost in the backend, all through the support of a DevOps and Microservices empowered delivery platform for both backend and customer-facing products and services.

The no-brainer choice on the SAP HANA Operating System

SAP HANA comes with two supported operating systems: SUSE Linux Enterprise Server and Red Hat Enterprise Linux. But you don’t really have to spend a lot of time making this choice. If you want to transform your business and/or leverage DevOps, containers and a microservices architecture, Red Hat Enterprise Linux is the sensible choice.

Only Red Hat Enterprise Linux builds the hybrid- and multi-cloud-ready foundation for an enterprise-ready container platform (OpenShift) as well as for SAP and SAP HANA. So if you want the best return on investment (ROI) on your mandatory SAP upgrade, Red Hat Enterprise Linux is the way to go.

The Value Chain view

[Figure: SAP HANA Wardley value chain map]

The value chain point of view is depicted above:

  1. Choosing the right operating system powers both your core systems and mode 2 enabling OpenShift platform and hence reduces operational cost and the requirement to hire rare and expensive skills
  2. By building upon an open source platform, innovation will automatically evolve your technology landscape
  3. The application of a platform based operating model allows organisations to innovate, experiment, learn and deliver new products & services faster. A higher learning rate leads to building the right things. DevOps enabled platforms help to build the right things fast.
  4. Combined, the above accelerates organisations towards the delivery of customer value and hence enhanced competitiveness

Organisations often struggle to move towards Digital. One reason for that is the new tools and technologies they need to utilise. By choosing the Red Hat Enterprise Linux operating system, organisations can pave the way to a successful digital transformation.

Keen to get your thoughts,
Andreas

Not all Open Source is made Equal

It is quite astounding to see how many proprietary software vendors have come out now claiming their products are Open Source.

If you walk back down memory lane, some of those very same vendors made fun of Open Source software not too long ago (including myself, but I won’t mention that). It’s OK though, everyone has a right to learn and grow at their own speed.

While it’s quite clear why it’s great to do ‘Open Source’, consumers need to be aware of some nuances of Open Source Software (OSS), as these can impact ROI and speed of innovation, and can even mean lock-in. Plus, it’s where your missing innovation budget might be hiding.

Below are some aspects to consider when choosing your OSS stack and provider:

  • Project vs Product
    Understand the difference between an Open Source Software (OSS) project and a product. A project is driven by the community; it’s the latest and greatest. An OSS provider should take the OSS project, make it enterprise-ready, security-harden it, ensure backwards compatibility, do testing, convert it into a supported OSS product, release bug/security fixes and put enhancements back into the OSS project. An OSS vendor fully supports an Open Source product, whereas an Open Source project is supported by the community on a best-effort basis. Enterprises usually need a fully supported product and can’t rely on best-effort support.
  • Enterprise Readiness
    Does the Open Source product provider understand ‘enterprise readiness’, i.e. 24×7 support, extended lifecycle support (ELS), patching, upgrades, updates, a security response team with quick fix-time SLAs, training, mentoring, Professional/Consulting Service offerings, a rich partner ecosystem and certified hardware and software stacks? If any of the above are non-existent, you need to question why, and ascertain whether the OSS provider offers solutions you can rely on to run business-critical functions.
  • Product components
    Are all product components open sourced, or do specific components you require (e.g. connectors, add-ons, plug-ins) need a proprietary license to run? What happens if the license expires? Can you reboot, and will the software still be running after the license has expired?
  • Upstream First
    Does the Open Source software product provider give fixes back to the community project? By doing so, the community version stays aligned with the product. Not doing so is basically forking, and the power of community-driven innovation gets lost as the community will produce a different, incompatible version. Any vendor who does not contribute enhancements back into the upstream project is not open source. Open source is about give and take; not giving back to the community is just take.
  • Stability vs Latest Features
    Does the Open Source Software provider guarantee backwards compatibility and migration paths, or is there a risk of being forced to upgrade and re-engineer your business applications because the latest feature release is not backwards compatible? From a business perspective you want to be in charge of deciding when to upgrade, and you should prefer an OSS provider who understands why stability is more important than the latest hype or feature.
  • Support life-cycle
    How long is the OSS provider’s support lifecycle – 1 year, 3 years, 10 years? You don’t want to be forced to upgrade every 1, 2 or 3 years because bug fixes are only available in the latest version.
  • Community viability
    In this instance, size does matter. How large is the community behind an OSS project? Is the community perhaps just the vendor’s staff, or is there a real, non-vendor-funded community driving the project’s innovations? How active is the community? What’s the commit frequency? How mature is the community behind the project? If there is no real community behind an OSS project, you are losing out on true community innovation power, with slower innovation and bug-fix cycles than a competitor who uses a product based on large OSS community projects. This can affect your business innovation speed.
  • Project direction
    Is the project taking on established industry trends and standards such as packaging formats (e.g. Docker), patterns (e.g. EAI), platforms (PaaS/CaaS) or orchestration frameworks (Kubernetes)? This will determine whether your investments in the OSS product pay off in the mid to long term. Remember: innovation happens faster than ever before and is accelerating.
  • Ecosystem / stack support
    Does your OSS provider have other offerings up and down the software stack? Good news if they do, as it allows you to have a strategic partner who understands your business, instead of calling many different vendors/service providers to get a production issue fixed, none of whom understand the impact on your business – or worse, finger-point and say ‘not our fault’.
  • Subscriptions vs Licensing
    Do you need a license for your OSS product to run, or is the subscription you buy merely for access to the OSS provider’s auxiliary services such as updates, patching, security fixes and support? This is important, because requiring a license to run an OSS product is similar to the vendor lock-in you get with a proprietary software vendor. A subscription, on the contrary, does not lock you in, and you are free to move at any time. Longer license terms (e.g. 3 to 5 years) also significantly affect the accounting and potentially the OPEX/CAPEX classification of your software assets, depending on your industry-specific accounting rules. A subscription is deemed a service, and as such is classified as OPEX. A license, on the other hand, is considered CAPEX. This is an important distinction, since your organisation may find itself under considerable CAPEX pressure, while the ability to reduce your fixed OPEX costs through the strategic deployment of OSS subscriptions could result in significant cost savings.

This is where your missing budget for innovation can come from.

I hope this helps you select the right partnerships in the open source space to make your business thrive in the digital economy.

Please let me know if you want different aspects added.

Andreas

On Architects

First things first: a TOGAF certification alone does NOT make anyone an Enterprise Architect. There you have it.

Inspired by a recent article on LinkedIn (which seems to have been removed now), I started to think about how we can get back on track and bring clarity into the language around architecture roles. At the end of the day, a role title should help you communicate what you do and what your circle of concern is. This in turn brings clarity and value to the organisation you work for.

Clear communication is key in all our relationships – with our family, our kids and in business. Ambiguity and confusion make communication, and hence relationships, more difficult; they can cause stress and waste time and money. In IT we see lots of ambiguity and confusion created, either through marketing material or through people who want fancy job titles. I remember meeting my first ‘Enterprise Architect’ at the W-JAX in Munich in the ’90s. I loved his title, and he was absolutely an expert – at coding in Java 2 Enterprise Edition. His session didn’t talk about business strategy, competitive analysis, market forces, business capability roadmaps or sales & operational planning processes. See where I am coming from?

The ambiguity in IT can be mind-numbing – SOA, Cloud, API and architecture roles are just a few examples. Marketing departments want to create a ‘me too’ in a specific field even though they don’t have an offering, which then creates confusion. An example: back in 2006 AWS started having ‘Cloud Computing’ conversations – simply utility-based, pay-per-use consumption of computing resources. Then VMware came along, wanting to be a ‘me too’, and coined the term ‘Private Cloud’ for something that was merely infrastructure virtualisation. And voilà: ambiguity created! The result? Today, if you talk ‘Cloud Computing’, you need to spend 15 minutes just to establish the context – IaaS, PaaS, SaaS, private, public, hybrid, or do you really just mean infrastructure virtualisation? And there are plenty of other examples out there.

The author of the above LinkedIn article has done a stellar job, up to the point where he introduces the term ‘Enterprise Solution Architect’ – which is exactly how the confusion started in the first place.

‘Enterprise’ Architects – meaning architects with the word ‘Enterprise’ in their title – are plentiful; TRUE Enterprise Architects are few. I meet many so-called ‘Enterprise’ Architects throughout the year, and unfortunately most of them lack

  • Involvement in implementing their companies’ strategy
  • Involvement in strategy workshops
  • Meetings with executives on a regular basis to discuss strategic initiatives
  • A business capability or strategic roadmap
  • Involvement in business transformation programmes – neither actively nor as a steering committee member
  • An understanding of more than 1 or 2 business and technical concepts at a 101 or 201 level (eg S&OP, Order to Cash, accounting, infrastructure, integration, information modelling, business process modelling or applications), needed to present the necessary architectural options and ramifications for the business.

Architects who lack the above characteristics simply are NOT Enterprise Architects; they are perhaps Technical Architects working in an ‘Enterprise’ instead. Technical Architects look after applications, data stores or infrastructure, for example.

Developing into an Enterprise Architect requires multiple years of learning all the different aspects of the business and the industry. I don’t see any way a seasoned EA gets there without being prepared to be the dumbest person in the room – in many meetings, over many months or even years. The resilience, ability and willingness to learn is paramount – cost centres, deferred COGS, SOX compliance, consignment notes, recovery, pick release, ship confirm, POD, service route optimisation, freight cost, sales and operational planning – there’s a lot to learn.

Here is an attempt to categorise Architecture roles graphically:

[Figure: categorisation of architecture roles]

In summary, the important points in my view are:

  • Solution Architects are all-rounders and the vital, necessary glue to deliver successful business outcomes
  • Solution Architects overlap with all other architecture roles. The reason is clear: in order to solution the right business outcomes, you need to understand the Enterprise context, strategic roadmaps, industry and technology well enough to have a meaningful conversation with both the business and the technologists in your organisation, with the aim of ensuring successful delivery
  • There can be other architecture roles focussed on Information, Security, Data or specific Domains
  • There is no need to confuse things with the word ‘Enterprise’, as in ‘Enterprise Solution Architect’. All solutions, projects and capabilities automatically live within the Enterprise context
  • Organisations are different, which means the above categorisation is not set in stone but a guide. The scope of each architecture role can be more or less focussed on aspects such as projects, technology, business or strategy. There can be many reasons for this – the number of architects, or the lifecycle stage of an organisation (see the STARS Framework), for example.

Using the above categorisation instead of business card titles helps me better understand where architects sit within the Enterprise context, their circle of influence / circle of concern, what topics to prepare for, how best to communicate, and the areas I can learn from my interlocutor (I had to Google that word first :)).

Keen to get your view.

Living with your Enterprise Architecture Repository – A Recap

This article examines questions such as:

  • Have we achieved what we set out to achieve with our Enterprise Architecture Repository (EAR)?
  • Have we created value through the EAR?
  • Have we evaluated the products according to what we value now as important?

Usage Scenarios

We currently have a team of around 30–40 people using our EAR to model our business processes, and 6 architects creating architectural artefacts across the TOGAF domains (BDAT: Business, Data, Application, Technology).

Business Process Modelling

The process modellers work on our ERP transformation program or on separate business process optimisation projects.
Lessons Learned: We are glad that we have an EAR where our business processes have a predefined home. Equally important, however, is rolling out training to new starters and to people who have done process modelling the ‘old’ way. Most important is having an Enterprise-wide process framework in place so that the business processes of the different projects and programs fit together. Without a framework you will only ever end up with point solutions or project-focussed views, with no chance of looking at business processes across the Enterprise as a whole.

Human Resource Requirements

Due to the extended usage of our EAR we now have 3 EAR administrators instead of the single admin resource we started with. This is of course due to the higher workload, but it also requires that all system administrators and ‘power users’ share the same knowledge of foundational principles such as the meta-model, the library structure and how core features like impact analysis work.
Furthermore, other profiles with advanced access rights have to share a common set of guidelines and mapping standards to create EA assets in a consistent way. For example: knowledge of our APQC PCF, our modelling standards, our versioning standards, and the difference between AS-IS and TO-BE.

Access Control vs. Roll-Out & Critical Mass

With external consultants coming in, half-knowledge about our EAR has proven dangerous: to satisfy reporting requirements, a consultant introduced new relationships, altering the meta-model in an inconsistent way. More than 6 months later we are still cleaning up those inconsistent meta-model changes. Giving out too much access is probably a cultural aspect which might not happen in other organisations, but in the initial phases you might struggle to get the critical mass together for an Enterprise-wide roll-out, or you may not know the administration/access features well enough, so it’s good to be aware and to specifically lock down meta-model-altering access rights.

Impact Analysis

The architecture team is often tasked with examining ‘what-if’ scenarios.
For example: what if only a single module goes live instead of the entire solution? Questions like this still cannot be answered through our EAR. Even though you could customise the meta-model to help answer them, it would require tremendous discipline to model projects to that level of detail right from the start.
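Conceptually, this kind of impact analysis over a repository with a content meta-model is just a graph traversal: artefacts are nodes, relationships are edges, and a ‘what-if’ question means following dependencies from a changed element. A minimal sketch in Python – the artefact names and relationships are invented for illustration, not our actual model:

```python
from collections import defaultdict, deque

# Directed "depends-on" edges between repository artefacts (illustrative data).
edges = [
    ("Order-to-Cash process", "ERP Finance module"),
    ("Ship Confirm process", "ERP Logistics module"),
    ("ERP Finance module", "ERP core"),
    ("ERP Logistics module", "ERP core"),
]

def impacted_by(artefact, edges):
    """Return every artefact that transitively depends on `artefact`."""
    dependants = defaultdict(set)
    for src, dst in edges:
        dependants[dst].add(src)  # reverse edge: who depends on dst?
    seen, queue = set(), deque([artefact])
    while queue:
        node = queue.popleft()
        for dep in dependants[node]:
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

# What would a change to the ERP core ripple into?
print(sorted(impacted_by("ERP core", edges)))
```

The discipline the paragraph mentions is exactly what this sketch glosses over: the traversal is trivial, but it only gives useful answers if every project models its relationships to this level of detail from day one.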

Keeping things consistent

We have a tightly knit architecture team – all the architects sit in the same room as well, which makes it relatively easy to apply the same methodology and to sync across the team. If you don’t have this luxury, however, it might be good to define tool-specific modelling standards before you roll out your EAR. Making changes to existing artefacts is always harder than doing it right from the start.

Most important product features

More than a year into the deployment of our EAR, the following features have proven most important:

  • Report generation and information export
  • Web browser access for everyone in the organisation (using an enterprise licence instead of seat licences). The saying ‘Architecture is only alive if you can see it’ is very true. You need to be able to share EA assets and views with everyone. Exporting Visio diagrams or PDFs is not going to work, as you are constantly updating your artefacts – remember, Enterprise Architecture is a living thing. Being able to send out web URLs to specific documents or document versions has proven really useful – no more outdated content.
  • Visio user interface – change management and onboarding have been fairly easy given that almost all modellers have had previous experience with Visio
  • Access Control based on user profiles such as
    • EPMO Business Analyst – Read Access to all projects across the Enterprise project portfolio, CRUD access to business related artefacts, mostly processes in the TO-BE and AS-IS libraries
    • Business Architect – same as BA, but across all TOGAF domains
    • Project Analyst – restricted access to only specific project related folders
    • Domain Architect – Same Access as Enterprise Architect
    • Enterprise Architect – Full access, close to System Admin access
    • System Admin – Full access to user, library, meta model and fundamental repository configuration
    • Portal User – read only access to all content
  • Library & folder restructuring – we have restructured our library folder structure several times to satisfy demand and ease of use across
    • AS-IS and TO-BE gap analysis
    • APQC process framework roll out
    • Project based access
    • Avoiding confusion for people with specific project based usage requirements
    • Creation of project and Enterprise wide views
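The access profiles listed above boil down to a role-to-permission matrix. Here is a minimal sketch of that idea; the role and permission names are invented approximations of the profiles described, not the actual iServer configuration:

```python
# Permissions per role (illustrative; a real EAR is far more granular).
ROLE_PERMISSIONS = {
    "portal_user":          {"read"},
    "project_analyst":      {"read", "write_project_folder"},
    "epmo_ba":              {"read", "crud_business_artefacts"},
    "business_architect":   {"read", "crud_business_artefacts", "crud_all_domains"},
    "enterprise_architect": {"read", "crud_all_domains", "manage_libraries"},
    "system_admin":         {"read", "crud_all_domains", "manage_libraries",
                             "edit_meta_model", "manage_users"},
}

def can(role, permission):
    """True if the given role grants the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# Lesson from the consultant incident: only system admins
# should be able to alter the meta-model.
assert can("system_admin", "edit_meta_model")
assert not can("business_architect", "edit_meta_model")
```

Keeping the matrix explicit like this – rather than handing out ad-hoc rights – is what makes it easy to spot that meta-model-altering permissions belong to system admins only.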

So…

Would we have still examined the same features as we did originally during our vendor selection if we knew what we know today?

Yes and no.
Yes, because all the features we looked at are still important.
No, because we have learned which other features have proven useful and what’s really important, hence additional features would be added and the weighting across all features would be different.

Selecting your Enterprise Architecture Repository

Intro

After several recurring forum conversations I thought it could be helpful to document the steps we took to procure and establish our Enterprise Architecture Repository (EAR).

Of course this is by no means the only way to find the right EA Repository for you, but it worked for us and we are still very happy with our choice.

Context

Probably like many other organisations, we started off creating our EA assets in Microsoft Word, Excel and Visio and stored them on shared file servers and our document management system, DocuShare.

The problem with this approach – as you know – is that there is no underlying content meta-model which semantically links the architecture artefacts. The consequence is that analysis has to be done manually. You can write macros in Visio, Word & Excel, but I don’t think that is time and effort well spent for an Enterprise Architect.

The Team

To get a broader view of requirements across the business I assembled a team comprising the following:

  • 2 Enterprise Architects
  • 2 Solution Architects
  • 2 SOX Compliance Officers
  • 1 National Quality Assurance Officer

Due to many conflicting projects and activities, and with only 1 Enterprise Architect being the ‘business owner’ of the EA Repository, we ran into several resource scheduling conflicts. As you can only score objectively if you sit through the presentations and demos of all vendors, that was a challenge.

Fortunately, one of the Solution Architects and the National QA Manager were really dedicated, so we ended up with 3 independent scores we could average. I also recommend involving an IT Operations representative, so that the requirements of the Application Portfolio Management component are represented, if that’s a use case for your EAR within your organisation.

The Process

You won’t get it 100% right. A year down the track we are using the EAR in ways we didn’t think of, but that’s only a good thing, as we are reaping rewards beyond what we envisioned.

After high level requirements gathering, the process we followed was:

  1. Market Research & Product Shortlisting
  2. Requirements lock-down & Weighting
  3. Product Demonstration & Scoring

Market Research & Product Shortlisting

The company had an ‘all you can eat’ arrangement with Gartner and Forrester Research. That made it easy to do quick market research. We also talked to fellow Enterprise Architects and opted to include one product which wasn’t on the Gartner Magic Quadrant.

Gartner and Forrester have quite a comprehensive selection of papers on this topic. The documents we found most helpful were:

  • Gartner: Understanding the Eight Critical Capabilities of Enterprise Architecture Tools
  • Gartner: Select EA Tools Use Cases Are Not Optional
  • Gartner: Magic quadrant for Enterprise Architecture

After reading through the documents, I scheduled a call with a Gartner analyst on the topic to clarify my understanding. I specifically asked why a tool like Orbus iServer is not mentioned in the Magic Quadrant paper, as it had been recommended to us by other Enterprise Architects and we knew that Cathay Pacific was using it, too, and was happy with it.
I learned that the Magic Quadrant selection process also includes things like disclosing the product roadmap to Gartner, Gartner-specific certifications and customer references. Not all of those had been satisfied by Orbus (trading as Seattle Software), hence it didn’t make it into the Magic Quadrant. For us that was not a strong enough reason not to look at the product, especially as it came with strong recommendations and was fully compatible with our existing EA assets, which had been created with the Microsoft Office suite.

At the time of evaluation, the Magic Quadrant looked as per the screenshot below. I recommend getting the latest report from Gartner if you’d like the current view.

[Figure: Gartner Magic Quadrant for Enterprise Architecture tools at the time of evaluation]

The Product Shortlist

After a first high level evaluation of the products in the market, research papers and recommendations we shortlisted the following products (vendors):

  • ARIS (Software AG)
  • Abacus (Avolution)
  • iServer (Orbus)

At first, alfabet was not on the shortlist. Software AG had just acquired this product through the acquisition of planningIT. The Software AG technical representative offered an introduction and demonstration at short notice which fitted our schedule, so we agreed to have a look at it as well. After the demo it was clear that this product was not what we were looking for in an EA Repository, due to the rigidity of its prescribed process and the absence of a content meta-model. I also downloaded iteratec’s iteraplan for a quick evaluation but found the tool not very user-friendly.

Requirements Lock Down & Weighting

The evaluation group defined the evaluation criteria categories and weighting as follows:

ID Description AVG Weight
1 Repository & content meta model – capabilities & fit 8.8
2 Modeling – support for business process and EA modelling 9.4
3 Gap Analysis & Impact Analysis – ease of use, capabilities 8.4
4 Presentation – automatic generation & capability 7.2
5 Administration – system and user access administration 6
6 Configurability –  usage, access rights, output (not including content meta model) 6.8
7 Frameworks & Standards support – e.g. TOGAF, eTOM, BPMN, reference architectures 6.6
8 Usability – Intuitiveness of UI and administration 8.4
9 Adoption/Change Management – effort to roll-out and adopt 9
10 Fit for Purpose (Use case eval, risk, compliance, business requirements, customer centricity) 9
11 Extensibility / Integration ability with other systems 7.4
12 Vendor: Interactions – Responsiveness, quality, detail, professionalism, support 6.2
13 Supports Risk & Compliance (e.g. JSOX) tasks/initiatives 6.8
14 Supports Quality Management (ISO9001) tasks/initiatives 6.6
15 Gartner Research results & recommendations for suitability 4.6

The weight semantics were defined as:

  • 0 – Irrelevant
  • 1 – Insignificant
  • 2 – Fairly unimportant
  • 3 – Somewhat unimportant
  • 4 – Nice to have (e.g. ease of use)
  • 5 – Nice to have (increased productivity/efficiency)
  • 6 – Somewhat important
  • 7 – Important
  • 8 – Fairly important
  • 9 – Very important (represents key requirements)
  • 10 – Critical/extremely important (failure to satisfy requirements in this category will cause elimination of the product)
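Mechanically, the scoring works like a weighted sum: each evaluator scores a product per category, the scores are averaged across evaluators, multiplied by the category weight, and summed into a product total. A sketch with made-up numbers (a subset of the categories above; these are not our actual scores):

```python
# Category weights taken from the table above (subset, for brevity).
weights = {"meta_model": 8.8, "modelling": 9.4, "usability": 8.4, "adoption": 9.0}

# Each evaluator's raw scores (0-10) for one product -- invented numbers.
evaluator_scores = [
    {"meta_model": 8, "modelling": 9, "usability": 7, "adoption": 8},
    {"meta_model": 7, "modelling": 8, "usability": 8, "adoption": 9},
    {"meta_model": 9, "modelling": 9, "usability": 6, "adoption": 8},
]

def weighted_total(weights, evaluator_scores):
    """Average each category across evaluators, then apply the weights."""
    total = 0.0
    for category, weight in weights.items():
        avg = sum(s[category] for s in evaluator_scores) / len(evaluator_scores)
        total += weight * avg
    return total

print(round(weighted_total(weights, evaluator_scores), 1))
```

With all 15 categories and three evaluators this produces totals in the range reported later (e.g. 2399 vs. 3582.2); a weight-10 category scoring zero would additionally eliminate the product outright, per the semantics above.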

Our Requirements

ID Category Description
1 10 Repository must be shared and accessible by all EA practitioners, Solution Architects, Business Analysts and business stakeholders
2 1 Must allow for customised meta models
3 10 Existing assets (.ip – process files & visio diagrams) need to be converted and linked into meta-model
4 10 Built-in Version Control
5 11 Integration/Linkage with requirement system
6 11 Integration/Linkage with other systems WIKI, DocuShare, FileFolder
7 8 Must be able to deal with a large number of artefacts (10,000+) & performance tuning options
8 2 Must be able to understand & analyse complex relationships between artefacts (on-top links, semantics of a relationship, 1:n, m:n)
9 2 Support Scenario (what-if) planning & scenario modelling
10 4 Support multiple/different stakeholder viewpoints & presentations
11 2 Facilitate implementation of business strategy, business outcomes and risk mitigation
12 2 Repository supports business, information, technology, solution and security viewpoints and their relationships. The repository must also support the enterprise context composed of environmental trends, business strategies and goals, and future-state architecture definition.
13 2 Modelling capabilities, which support all architecture viewpoints (business processes (BA), solution architecture (SA))
14 3 Decision analysis capabilities, such as gap analysis, impact analysis, scenario planning and system thinking.
15 4 Presentation capabilities, which are visual and/or interactive to meet the demands of a myriad of stakeholders. Presentations can be created via button click.
16 5 Administration capabilities, which enable security (access,read,write), user management and modeling tasks.
17 6 Configurability capabilities that are extensive, simple and straightforward to accomplish, while supporting multiple environments.
18 7 Support for frameworks (TOGAF, COBIT, eTOM), most commonly used while providing the flexibility to modify the framework.
19 8 Usability, including intuitive, flexible and easy-to-learn user interfaces.
20 2 Draft mode before publishing edited and new artefacts
21 1 Supports linking the Business Motivation Model ((Means) Mission, Strategy, Tactics >>> (Ends) Vision, Goals, Objectives)
22 2 Needs to support multiple iterations of TOGAF (Architecture Capability, Development (AS-IS, TO-BE, Gap), Transition, Governance iterations)
23 2 Support for multiple notations(Archimate, UML) connecting semantics to the same content meta model
24 10 Repository Search and Browse capability for the entire organisation
25 3 Creation of Roadmaps
26 3  AS-IS, Transition & TO-BE state based gap analysis across processes, information, technology, Business reference models, application architectures and capabilities
27 10 Reverse-Engineering/Introspection Capabilities for Oracle eBusiness Suite/ERP
28 6 Ease of Editability of meta model relationships
29 2 Support for linking Strategic, Segment & Capability Architectures across architecture models, processes and roadmaps
30 6 Ease of Editability of meta model objects & attributes
31 3 Strategic, Segment & Capability architectures need to be referenceable across all models, building blocks and projects
32 8 Lock-down/Freezes on changes
33 8 Role based edit/view/read/write access restrictions
34 5 Administration & Configuration training will be delivered by vendor
35 10 Price within budget
36 3 Supports “is-aligned-with-roadmap” analysis via button click
37 7 Supports library concepts (lock down/freeze) for reference models, reference architectures, and Architecture/Solution Building Blocks
38 9 Vendor has proven capabilities/support for change management efforts associated with the roll-out of a EA tool/repository
39 2 Supports multiple notation standards (Archimate, UML, TOGAF)
40 10 Preserves meta-data of existing FXA process models (mappings to software and applications are imported and semantically correctly mapped)
41 10 Preserves & understands meta-data of existing FXA Visio models (BRM, Capability, etc.)
42 11 Integration with Portfolio Management tools and Project Management tools
43 12 Alignment of what FXA needs with Gartner analysis
44 12 Provides technical/customer support during Australian business hours
45 12 Vendor pays attention to FXA requirements and business environment and build demos, questions & dialogues with FXA around it.
46 13 Must have different role-based user access levels for read, write, administration and public (browse) for different types of assets
47 13 Must not allow users to sign up for inappropriate level of access without permission
48 13 Writes access logs for successful and failed logons and for user profile/role changes
49 10 Supports the modelling, documentation, query & analysis of Customer Touchpoints

The Result

Once we finally received a quote we realised it was beyond our budget, hence we had to remove ARIS from the shortlist.

After use case demonstrations from the remaining vendors, the evaluation team scored independently and came up with the following totals:

  • Abacus – TOTAL 2399
  • iServer – TOTAL 3582.2

This concluded the evaluation and made Orbus iServer a very clear choice for us.

Next Steps to Consider

  • Decide a content meta-model (TOGAF VS Archimate)
  • Repository structure & library setup to support automated roadmaps and gap analysis and to support projects
  • Import Application catalogue (application & interfaces, live date & status) (AS-IS)
  • Import existing EA assets (AS-IS and TO-BE): processes, Business Functional Reference Model, data models

Things to be aware of – Before you jump

  • Resourcing: There will need to be people to administer, maintain and continuously update your Enterprise Architecture Repository. Whenever a large change is coming which impacts your EAR, understand that this can be a full-time job for a while
  • Licensing: Make sure your business case caters for growth and additional licenses. In the case of iServer you need Visio and iServer seat licenses.
  • Training: Ensure you have a team you can work with to roll out training, especially across different domains: business (BPMN, process modelling guidelines) and meta-model extensions (e.g. interfaces, RICEFW) and the correlating relationships.
  • Publish guides and reference material (we found a WIKI most useful!)
  • Standards & Reference models: You will have to spend time and effort defining your own standards (e.g. a subset of BPMN 2.0 or the APQC PCF)