Agile, DevOps & Enterprise Architecture – War or Peace?

I recently caught up with John Willis (co-author of The DevOps Handbook). We were both on a call with a client, and afterwards we hosted the CNCF & DevOps Meetup in Wellington with our good friend BMK. On that day the good old question “Do organisations need Enterprise Architecture (EA) in times of Agile & DevOps?” came up again.

The timing for this question was perfect: I have spent many years in Enterprise Architecture and Agile, while John’s background is obviously DevOps and Agile. Yet we both hold the same view: yes, absolutely! And here’s why.

What is Enterprise Architecture?

This question is fundamental, because once you understand what Enterprise Architecture is, the answer follows naturally. The problem with the original question is really homemade: many organisations call some of their architects Enterprise Architects when they are really not, and that causes the confusion. If you are looking after an enterprise-wide virtualisation platform, for example, you are NOT an Enterprise Architect in the Enterprise Architecture sense. I’ve written a more detailed piece on the different types of architects here.

Simply put, an Enterprise Architect helps define the mission, vision and goals of an organisation, and the strategies to accomplish those goals. Mr & Mrs EA then help determine which capabilities (people, process & technology) the organisation needs to build in order to execute the strategies that accomplish those goals. A great framework depicting how this fits together is the means-to-end framework:

The ‘Tactical’ layer defines the programs, projects, products and services (i.e. initiatives) that build or uplift the desired capabilities. You can of course find more scientific definitions of EA on the internet, but this is a simple and practical explanation that works well for me.
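For the technically minded, here is a toy sketch of that means-to-end chain in plain Java records: goals are accomplished through strategies, strategies are executed through capabilities, and initiatives build or uplift those capabilities. All type and field names are my own illustration, not part of any EA framework or tool.

```java
import java.util.List;

public class MeansToEnd {
    // Toy model of the means-to-end chain; names are illustrative only.
    record Goal(String outcome) {}
    record Strategy(String approach, List<Goal> accomplishes) {}
    record Capability(String name, String people, String process, String technology) {}
    // Tactical layer: initiatives build or uplift the capabilities above.
    record Initiative(String name, List<Capability> buildsOrUplifts) {}

    public static void main(String[] args) {
        Goal goal = new Goal("Grow digital channel revenue by 20%");
        Strategy strategy = new Strategy("Launch a self-service portal", List.of(goal));
        Capability capability = new Capability("Digital delivery",
                "cross-functional feature team", "CI/CD pipeline", "container platform");
        Initiative initiative = new Initiative("Portal MVP programme", List.of(capability));
        System.out.println(initiative.name() + " uplifts " + capability.name()
                + " to execute: " + strategy.approach());
    }
}
```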

And now I am handing over to Agile and DevOps

And that’s the moment where Agile and DevOps come into play.
DevOps principles and the Agile way of working are methodology options (just like manual & waterfall approaches) for how you prioritise, define, build, test and deliver capabilities or capability uplifts. You could also call Agile and DevOps ‘enabling capabilities’ if you like.
And that’s really it. Simples.

Now, it’s easy to see how things get confusing when people who write Java Enterprise Edition (Java EE, formerly J2EE) based software get referred to as Enterprise Architects. But the problem is not with Enterprise Architecture; the problem is that those people are simply not Enterprise Architects.

If such a confusion exists in your organisation or with your customers, it might be worth running a workshop and getting the categorisation and scope of all the architects in your organisation examined and clarified. Examples of different types of architects are: systems, software, technical, infrastructure, operations, presales, delivery, business and enterprise architects.

Get them in a room to land on a common definition and understanding. It might get heated, but that’s OK. Terminal niceness is way worse than passionate, respectful exchanges.

Keen to get your thoughts,
Andreas

Progressive Transformation – A Reference Architecture

IDC defines Progressive Transformation as the modernisation of systems through gradual replacement of technology, driven by a prioritised catalogue of business functionality. It leverages open APIs, agile development and cloud architectures.

I think IDC’s definition is a good start, but it needs to go further: it needs to extend into capabilities, not just functionality. Successful transformation considers the evolution of people (incl. skills & culture) as well as the process dimension. Let me share what I’ve witnessed over the past few years across different industries.

This reference architecture brings together several concepts:

Context – Why does the world need this?

Over the last 4 years, while working across different industries in the ‘digital’ space, I have witnessed the necessity to compete at speed while retaining a high level of quality. The learnings and observations are compiled into this reference architecture. It addresses the need for incumbent organisations to step up as nimble digital disrupters enter their industries, aiming to compete with specific products or services in the incumbent’s value chain. Being small, nimble, fast and unencumbered by slow-to-change legacy environments, those disrupters are slowly but surely eroding the customer value proposition and share of wallet of existing players.
This reference architecture helps established industry players compete against these new, nimble entrants.

Intro – A Historic View

‘Digitally infused’ business transformation has been going on for quite a while. Progressive means continual and ongoing, not big bang. Interestingly, 10 to 15 years ago we weren’t really talking about ‘Digital Transformation’ that much – just projects really. But whenever I see technology used to alter (and hopefully improve) customer or employee experience – meaning it changes people, process AND technology – I deem it Digital Transformation.

What has changed the most, though, is
a) of course the technologies used, but also
b) the expected outcomes in terms of user experience and speed of delivery of new features.
This matters because it directly alters solution architectures and designs, and it changes the way delivery teams work. People are less and less willing to wait months for an important new feature.

A quick history detour

Around the year 2000 we developed web portals in ASP/PHP with a database running on clustered servers, and business applications were written in J2EE using EJBs and, later, Hibernate and JPA. In 2005, I developed mobile applications using the .NET Compact Framework. Even though high cohesion and loose coupling have always been design principles, back then we didn’t really talk about APIs, distributed integration, and microservices. Gantt charts and project plans were a must. The Agile Manifesto was born in 2001, Domain-Driven Design came to life around 2003, and Linux Containers arrived around 2008. In 2004, my team implemented ‘daily build and smoke tests’ for the first time, which I consider the predecessor of today’s CI/CD.
All those individual evolutions, combined with the recognition of organisational culture as a key enabler of high-performing organisations, have created a perfect storm that today drives Digital Transformation.

Enough reminiscing.

Conceptual Reference Architecture

Let me now introduce a conceptual reference architecture to enable IDC’s Progressive Transformation, which my colleagues and I have applied as a starting point in successful customer engagements. This reference architecture has even found its way into strategy documents across our client base.

Before I start, I’d like to recognise the different architectural layers. From top to bottom these are:

  • Contextual Architecture
    Describes the wider context of a system/solution, for example an industry, geography or regulatory requirements
  • Conceptual Architecture
    Captures specific architectural concepts that establish the guardrails for the logical and physical architecture as well as the lower-level designs
  • Logical Architecture
    Breaks the conceptual architecture down further into logical components, for example internal and external APIs
  • Physical Architecture
    Maps the logical components to physical infrastructure components, for example components running on multiple Kubernetes clusters or servers in different regions.
Reference Architecture for Progressive Transformation

I leave Day 2 operational concerns, Agile DevOps teams and Site Reliability Engineering for core platforms and systems out of this post, but they are nevertheless important. That’s essentially what the yellow box on the right is for.

The 3 red bars are part of the fast-cadence, fast-rate-of-change paradigm, where Agile and DevOps enabled teams work with customers to drive the desired features and functions through fast feedback loops. Those feature teams are responsible for both developing and running their services (DevOps); there is no throwing over the fence on Friday afternoon. Organisations often prefer a microservices architecture for this, although it’s not mandatory.

The Mode 2 architecture layers

  • Experience Layer
    APIs that define the service endpoints across all channels, e.g. for web or mobile applications. A backend-for-frontend (BFF) shim would be part of such a layer.
  • Business Logic Layer
    The business logic layer encapsulates organisational business logic, implemented either directly in code (Java/PHP/C++/.NET applications) or through business process or business rules engines.
  • The ‘Smart’ Data Services Layer
    I call this one smart because quite some thinking needs to go into how you engineer it. Macquarie Bank, for example, has put a Cassandra database into this layer. That approach often leads to physical data replication and additional data synchronisation effort. That’s why the reference architecture recommends a virtualised data layer backed by a clustered in-memory data grid for speed and de-duplication of physical data. This mostly serves READ operations, while transactions are still processed through the backends. A Kafka-backed event stream can ensure the backends are not overloaded (see the sketch after this list).
  • Semantic integration between those layers is handled through distributed, lightweight, containerised integrations instead of a monolithic ESB appliance. Service dependencies are handled through a service mesh such as Istio.
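To make that read/write split concrete, below is a minimal sketch of the write path using the standard Apache Kafka Java client. The broker address, topic name and payload are illustrative assumptions; the point is that writes become events which backend consumers process at their own pace, while reads are served from the in-memory data grid.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TransactionEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092"); // assumed broker address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // A write (e.g. a new order) becomes an event on the stream.
            // Backends consume it at their own pace instead of being called
            // synchronously - the buffering effect described above.
            producer.send(new ProducerRecord<>("erp-transactions", // assumed topic
                    "order-1042", "{\"customer\":\"C77\",\"amount\":129.90}"));
        } // close() flushes pending sends before the JVM exits
    }
}
```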

Most important is that those different architectural components are hosted on a platform which abstracts away low-level concerns such as storage, networking, compute and the O/S, so that feature teams can focus on customer requirements with a starting point as close as possible to the customer, aka as high as possible up the value chain.

Mode 1

Underneath we have the Mode 1 layer. This is made up of mostly existing monolithic middleware components and core systems which feed the Mode 2 layer above. That said, it can also be your Salesforce CRM system that contains important data that service/product teams need to draw upon. Those systems are generally maintained by traditional ops teams, and upgrades and migrations are often executed in a traditional waterfall, plan-driven fashion. Both the systems and the ops teams are commissioned to keep the lights on, not for speed of change: business-critical information assets need to be backed up and restored, and batch processes and maintenance routines need to run (processes). The Mode 1 paradigm is also important so that not everyone has to change the way they work at once. Agile is less frequent in those teams, although in progressive organisations I see Mode 1 teams generate upward pressure on their managers, as they too want to use new tools, technologies and ways of working. This is where automation and a road map towards Site Reliability Engineering (SRE) can become important to keep growth-mindset staff engaged and progressing.

To summarise, those concepts marry modern application design (microservices, Domain-Driven Design) and modern ways of working (DevOps & Agile) with existing legacy systems and Mode 1 operations. This combination allows incumbent industry players to compete with digital disrupters in their own or even adjacent industries.

Keen to get your thoughts,
Andreas

SAP, HANA and how your ERP migration onto the right operating system leads to a competitive advantage

If you are only interested in the hard facts on SAP, HANA and why Red Hat Enterprise Linux is the most sensible choice scroll down to “SAP & The Good News”. Otherwise, keep on reading.

I do like karate and boxing. I train weekly and I often get beaten up, but I do go back. It’s hard but rewarding. I also worked on a large-scale Enterprise Resource Planning (ERP) system migration, but I won’t go back and do it again unless I have to.

Below is why and some lessons learned.

A little bit of history

The ERP migration project of doom was done in stages.
Stage 1 included General Ledger, Fixed Assets, AR, AP and Cash Management. We created a dedicated integration hub based on Camel/Fuse to synchronise the financial transactions daily between the old and the new (Oracle E-Business Suite) ERP. We had to hire a full-time employee just to maintain transactional integrity on a daily basis. I won’t go into detail on month-end, quarter-end or end-of-FY processing, but you get the idea.
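For flavour, here is a minimal sketch of what such a daily synchronisation route can look like in the Apache Camel Java DSL (the technology underneath Fuse). The endpoint URIs and the mapping step are illustrative assumptions, not the actual project configuration.

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class DailyErpSync {
    public static void main(String[] args) throws Exception {
        var context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Once every 24 hours: pull the day's GL transactions from
                // the legacy ERP, map them, and push them into the new ERP.
                from("timer:dailySync?period=86400000")
                    .to("http://legacy-erp.example.com/api/gl/export")   // assumed export API
                    .process(exchange -> {
                        // map legacy chart-of-accounts codes onto the new
                        // ERP's codes here (illustrative placeholder)
                    })
                    .to("http://new-erp.example.com/api/gl/import");     // assumed import API
            }
        });
        context.start();
        Thread.sleep(Long.MAX_VALUE); // keep the route running
    }
}
```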

Stage 2 was originally planned to include everything else: all process domains, including Order to Cash (O2C – warehousing, logistics, COGS accounting, deferred COGS, etc.), Call to Service (onsite and remote support processes across APAC), Implement to Deliver (installation of hardware equipment to deliver annuity revenue), Procure to Pay, Accounting to Reporting, Forecast to Plan, Project Accounting, and so on. The scope was BIG. And because they touch every process domain, ERP migrations are often referred to as open-heart surgery. Another rather interesting fact was that the ‘leadership’ team had already pre-determined the completion date before the initial analysis was completed (in good old waterfall style).

But hold on, it gets better. Because it became clear pretty soon that the timelines were unrealistic, the scope was cut down, which sounds like a reasonable thing to do. Unfortunately, ERP systems aren’t really designed for integrating with other ERP systems. They prefer to be THE BOSS in da house.

A simple example within your O2C domain: your inventory master might not live in the same ERP system that your front-end order portal connects to. This means you need to put additional integration and business logic in place to make sure customer expectations around stock levels, back orders and delivery timeframes are met. This then spills over into billing processes, COGS accounting, revenue recognition, and reporting. And that’s just a simple order example. I am sure you can appreciate the level of complexity created by the idea of ‘reducing scope’ – the reduction of scope by 50% created an estimated 1,000 times higher complexity. This increases risk and cost, and potentially even nullifies the original business case for the entire program of work.
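To illustrate what that extra logic looks like (all class and method names below are hypothetical), even a ‘simple’ stock check becomes cross-system orchestration:

```java
// Hypothetical sketch of the cross-ERP logic described above. The two
// client interfaces stand in for whatever integration layer reaches each system.
public class OrderAvailabilityService {

    interface InventoryErp {              // the system that owns the inventory master
        int stockLevel(String sku);
        int restockEtaDays(String sku);
    }

    interface PortalErp {                 // the system behind the order portal
        void flagBackOrder(String sku, int missingQuantity);
    }

    private final InventoryErp inventory;
    private final PortalErp portal;

    public OrderAvailabilityService(InventoryErp inventory, PortalErp portal) {
        this.inventory = inventory;
        this.portal = portal;
    }

    /** Returns a delivery promise in days, reconciling the two systems. */
    public int promiseDeliveryDays(String sku, int quantity) {
        int onHand = inventory.stockLevel(sku); // source of truth lives elsewhere
        if (onHand >= quantity) {
            return 2;                           // assumed standard delivery window
        }
        // Back order: record the shortfall where the order was captured.
        // Downstream, this ripples into billing, COGS and revenue recognition.
        portal.flagBackOrder(sku, quantity - onHand);
        return inventory.restockEtaDays(sku) + 2;
    }
}
```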

A video summary on this is available here.
[youtube https://www.youtube.com/watch?v=5p8wTOr8AbU]

The toll such mismanagement takes on people (and unfortunately mainly on those who care) is tremendous. We had people who didn’t have a weekend off in 3 months!

Things you might want to consider before you move onto SAP HANA 

Now, one ‘thing’ that caused complexity and cost was that all the business logic was hardcoded in the old ERP system; on top of that, the new ERP system wasn’t a greenfield implementation either, because other regional OPCOs were already on it. The legacy ERP systems integrator (SI) charged us $300 just for estimating the cost of a change request, every time we tried to make the migration and integration easier. I see this as an indicator that procurement departments might not be best equipped to help draft business-transformation-related sourcing contracts, but that’s part of a different discussion.

But even without that, having to transport business logic from one system into another is hard, and if you can avoid it, I recommend you do so.

SAP & The Good News!

I attended SAP FKOM earlier this year. Upon my inquiry, I found out that SAP now endorses the “hug the monolith” or “strangle the monolith” pattern. It’s the same thing; which name you like better probably depends on how affectionate you are towards your fellow humans, or whether you’ve got a handle on your anger management issues (which can easily become relevant when you work 3 months straight without a weekend).

It basically means: “Don’t put customisations into your core SAP system, but rather around it” – for example into microservices or smaller units of code built on open source technologies such as Java or Python (any technology whose associated skillsets are easy to find in your market is good!) – and use that to drive a DevOps culture. If you use SAP HANA in conjunction with Red Hat OpenShift, you have enabled what Gartner calls bimodal IT. A hedged sketch of such an ‘around the core’ customisation follows after the architecture figure below.

The corresponding architecture would look similar to below:

Conceptual Architecture of SAP HANA within a microservices, Kubernetes and DevOps enabled environment
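As an illustration of the ‘around the core, not in it’ idea, the sketch below keeps a custom pricing rule in its own microservice and only reads from SAP over an OData-style HTTP API. The endpoint, payload and rule are assumptions for illustration, not a documented SAP interface.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Custom logic that would traditionally have been coded inside the SAP core.
// Here it lives beside the monolith and only reads from it over HTTP.
public class LoyaltyPricingService {

    private static final HttpClient HTTP = HttpClient.newHttpClient();

    public static double discountedPrice(String customerId, double listPrice)
            throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://sap-gateway.example.com/odata/Customers('"
                        + customerId + "')"))       // assumed OData-style endpoint
                .header("Accept", "application/json")
                .build();
        HttpResponse<String> response =
                HTTP.send(request, HttpResponse.BodyHandlers.ofString());

        // Placeholder rule: loyal customers get 5% off. A real service would
        // parse the JSON properly instead of string matching.
        boolean loyal = response.body().contains("\"Segment\":\"LOYAL\"");
        return loyal ? listPrice * 0.95 : listPrice;
    }
}
```

Because the rule never touches the SAP core, the next HANA upgrade does not have to re-test or re-implement it.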

The main capabilities of your platform need to be:

  • All architectural layers can be hosted on the platform
    • Experience layer – API, web/mobile UI
    • Business logic – business rules, business process management, Java, Python, AI/ML, FaaS, etc.
    • Smart data services – a virtualised, in-memory data services engine that lets you slice & dice the data you need to fuel your microservices
    • Agile integration – distributed, container-based, scalable integration that supports backend and front-end connectivity
  • The core components of your platform are backed by large open source communities like the ones behind Kubernetes, Docker, CRI-O, OVS and Ceph
  • There is a stable, profit-generating vendor supporting your platform that provides enterprise-ready support for production and other important environments
  • Your operating system is certified across all major hybrid and multi-cloud players and can run both your container platform and SAP/SAP HANA.

Another benefit of this architecture is that it allows for an evolution towards a multi-cloud-ready, platform-based operating model, which IDC and others publicly promote as the way forward. The benefits of this operating model and the importance of open source within it are summarised in a short 8-page whitepaper I co-authored with Accenture.

Next Steps

When upgrading or migrating their ERP systems onto the newer SAP HANA version, organisations can be smart about the customisations they need and put them into containers and microservices on an enterprise-ready, multi-cloud container platform.

This then translates into:

  • Cost savings and lower risk via an easier upgrade path for non-customised ERP systems in the future; 
  • Increased revenue and business agility, as well as reduced cost in the backend, all through the support of a DevOps and Microservices empowered delivery platform for both backend and customer-facing products and services.

The no-brainer choice on the SAP HANA Operating System

SAP HANA comes with two supported operating systems: SUSE Linux Enterprise Server and Red Hat Enterprise Linux. But you don’t really have to spend a lot of time making this choice. If you want to transform your business and/or leverage DevOps, containers and a microservices architecture, Red Hat Enterprise Linux is the sensible choice.

Only Red Hat Enterprise Linux provides a hybrid and multi-cloud ready foundation for both an enterprise-ready container platform (OpenShift) and SAP/SAP HANA. So if you want the best return on investment (ROI) on your mandatory SAP upgrade, Red Hat Enterprise Linux is the way to go.

The Value Chain view

Wardley map of the SAP HANA value chain

The value chain point of view is depicted above:

  1. Choosing the right operating system powers both your core systems and your Mode 2 enabling OpenShift platform, and hence reduces operational cost and the need to hire rare and expensive skills
  2. By building upon an open source platform, the innovation of large open source communities automatically evolves your technology landscape
  3. A platform-based operating model allows organisations to innovate, experiment, learn and deliver new products & services faster. A higher learning rate leads to building the right things; DevOps-enabled platforms help build them fast
  4. Combined, the above accelerates organisations towards the delivery of customer value and hence enhanced competitiveness

Organisations often struggle with the move towards digital, and one reason is the new tools and technologies they need to utilise. By choosing the Red Hat Enterprise Linux operating system, organisations can pave the way to a successful digital transformation.

Keen to get your thoughts,
Andreas