Progressive Transformation – A Reference Architecture

IDC defines Progressive Transformation as the modernisation of systems through the gradual replacement of technology, driven by a prioritised catalogue of business functionality. It leverages open APIs, agile development and cloud architectures.

I think IDC’s definition is a good start, but it needs to go further. It needs to extend into capabilities, not just functionality. Successful transformation considers the evolution of people (incl. skills & culture) as well as the process dimension. Let me share what I’ve witnessed over the past few years across different industries.

This reference architecture brings together several of these concepts.

Context – Why does the world need this?

Over the last four years, working across different industries in the ‘digital’ space, I have witnessed the necessity to compete at speed while retaining a high level of quality. The learnings and observations are compiled into this reference architecture. It addresses the need for incumbent organisations to step up as nimble digital disrupters enter their industries, aiming to compete for specific products or services in the incumbent’s value chain. Being small, nimble, fast and unburdened by slow-to-change legacy environments, those disrupters are slowly but surely eroding the customer value proposition and share of wallet of existing players.
This reference architecture helps established industry players to compete against new nimble entrants.

Intro – A Historic View

‘Digitally Infused Business Transformation’ has been going on for quite a while. Progressive means continual and ongoing, not big bang. Interestingly, 10 to 15 years ago we weren’t really talking about ‘Digital Transformation’ that much – just projects really – but whenever I see technology used to alter (and hopefully improve) the customer or employee experience – meaning it changes people, process AND technology – I deem it Digital Transformation.

What has changed the most, though, is
a) of course the technologies used, but also
b) the expected outcomes in terms of user experience and speed of delivery of new features.
This matters because it directly alters solution architectures and designs and changes the way delivery teams work. Fewer and fewer people are willing to wait months for an important new feature.

A quick history detour

Around the year 2000 we developed web portals in ASP/PHP with a database running on clustered servers, and business applications were written in J2EE using EJBs/JPA and Hibernate. In 2005, I developed mobile applications using the .NET Compact Framework. Even though high cohesion and loose coupling have always been design principles, back then we didn’t really talk about APIs, distributed integration, and microservices. Gantt charts and project plans were a must. The Agile Manifesto was born in 2001. Domain-Driven Design came to life around 2003 and Linux Containers around 2008. In 2004, my team implemented ‘daily build and smoke tests’ for the first time, which I consider the predecessor of today’s CI/CD.
All those individual evolutions, combined with the recognition of organisational culture as a key enabler of high-performing organisations, have created a perfect storm which today drives Digital Transformation.

Enough reminiscing.

Conceptual Reference Architecture

I am now introducing a conceptual reference architecture to enable IDC’s Progressive Transformation which my colleagues and I have applied and used as a starting point within successful customer engagements. This reference architecture has even found its way into strategy documents in our client base.

Before I start, I’d like to distinguish the different architectural layers. From top to bottom these are:

  • Contextual Architecture
    Describes the wider context of a system/solution for example an industry, geography or regulatory requirements
  • Conceptual Architecture
    Captures specific architectural concepts that establish the guardrails for the logical and physical architecture as well as the lower level designs
  • Logical Architecture
    Details further logical components of the conceptual architecture, for example internal and external APIs
  • Physical Architecture
    Maps the logical components to the physical infrastructure components, for example the components are running on multiple Kubernetes clusters or servers in different regions.
[Figure] Reference Architecture for Progressive Transformation

I leave Day 2 operational concerns, Agile DevOps teams and Site Reliability Engineering for core platforms and systems out of this post, but they are nevertheless important. That’s essentially what the yellow box on the right of the diagram is for.

The 3 red bars are part of the fast-cadence, fast-rate-of-change paradigm, where Agile- and DevOps-enabled teams work with customers to drive the desired features and functions through fast feedback loops. Those feature teams are responsible for developing as well as running those services (DevOps). There is no throwing over the fence on Friday afternoon. Organisations often prefer a microservices architecture, although it’s not mandatory.

The Mode 2 architecture layers

  • Experience Layer
    APIs that define the service endpoints across all channels, e.g. for web or mobile applications. A backend-for-frontend (BFF) shim would be part of such a layer.
  • Business Logic Layer
    The business logic layer encapsulates organisational business logic. This can be either straight code in Java/PHP/C++/.NET applications or business process and business rules engines.
  • The ‘Smart’ Data Services Layer
    This one I call smart because there needs to be quite a bit of thinking behind how you engineer this layer. Macquarie Bank, for example, has put a Cassandra database into this layer. That approach often leads to physical data replication and additional data synchronisation effort. That’s why the reference architecture recommends a virtualised data layer backed by a clustered in-memory data grid for speed and de-duplication of physical data. This is mostly for READ operations, while transactions are still processed through the backends. A Kafka-backed event stream can ensure the backends are not overloaded.
  • Semantic integration between those layers is handled through distributed, lightweight, containerised integrations instead of a monolithic ESB appliance. Service dependencies are handled through a Service Mesh such as Istio.
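To make the experience layer more concrete, here is a minimal sketch of the BFF shim idea in Python. All function and field names are hypothetical assumptions for illustration; in practice the downstream calls would be HTTP requests to the business logic and data services layers, and the aggregate would be exposed through a web framework.

```python
# Hypothetical backend-for-frontend (BFF) sketch: aggregate two
# downstream services into one channel-specific response shape.

def fetch_customer(customer_id):
    # Stand-in for a call to a customer service in the business logic layer.
    return {"id": customer_id, "name": "Jane Doe", "segment": "retail"}

def fetch_accounts(customer_id):
    # Stand-in for a call to an account service in the data services layer.
    return [{"iban": "XX00 1234", "balance_cents": 105000}]

def mobile_dashboard(customer_id):
    """Shape the aggregate for the mobile channel: only the fields the
    mobile app renders, with balances pre-formatted for display."""
    customer = fetch_customer(customer_id)
    accounts = fetch_accounts(customer_id)
    return {
        "greeting": f"Hello {customer['name'].split()[0]}",
        "accounts": [
            {"iban": a["iban"], "balance": f"{a['balance_cents'] / 100:.2f}"}
            for a in accounts
        ],
    }
```

The point of the shim is that each channel (web, mobile, partner API) gets its own thin aggregation layer, so the underlying services stay channel-agnostic.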

Most important is that those different architectural components are hosted on a platform which abstracts low-level concerns such as storage, networking, compute and the O/S away, so that feature teams can focus on customer requirements with a starting point as close as possible to the customer, i.e. as high up the value chain as possible.
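The READ/WRITE split described for the ‘smart’ data services layer can be sketched as a read-through cache plus an event queue standing in for a Kafka-backed stream. This is an illustrative assumption of one way to wire it, not a prescribed implementation; the class and member names are invented for this sketch.

```python
import queue

class SmartDataLayer:
    """Read-through cache sketch: reads are served from an in-memory
    store (standing in for a clustered data grid), misses fall through
    to the system of record; writes are emitted as events so the
    backend is not hit synchronously."""

    def __init__(self, system_of_record):
        self._backend = system_of_record   # e.g. a Mode 1 core system
        self._cache = {}                   # stand-in for the data grid
        self.events = queue.Queue()        # stand-in for a Kafka topic

    def read(self, key):
        # READ path: serve from the grid, lazily load on a cache miss.
        if key not in self._cache:
            self._cache[key] = self._backend[key]
        return self._cache[key]

    def write(self, key, value):
        # WRITE path: update the grid and emit an event for the backend
        # to consume asynchronously, smoothing load on the core system.
        self._cache[key] = value
        self.events.put((key, value))
```

In a real deployment the grid would be a product such as an in-memory data grid behind a data virtualisation layer, and the queue a Kafka topic consumed by the transactional backends.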

Mode 1

Underneath we have the Mode 1 layer. This is made up of mostly existing monolithic middleware components and core systems which feed the Mode 2 layer above. That said, it can also be your Salesforce CRM system that contains important data that service/product teams need to draw upon. Those systems are generally maintained by traditional ops teams, and upgrades and migrations are often executed in a traditional waterfall fashion. Both systems and ops teams are commissioned to keep the lights on, not for speed of change. Business-critical information assets need to be backed up and restored, and batch processes and maintenance routines need to run (the process dimension). This Mode 1 paradigm is also important so that not everyone has to change the way they work at once. Agile is less frequent in those teams, although in progressive organisations I see Mode 1 teams generate upward pressure on their managers, as they too want to use new tools, technologies and ways of working. This is where automation and a road map towards site reliability engineering (SRE) can become important to keep growth-mindset staff engaged and progressing.

To summarise, those concepts marry modern application design (microservices, Domain-Driven Design) and modern ways of working (DevOps & Agile) with existing legacy systems and Mode 1 operations. This combination allows incumbent industry players to compete with digital disrupters in their own or even adjacent industries.

Keen to get your thoughts,
Andreas