There’s been a lot of hype lately around announced hybrid and multi-cloud offerings.
The fact is, you can already do this today with Red Hat OpenShift.
I’ve recorded a short six-minute video which demonstrates hybrid multi-cloud in action with two OpenShift clusters:
one on pSeries (64-bit Little Endian) and
one on AWS x86.
The video demonstrates the creation of a project and a simple application.
Even though the processor architectures are different, the exact same commands are executed. This speaks to the power of a true hybrid multi-cloud strategy, as it saves organisations from re-building their IT organisation for each cloud provider.
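The video uses the `oc` command line, but the same point can be made with any Kubernetes-compatible client. As a hedged illustration only (the kubeconfig context names and image are hypothetical), here is a minimal Python sketch using the official `kubernetes` client that runs identical deployment logic against both clusters:

```python
# Sketch: deploy the same application to two OpenShift/Kubernetes clusters
# regardless of processor architecture. Context names and image are hypothetical;
# the image is assumed to be a multi-arch (x86/ppc64le) build.
from kubernetes import client, config

APP = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-app", namespace="demo"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "hello-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-app"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="hello-app",
                                   image="registry.example.com/hello-app:latest")
            ]),
        ),
    ),
)

# The exact same API calls run against both clusters - ppc64le and x86.
for context in ["ppc64le-onprem", "aws-x86"]:  # hypothetical kubeconfig contexts
    api_client = config.new_client_from_config(context=context)
    client.CoreV1Api(api_client).create_namespace(
        client.V1Namespace(metadata=client.V1ObjectMeta(name="demo")))
    client.AppsV1Api(api_client).create_namespaced_deployment(
        namespace="demo", body=APP)
```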
There are of course other concerns, such as storage layer replication (Ceph), multi-cluster management (MCM) and cloud-provider-independent automation (Ansible), but the main message remains the same: the required enterprise-ready open source tools and technologies are available today to run a hybrid multi-cloud.
There are over 1,000 organisations globally who run OpenShift in production.
If you are only interested in the hard facts on SAP, HANA and why Red Hat Enterprise Linux is the most sensible choice, scroll down to “SAP & The Good News”. Otherwise, keep on reading.
I do like Karate and Boxing. I train weekly and I often get beaten up, but I do go back. It’s hard but rewarding. I also worked on a large-scale Enterprise Resource Planning (ERP) system migration, but I won’t go back and do it again unless I have to.
Below is why and some lessons learned.
A little bit of history
The ERP migration project of doom was done in stages. Stage 1 included General Ledger, Fixed Assets, AR, AP and Cash Management. We created a dedicated integration hub based on Camel/Fuse to sync the financial transactions daily between the old and the new (Oracle E-Business Suite) ERP. We had to hire a full-time employee just to maintain transactional integrity on a daily basis. I won’t go into details with regards to month-end, quarter-end or end-of-FY processing, but you get the idea.
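Our hub was built on Camel/Fuse; purely to illustrate the kind of daily reconciliation logic it had to perform, here is a hedged Python sketch. The endpoints, field names and matching rule are hypothetical, not the actual implementation:

```python
# Illustrative sketch only - the real hub was Camel/Fuse. Endpoints, field
# names and the matching rule are hypothetical.
from datetime import date
import requests

OLD_ERP = "https://old-erp.example.com/api/gl/transactions"
NEW_ERP = "https://new-erp.example.com/api/gl/transactions"

def fetch(url: str, day: date) -> dict:
    """Fetch GL transactions for one day, keyed by document number."""
    rows = requests.get(url, params={"date": day.isoformat()}, timeout=30).json()
    return {row["doc_number"]: row for row in rows}

def reconcile(day: date) -> list:
    """Return the discrepancies a human needs to resolve for this day."""
    old, new = fetch(OLD_ERP, day), fetch(NEW_ERP, day)
    issues = []
    for doc, row in old.items():
        if doc not in new:
            issues.append(("missing_in_new", doc))
        elif row["amount"] != new[doc]["amount"]:
            issues.append(("amount_mismatch", doc))
    issues += [("missing_in_old", doc) for doc in new.keys() - old.keys()]
    return issues

if __name__ == "__main__":
    for issue in reconcile(date.today()):
        print(*issue)  # in our case, a full-time employee worked this list daily
```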
Stage 2 was originally planned to include everything else: all process domains, including Order to Cash (O2C – warehousing, logistics, COGS accounting, deferred COGS etc), Call to Service (onsite and remote support processes across APAC), Implement to Deliver (installation of hardware equipment to deliver annuity revenue), Procure to Pay, Accounting to Reporting, Forecast to Plan, Project Accounting, etc. The scope was BIG. And because they touch every process domain, ERP migrations are often referred to as open heart surgery. Another rather interesting fact was that the ‘leadership’ team had already pre-determined the completion date before the initial analysis was completed (in good old waterfall style).
But hold on, it gets better. Because it soon became pretty clear that the timelines were unrealistic, the scope was cut down, which sounds like a reasonable thing to do. Unfortunately, ERP systems aren’t really designed for integrating with other ERP systems. They prefer to be THE BOSS in da house.
A simple example within your O2C domain could be that your inventory master might not live in the same ERP system that your front-end order portal connects to. This means that you need to put additional integration and business logic in place to make sure your customer expectations are met with regards to stock levels, back orders and delivery timeframes. This then spills over into billing processes, COGS accounting, revenue recognition and reporting. And that’s just a simple order example. I am sure you understand the level of complexity created by the idea of ‘reducing scope’ – the reduction of scope by 50% created an estimated 1,000 times higher complexity. This increases risk and cost, and potentially even nullifies the original business case behind the entire program of work.
A video summary on this is available here: https://www.youtube.com/watch?v=5p8wTOr8AbU
The toll such mismanagement takes on people (and unfortunately mainly on those who care) is tremendous. We had people who didn’t have a weekend off in three months!
Things you might want to consider before you move onto SAP HANA
Now, one ‘thing’ that caused complexity and cost was that all the business logic was hardcoded in the old ERP system; plus, the new ERP system wasn’t a greenfield implementation either, because other regional OPCOs were already on it. The legacy ERP systems integrator (SI) charged us $300 just for estimating the cost of a change request each time we tried to make the migration and integration easier. I see this as an indicator that procurement departments might not be best equipped to help draft business-transformation-related sourcing contracts, but that’s part of a different discussion.
But even without that, having to transport business logic from one system into another is hard, and if you can avoid it, I recommend you do so.
SAP & The Good News!
I attended the SAP FKOM earlier this year. Upon inquiry, I found out that SAP now endorses the “hug the monolith” or “strangle the monolith” pattern. They are the same thing; which one you prefer probably depends on how affectionate you are towards your fellow humans, or whether you’ve got a handle on your anger management issues (which can easily become relevant when you work three months straight without a weekend).
It basically means: “Don’t put customisations into your core SAP system, but rather around it” – for example, into microservices or smaller units of code built on open source technologies such as Java or Python (any technology whose associated skillsets are easy to find in your market is good!) – and use that to drive a DevOps culture. If you use SAP HANA in conjunction with Red Hat OpenShift, then you have enabled what Gartner calls bi-modal IT.
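As a hedged sketch of what “around it, not inside it” can look like: a small Python (Flask) service that adds a custom pricing rule on top of a core SAP OData read, instead of modifying the core. The gateway URL, entity and discount rule are hypothetical:

```python
# Sketch of the "strangle the monolith" idea: customisation lives in a
# microservice next to SAP, not inside it. URL, entity and rule are hypothetical.
import requests
from flask import Flask, jsonify

app = Flask(__name__)
SAP_ODATA = "https://sap-gateway.example.com/sap/opu/odata/sap/ZORDERS_SRV"

@app.route("/orders/<order_id>/quote")
def quote(order_id: str):
    # Read the order from the untouched SAP core via its OData service.
    order = requests.get(f"{SAP_ODATA}/Orders('{order_id}')",
                         params={"$format": "json"}, timeout=10).json()["d"]
    # The customisation: a loyalty discount rule that would otherwise have
    # ended up as an ABAP modification inside the core system.
    total = float(order["NetAmount"])
    discount = 0.05 * total if order["CustomerTier"] == "GOLD" else 0.0
    return jsonify(order_id=order_id, net=total, discount=discount,
                   payable=total - discount)

if __name__ == "__main__":
    app.run(port=8080)
```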
The corresponding architecture would look similar to the one below:
The main capabilities of your platform need to be:
All architectural layers can be hosted on the platform
Experience layer – API, Web/Mobile UI
Business Logic – Business rules, business process management, Java, Python, AI/ML, FaaS etc
Smart data services – a virtualised in-memory data services engine that lets you slice & dice the data you need to fuel your microservices.
Agile integration – distributed, container-based, scalable integration that supports backend and front-end connectivity
The core components of your platform are backed by large open source communities like the ones behind Kubernetes, Docker, CRI-O, OVS, Ceph
There is a stable, profit-generating vendor behind your platform that provides enterprise-ready support for production and other important environments
Your operating system is certified across all major hybrid and multi-cloud players and can run both your container platform and SAP/SAP HANA.
Another benefit of this architecture is that it allows for an evolution towards a multi-cloud-ready, platform-based operating model, which IDC and others publicly promote as the way forward. The benefits of this operating model and the importance of open source as part of it are summarised in a short eight-page whitepaper I co-authored with Accenture.
Next Steps
While upgrading or migrating their ERP systems to the newer SAP HANA version, organisations can be smart about the customisations they need and put them into containers and microservices on an enterprise-ready multi-cloud container platform.
This then translates into:
Cost savings and lower risk via an easier upgrade path for non-customised ERP systems in the future;
Increased revenue and business agility, as well as reduced cost in the backend, all through the support of a DevOps and Microservices empowered delivery platform for both backend and customer-facing products and services.
The no-brainer choice on the SAP HANA Operating System
SAP HANA comes with two supported operating systems: SUSE Linux Enterprise Server and Red Hat Enterprise Linux. But you don’t really have to spend a lot of time making this choice. If you want to transform your business and/or leverage DevOps, containers and a microservices architecture, Red Hat Enterprise Linux is the sensible choice.
Only Red Hat Enterprise Linux builds the hybrid multi-cloud-ready foundation for an enterprise-ready container platform (OpenShift) as well as for SAP and SAP HANA. So if you want the best return on investment (ROI) on your mandatory SAP upgrade, Red Hat Enterprise Linux is the way to go.
The Value Chain view
The value chain point of view is depicted above:
Choosing the right operating system powers both your core systems and your mode-2-enabling OpenShift platform, and hence reduces operational cost and the need to hire rare and expensive skills
By building upon an open source platform, innovation will automatically evolve your technology landscape
The application of a platform-based operating model allows organisations to innovate, experiment, learn and deliver new products & services faster. A higher learning rate leads to building the right things. DevOps-enabled platforms help to build the right things fast.
Combined, the above accelerates organisations towards the delivery of customer value and hence enhanced competitiveness
Organisations often struggle to move towards digital. One reason is the new tools and technologies they need to utilise. By choosing the Red Hat Enterprise Linux operating system, organisations can pave the way to a successful digital transformation.
It is quite astounding to see how many proprietary software vendors have now come out claiming their products are Open Source.
If you walk back down memory lane, some of those very same vendors made fun of Open Source software not too long ago (myself included, but I won’t mention that). It’s OK though; everyone has a right to learn and grow at their own speed.
While it’s quite clear why it’s great to do ‘Open Source’, consumers need to be aware of some nuances of Open Source Software (OSS), as it can impact ROI and speed of innovation, and can even mean lock-in. Plus, it’s where your missing innovation budget might be hiding.
Below are some aspects to consider when choosing your OSS stack and provider:
Project vs Product
Understand the difference between an Open Source Software (OSS) project and a product. A project is driven by the community; it’s the latest and greatest. An OSS provider should take the OSS project and make it enterprise-ready: security-harden it, ensure backwards compatibility, do testing, convert it into a supported OSS product, release bug/security fixes and put enhancements back into the OSS project.
An OSS vendor fully supports an Open Source product, whereas an Open Source project is supported by the community on a best-effort basis. Enterprises usually need a fully supported product and can’t rely on best-effort support.
Enterprise Readiness
Does the Open Source product provider understand ‘Enterprise Readiness’: 24×7 support, extended lifecycle support (ELS), patching, upgrades, updates, a security response team with quick fix-time SLAs, training, mentoring, Professional/Consulting Service offerings, a rich partner ecosystem and certified hardware and software stacks? If any of the above are missing, you need to question why and ascertain whether the OSS provider offers solutions you can rely on to run business-critical functions.
Product components
Are all product components open sourced, or do specific components you require (e.g. connectors, add-ons, plug-ins) need a proprietary license to run? What happens if the license expires? Will the software still be running after a reboot once the license has expired?
Upstream First
Does the open source software product provider give fixes back to the community project? By doing so, the community version stays in alignment with the product. Not doing so is basically forking, and the power of community-driven innovation gets lost as the community will produce a different, incompatible version.
Any vendor who does not contribute enhancements back into the upstream project is not open source. Open Source is about give and take; not giving back to the community is just taking.
Stability vs Latest Features
Does the Open Source Software provider guarantee backwards compatibility and migration paths, or is there a risk of being forced to upgrade and re-engineer your business applications because the latest feature release is not backwards compatible? From a business perspective you want to be in charge of deciding when to upgrade, and you should prefer an OSS provider who understands why stability is more important than the latest hype or feature.
Support life-cycle
How long is the OSS provider’s support lifecycle – 1 year, 3 years, 10 years? You don’t want to be forced to upgrade every one, two or three years because bug fixes are only available in the latest version.
Community viability
In this instance, size does matter. How large is the community behind an OSS project? Is the community perhaps just the vendor’s staff, or is there a real, non-vendor-funded community driving the project’s innovations? How active is the community? What’s the commit frequency? How mature is the community behind the project?
If there is no real community behind an OSS project, you lose out on true community innovation power, with slower innovation and bug-fix cycles than a competitor who uses a product based on large OSS community projects. This can affect your business innovation speed.
Project direction
Is the project taking on established industry trends and standards such as packaging formats (e.g. Docker), patterns (e.g. EAI), platforms (PaaS/CaaS) or orchestration frameworks (Kubernetes)? This will determine whether your investments in the OSS product pay off in the mid to long term. Remember, innovation happens faster than ever before and is accelerating.
Ecosystem / stack support
Does your OSS provider have other offerings up and down the software stack? Good news if they do, as it allows you to have a strategic partner who understands your business, instead of calling many different vendors/service providers to get a production issue fixed, none of them understanding the impact on your business – or, worse, finger-pointing and saying ‘not our fault’.
Subscriptions vs Licensing
Do you need a license for your OSS product to run, or is the subscription you buy merely for access to the OSS provider’s auxiliary services such as updates, patching, security fixes and support? This is important because requiring a license to run an OSS product is similar to the vendor lock-in you get with a proprietary software vendor. A subscription, on the contrary, does not lock you in, and you are free to move at any time. Longer license terms (e.g. 3 to 5 years) also significantly affect the accounting and potentially the OPEX/CAPEX classification of your software assets, depending on your industry-specific accounting rules.
A subscription is deemed a service, and as such is classified as OPEX. A license, on the other hand, is considered CAPEX. This is an important distinction, since your organisation may find itself under considerable CAPEX pressure, while the ability to reduce your fixed OPEX costs through the strategic deployment of OSS subscriptions could result in significant cost savings.
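To make the distinction concrete, here is a small, purely illustrative calculation (all figures are made up, and real accounting treatment varies by jurisdiction and industry):

```python
# Purely illustrative figures: compare a 3-year upfront license (CAPEX,
# straight-line depreciation plus 20% annual maintenance) with an annual
# subscription (pure OPEX).
license_price = 300_000              # paid up front in year 1 (CAPEX)
maintenance = 0.20 * license_price   # recurring maintenance (OPEX)
subscription = 150_000               # annual subscription (OPEX)
years = 3

depreciation = license_price / years
for year in range(1, years + 1):
    print(f"Year {year}: license model P&L = {depreciation + maintenance:,.0f} "
          f"| subscription model P&L = {subscription:,.0f}")
# The license model also ties up 300,000 of capital in year 1 -
# exactly the CAPEX pressure described above.
```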
This is where your missing budget for innovation can come from.
I hope this helps you select the right partnerships in the Open Source space to make your business thrive in the Digital Economy.
Please let me know if you want different aspects added.
First things first: a TOGAF certification alone does NOT make anyone an Enterprise Architect. There you have it.
Inspired by a recent article on LinkedIn (which seems to have been removed now), I started to think about how we can get back on track and bring clarity into the language around architecture roles. At the end of the day, a role title should help you communicate what you do and what your circle of concern is. This in turn brings clarity and value to the organisation you work for.
Clear communication is key in all our relationships – with our family, kids and in business. Ambiguity and confusion make communication, and hence relationships, more difficult, and can cause stress and waste time and money. In IT we see lots of ambiguity and confusion created, either through marketing material or through people who want fancy job titles. I remember meeting my first ‘Enterprise Architect’ at the W-JAX in Munich in the ’90s. I loved his title, and he absolutely was an expert – at coding in Java 2 Enterprise Edition. His session didn’t talk about business strategy, competitive analysis, market forces, business capability roadmaps or sales & operational planning processes. See where I am coming from?
The ambiguity in IT can be mind-numbing – SOA, Cloud, API and architecture roles are just a few examples. Marketing departments want to create a ‘me too’ in a specific field even when they don’t have an offering, which then creates confusion. An example: back in 2006, AWS started to have ‘Cloud Computing’ conversations – simply utility-based, pay-per-use consumption of computing resources. Then VMware came along wanting to be a ‘me too’, coining the term ‘Private Cloud’ for something that was merely infrastructure virtualisation. And voilà: ambiguity created! The result? Today you need to spend 15 minutes when you talk ‘Cloud Computing’ just to establish the context – IaaS, PaaS, SaaS, private, public, hybrid, or do you really just mean infrastructure virtualisation? And there are plenty of other examples out there.
The author of the above LinkedIn article has done a stellar job, up to the point where he introduces the term ‘Enterprise Solution Architect’ – which is exactly how the confusion started in the first place.
‘Enterprise’ Architects – meaning architects with the word ‘Enterprise’ in their title – are plentiful; TRUE Enterprise Architects are few. I meet many so-called ‘Enterprise’ Architects throughout the year; unfortunately, most of them lack:
Involvement in implementing their company’s strategy
Involvement in strategy workshops
Meetings with executives on a regular basis to discuss strategic initiatives
A business capability or strategic roadmap
Involvement in business transformation programmes – whether actively or as a steering committee member
Understanding of more than one or two business and technical concepts at a 101 or 201 level (e.g. S&OP, Order to Cash, accounting, infrastructure, integration, information modelling, business process modelling or applications) needed to present the necessary architectural options and their ramifications for the business.
Architects who lack the above characteristics simply are NOT Enterprise Architects; they are perhaps Technical Architects working in an ‘Enterprise’ instead. Technical Architects look after applications, data stores or infrastructure, for example.
Developing into an Enterprise Architect requires multiple years of learning all the different aspects of the business and the industry. I don’t see any way a seasoned EA gets her job without being prepared to be the dumbest person in the room – in many meetings, over many months or even years. The resilience, ability and willingness to learn are paramount – cost centres, deferred COGS, SOX compliance, consignment notes, recovery, pick release, ship confirm, POD, service route optimisation, freight cost, sales and operational planning – there’s a lot to learn.
Here is an attempt to categorise Architecture roles graphically:
In summary, the important points in my view are:
Solution Architects are all-rounders and the vital, necessary glue for delivering successful business outcomes
Solution Architects overlap with all other architecture roles. The reason is clear: in order to solution the right business outcomes, you need to understand the enterprise context, strategic roadmaps, industry and technology well enough to have a meaningful conversation with both the business and the technologists in your organisation, with the aim of ensuring successful delivery
There can be other Architecture roles focussed on Information, Security, Data or specific Domains
There is no need to confuse things with the word ‘Enterprise’, as in ‘Enterprise Solution Architect’. All solutions, projects and capabilities automatically live within the enterprise context
Organisations are different, which means the above categorisation is not set in stone but a guide. The scope of each architecture role can be more or less focussed on aspects such as projects, technology, business, strategy etc. There can be many reasons for this – the number of architects or the lifecycle stage of an organisation (see the STARS framework), for example.
Using the above categorisation instead of business card titles helps me better understand where architects sit within the enterprise context, their circle of influence / circle of concern, what topics to prepare for, how best to communicate, and the areas I can learn from my interlocutor (I had to Google that word first :)).
Mobile First was yesterday – well, sort of. It is still true that you don’t need to execute massive enterprise transformation programs or backend system (ERP, CRM, HR, etc) modernisations before you develop your enterprise mobility capability.
However, the learning from early mobility adopters is that the management (not the build and test!) of more and more mobile apps becomes exponentially (not linearly, as expected) more expensive: different technologies, different deployment approaches, etc. In simple terms, the out-of-the-box mobile apps that ship with your shiny HR system and the in-house developed field service mobile app that is seen as a competitive differentiator are just… well… different, and hence need different tender loving care. That’s when a Mobility Platform strategy comes in handy.
Once you have determined that you are best off with a mobility platform (whether due to the Gartner rule of three or any other means, such as a business case), below are some things to consider.
As my colleague Wayne B. once said, every topic needs an iceberg, and here’s the mobility iceberg. It’s by no means complete, but it shows that the actual mobile app is only a tiny component compared to what enterprises need to look out for when deploying and managing mobile applications. And that is true for both consumer-facing and internal employee apps, with or without BYOD or a defined MAM/MDM approach.
Market Research
Have I got a good overview of the market, e.g. through Forrester, Gartner, blogs and forums?
Peer review in other organisations
How do Digital Transformation agents in other organisations address this topic?
Hybrid Infrastructure Cloud / Hosting
Can the platform, including design and run-time elements, be easily moved between on-prem and public infrastructure, and support both deployments (hybrid) at the same time across Dev/Test/UAT/PROD?
How can the number of platform infrastructure nodes be extended or decreased to support scalability requirements? Is it dedicated or multi-tenant?
Licensing / subscriptions
What is the cost structure regarding users / nodes / applications? How does on-boarding of new applications and/or users, or the need for more compute / memory / storage, affect pricing? Are there any user, backend service or app restrictions? What is the definition of a user (mobile end user, developer, tester etc)?
Software/solution Development Lifecycle (SDLC)
How do web-scale/cloud native applications move through the SDLC? Is it code or binary based? Do I have a choice? Can I implement emergency deploy scenarios? Can I use web-hooks to trigger builds? Can I do a Source to Image build?
CI / CD
Does the platform support my CI / CD processes and pipeline? How will it integrate?
API
How powerful is the API? What does it support in terms of build/design and run-time (DevOps)? What DevOps and CI/CD processes does the platform support – are there restrictions?
Mobile App – Target Platforms
Can you develop and manage native, hybrid and web apps? How are web apps hosted – do you need additional servers?
Frameworks
What mobile application development frameworks are supported? Is there a restriction on what can/cannot be supported? How are those frameworks updated, and what’s the frequency of those updates? What happens if new frameworks come out – can those be onboarded easily onto the platform?
MDM – Mobile Device Management
What existing MDM suites are supported? How does integration with MDM solutions work? What is pushed to those MDM solutions – code, binaries?
Distribution
How are mobile applications distributed? Is there a private AppStore, QRCode/URL for downloading apps for testing easily? Are Apple, Google, Microsoft, Blackberry app stores supported? How can the platform integrate with those platforms?
Governance
Is it compatible with our internal project/release processes and the associated project governance and delivery model?
Collaboration & Project Level Isolation
How can project team members collaborate across the necessary dev & test roles spanning UX/UI design, native Android/iOS, hybrid frameworks, web, business logic and backend integration?
How are users, teams, backend services, repositories, code and applications isolated or shared on a per project, app and backend service basis and across applications? How can people & teams collaborate locally and remotely?
Tooling
How can the platform support existing development, test, deployment tools and tool chains? Do I know what the existing or target build/deploy process looks like? Does the platform force any specific tools?
Source Code
How is source code managed (SCM) around the platform? What existing SCM components are currently being used within my organisation?
Backend System Access and Integration
How are existing enterprise services and integrations accessed on a per application, project or user basis?
How does the platform support existing architectural concepts such as microservices, SOA, transactions (ACID/BASE) and APIs?
Bandwidth / Throttling
How does the platform manage bandwidth constraints on mobile networks?
Re-usability
What is the level of re-usability across applications, code, patterns, reference architectures, libraries, corporate repositories and services?
Data backend / Storage
What data backends are supported out of the box? How are non-out of the box data backends supported or integrated?
Business Case
What is my business case timeframe? How is my ROI calculated? What is the TCO over the x years of the timeline? Do I need capital funding, or can I run an OPEX model? Is there license + 20% maintenance, or annual subscription pricing?
Support
Which part of the build/run-time stack is supported by the vendor, i.e. Cloud infrastructure, Operating System, application platform, application run-time? Is the vendor support enterprise ready and 24/7?
Vendor Stability
Is the vendor financially stable? Is it self (profit) or VC funded? What’s the revenue/profit per year? How do financial analysts rate the vendor? How long has the vendor been around? How does the vendor go about R&D and select the next new PaaS features?
References
How many existing customers are there? How many success stories are there?
Application Deployment – different version and version upgrades
How does the platform support different versions of a mobile app and the services it connects to, depending on functionality and compatibility?
Architecture – support and constraints
How does the platform support your reference architectures (e.g. backend service integration, APIs, business logic and front end)? Is my desired deployment architecture (on-prem, multi-AZ, multi-region) supported?
Data Security
What data security (in transit / at rest) is supported and how?
Authentication / Authorisation
What protocols are supported? Is MFA supported? Are API keys supported? Can I register and identify specific devices?
Compliance
How does the platform support the necessary compliance requirements such as SOX, ISO27001, Common Criteria?
Existing skills and change management
How can the platform support re-use of existing skill sets and help minimise the organisational change management component? Does the vendor provide training (on-site, online, classroom) and certifications?
Implementation support
Does the vendor offer implementation services? Does the vendor have a strong partner ecosystem? How expensive and how available are those resources in the marketplace?
Scalability
How does the platform scale in high-demand and off-peak scenarios? What workload density is realistic and supported?
Technology
Are there any specific technologies I would need to train on to use the platform?
Platform run time
How much compute (nodes, memory, CPU, storage) does the platform need based on my load scenarios?
Platform Architecture
What are the architectural components of the platform? Are there any proprietary components that lock me in?
Lately, I find myself involved in interesting conversations around ‘Digital Disruption’. Eventually we will have to drop ‘disruption’, as change, constant learning and clear communication will be the new normal for businesses to survive.
We can change culture by using different tools. Enablers of a culture and mindset of fast change are concepts like DevOps and PaaS. Buzzword bingo aside, for me that means web-scale, cloud-native application architectures, development and deployment process readiness, multiple deliveries per day, CI/CD tooling, tool chains and executive sponsorship to ‘Make it Happen!‘ (which really means shortcutting the organisational change management bureaucracy, politics and internal stakeholder management efforts that are ultimately roadblocks and threats to the survival of a company in the digital age). And that’s exactly what we see in those of our customers who are successfully undertaking Digital Transformation.
From there on it becomes obvious that we want to worry as little as possible about the layers underneath whatever makes up a customer-consumable service/function/feature/application to be developed and operationalised (DevOps). Below is an example of what I call an Enterprise-Ready Container Reference Architecture.
The only reason you want PaaS is to make your life easier. Easier can mean making you faster, more scalable, more reliable and/or delivering higher levels of quality. Nowadays it’s not the big eating the small, it’s the fast eating the slow – for breakfast, lunch, dinner and dessert at the next all-inclusive holiday accommodation you booked through Airbnb (which put the agent you booked through last time out of business). Therefore, what you do not want is to invest in proprietary technology (which increases the risk of lock-in and technical debt) or to deal with technical issues you didn’t have to think about before you went down the DIY PaaS path (for example container security, orchestration, scalability algorithms, sourcing secure container images or container networking).
A PaaS conversation has many different angles to it, hence it’s absolutely vital to see through empty marketing promises, get a comprehensive picture and focus on what’s important to your organisation. A PaaS should be fit for purpose for your business model. Your architecture should not be driven by product features or vendors (search also for opinionated, structured and unstructured PaaS) but by your business needs.
Because there are many offerings out there that call themselves ‘enterprise ready’ even when they are not, I compiled a list of questions to ask when choosing your PaaS.
Feedback is always welcome.
Enjoy,
Andreas
PS: The best way to move forward, I believe, is to define a Minimum Viable Product from front end/API to backend integration and see the entire SDLC in action around your PaaS. Marketing slides, thick strategy papers and multi-month planning cycles are not a focus area within the successful Digital Transformation programmes I have witnessed.
Market Research
Have I got a good overview of the market, e.g. through Forrester, Gartner, blogs and forums?
Peer review in other organisations
How do Digital Transformation agents in other organisations address this topic?
Hybrid Infrastructure Cloud / Hosting
Can the run-time be easily moved between on-prem and public infrastructure, and support both deployments (hybrid) at the same time across Dev/Test/UAT/PROD?
How can the number of platform infrastructure nodes be extended or decreased?
Licensing / subscriptions
What is the cost structure regarding users / nodes / applications? How does on-boarding of new applications and/or users, or the need for more compute / memory / storage, affect pricing?
Software/solution Development Lifecycle (SDLC)
How do web-scale/cloud native applications move through the SDLC? Is it code or binary based? Do I have a choice? Can I implement emergency deploy scenarios? Can I use web-hooks to trigger builds? Can I do a Source to Image build?
CI / CD
Does the platform support my CI / CD processes and pipeline? How will it integrate?
API
How powerful is the API? What does it support in terms of build/design and run-time (DevOps)? What DevOps and CI/CD processes does the platform support – are there restrictions?
Application – Target Platforms
What target platforms does it support natively? How are web applications scaled on port 80/443?
Frameworks
What application development frameworks are supported? Is there a restriction on what can/cannot be supported?
Cloud Management & Monitoring across PaaS, Containers and IaaS
How do you manage your heterogeneous IaaS providers (AWS, on-prem, OpenStack, Google, VMware) and containers through a single pane of glass? Do you have/need a consolidated monitoring solution?
Distribution
How are the applications exposed to the public? Across different geos?
Governance
Is it compatible with our internal project/release processes and the associated project governance and delivery model?
Project Level Isolation
How are users, teams, backend services, repositories, code and applications isolated or shared on a per project basis and across applications? How can people & teams collaborate?
Tooling
How can the platform support existing development, test, deployment tools and tool chains? Do I know what the existing or target build/deploy process looks like?
Source Code
How is source code managed (SCM) around the platform? What existing SCM components are currently being used within my organisation?
Backend System Access and Integration
How are enterprise services accessed on a per application, project or user basis?
How does the platform support existing architectural concepts such as microservices, SOA, BASE, API management and the Enterprise Service Bus?
Bandwidth / Throttling
How does the platform manage bandwidth constraints?
Re-usability
What is the level of re-usability across applications, code, patterns, reference architectures, runtime images, libraries, corporate repositories and services?
Data backend / Storage
What data backends are supported out of the box? How are non-out of the box data backends supported or integrated? What storage options do I have? Is my choice of storage supported? Is storage replication supported? Is storage assigned per application, project, container or platform wide?
Business Case
What is my business case timeframe? How is my ROI calculated? What is the TCO across the entire lifetime? Do I need to capitalise the cost, or can I run an OPEX model – do I have a choice?
Support
Which part of the build/run-time stack is supported by the vendor, i.e. Cloud infrastructure certification, Operating System, container run time, orchestration engine, application platform, application run-time? Is the vendor support enterprise ready and 24/7? What are the vendors response/fix time SLAs?
Vendor Stability
Is the vendor financially stable? Is it self (profit) or VC funded? What’s the revenue/profit per year? How do the financial analysts rate the vendor? How long has the vendor been around? How does the vendor go about R&D and select the next new PaaS features? Is the vendor Enterprise and/or consumer/developer focussed?
References
How many existing customers are there? How many success stories and references are there?
Application Deployment – different version and version upgrades
How does the platform support different versions of an application connecting to different end points depending on functionality? Are blue/green deployments supported? How can I roll back a failed deployment?
What programming languages are supported and how can you add additional languages to the platform?
Architecture – support and constraints
How does the platform support your reference architectures (e.g. backend service integration, APIs, business logic and front end, BASE, SOA) while observing loose coupling and high cohesion? Is my desired deployment architecture (on-prem, multi-AZ, multi-region) supported? How is automatic data replication supported across multiple nodes in different geographies? What storage options do I have?
Security
What data security (in transit / at rest) is supported and how?
What run-time stack security is available from Operating System, platform, to container? Who is patching security issues?
Authentication / Authorisation
What protocols are supported? Is MFA supported?
Compliance
How does the platform support the necessary compliance requirements such as SOX, ISO27001, Common Criteria?
Existing skills and change management
How can the platform support re-use of existing skill sets and help minimise the organisational change management component? Does the vendor provide training (on-site, online, classroom) and certifications? Will I create a proprietary skill set, or is there an Open Source community available to me?
Implementation support
Does the vendor offer implementation services? Does the vendor have a strong partner ecosystem? How expensive and how available are those resources in the marketplace?
Quality of Service – Scalability & Clustering
How does the platform scale in high-demand and off-peak scenarios? What workload density is realistic and supported?
What elements are looking after QoS concerns and how mature and supported are those?
Technology
Are there any specific, non-standard technologies I would need to train on to use the platform?
Platform run time
How much compute (nodes, memory, CPU, storage) does the platform need to run? What is the level of workload consolidation?
Platform Architecture
What are the architectural components of the platform? Are there any proprietary components that lock you in?
Networking
Is the networking architecture flexible, e.g. is Software-Defined-Networking utilised, if so is the implementation proprietary or supported by a large and active community?
Orchestration
How is container/microservices orchestration implemented? Proprietary or standards based?
Applications
Are there vendor-supported and certified container image registries available? What does the update/notification mechanism look like when new images with bug/security fixes become available? Which components of my application stack are supported (business rules, data grid, data virtualisation, application server, API management, mobility) and maintained (security fixes, upgrades, patches) by the vendor?
Just recently, I found myself getting involved in more and more conversations around Enterprise Integration. Integration is nothing ‘old’ or ‘outdated’, I learned. It is still crucial for optimising your business processes, reducing your cost base, staying agile and running a profitable business.
As a matter of fact, newer paradigms like Big Data, Internet of Things, Predictive Analytics, Data/Information Services, Data Virtualisation, Data Firewalls, Hadoop or Microservices actually rely on a good integration architecture to provide business value.
Many conversations seem to be driven religiously by technical/syntactical integration (meaning SOAP vs REST vs messaging). I think this is only the second-best way to approach integration.
Keeping business scenarios in mind often brings clarity to the best way to ‘do’ integration. Thinking holistically, from business event generation upstream, through the business process involved, all the way down to your Big Data lake providing reporting and analytics, is key to making the right architectural decisions.
Here are short videos on Service-Oriented Architecture & Enterprise Integration in general by John Schlesinger:
Courtesy of J. Schlesinger – whom I had the pleasure to meet while working in the banking industry on globally scalable Enterprise Integration architectures, and whom I regard as my guru in the realm of Enterprise Integration – I am publishing his great papers here:
Have we achieved what we set out to achieve with our Enterprise Architecture Repository (EAR)?
Have we created value through the EAR?
Have we evaluated the products according to what we value now as important?
Usage Scenarios
We currently have a team of around 30–40 people using our EAR to model our business processes, and six architects creating architectural artefacts across the TOGAF domains (BDAT).
Business Process Modelling
The process modellers work on our ERP transformation program or on separate business process optimisation projects.
Lessons learned: we are glad that we have an EAR where our business processes have a predefined home. Equally important, however, is rolling out training to new starters and to people who have done process modelling the ‘old’ way. Most important is to have an enterprise-wide process framework in place to fit together the business processes of all the different projects and programs. Without a framework you will only ever end up with point solutions or project-focussed views, with no chance of looking at business processes across the enterprise as a whole.
Human Resource Requirements
Due to the extended usage of our EAR, we now have three EAR administrators instead of the single admin resource we started with. This is of course due to the higher workload, but it also requires that all system administrators and ‘power users’ share the same knowledge about foundational principles such as the meta model, the library structure and how core features like impact analysis work.
Furthermore, other profiles with advanced access rights have to share a common set of guidelines and mapping standards to create EA assets in a consistent way. For example: knowledge about our APQC PCF, our modelling standards, our versioning standards, and the difference between AS-IS and TO-BE.
Access Control vs Roll-Out & Critical Mass
With external consultants coming in, half-knowledge about our EAR has proven dangerous: to satisfy reporting requirements, a consultant introduced new relationships, altering the meta model in an inconsistent way. Over six months later we are still cleaning up those meta-model-inconsistent changes. Giving too much access is probably a cultural aspect which might not happen in other organisations, but in the initial phases you might struggle to get the critical mass together for an enterprise-wide roll-out, or you might not know the administration/access features well enough – so it’s good to be aware and specifically lock down meta-model-altering access rights.
Impact Analysis
The architecture team is often tasked with examining ‘what-if’ scenarios.
For example: what if only a single module goes live instead of the entire solution? Questions like this still cannot be answered through our EAR. Even though you could customise the meta model to help answer them, it would require tremendous discipline to model projects to that level of detail right from the start.
Keeping things consistent
We have a tightly knit architecture team – all the architects sit in the same room – which makes it relatively easy to apply the same methodology and sync across the team. If you don’t have this luxury, it might be good to define tool-specific modelling standards before you roll out your EAR. Making changes to existing artefacts is always harder than doing it right from the start.
Most important product features
Over a year into the deployment of our EAR, the following features have proven most important:
Report generation and information export
Web browser access for everyone in the organisation (using an enterprise license instead of seat licenses). The saying ‘architecture is only alive if you can see it’ is very true. You need to be able to share EA assets and views with everyone. Exporting Visio diagrams or PDFs is not going to work, as you are constantly updating your artefacts – remember, Enterprise Architecture is a living thing. Being able to send out web URLs to specific documents or document versions has proven really useful – no outdated content any more.
Visio user interface – change management and onboarding have been fairly easy, given that almost all modellers had previous experience with Visio
Access Control based on user profiles such as
EPMO Business Analyst – Read Access to all projects across the Enterprise project portfolio, CRUD access to business related artefacts, mostly processes in the TO-BE and AS-IS libraries
Business Architect – same as BA, but across all TOGAF domains
Project Analyst – restricted access to only specific project related folders
Domain Architect – Same Access as Enterprise Architect
Enterprise Architect – Full access, close to System Admin access
System Admin – Full access to user, library, meta model and fundamental repository configuration
Portal User – read only access to all content
Library & folder restructuring – we have now restructured our library folder structure several times to satisfy demand and ease of use across:
AS-IS and TO-BE gap analysis
APQC process framework roll out
Project based access
Avoiding confusion for people with specific project based usage requirements
Creation of project and Enterprise wide views
So…
Would we still have examined the same features during our vendor selection if we had known then what we know today?
Yes and no. Yes, because all the features we looked at are still important. No, because we have since learned which other features have proven useful and what’s really important – additional features would be added, and the weighting would differ across all features.
After several recurring forum conversations, I thought it could be helpful to document the steps we took to procure and establish our Enterprise Architecture Repository (EAR).
Of course this is by no means the only way to find the right EA Repository for you but it worked for us and we are still very happy with our choice.
Context
Probably like many other organisations, we started off creating our EA assets as Microsoft Word, Excel and Visio documents, stored on shared file servers and in our document management system, DocuShare.
The problem with this approach – as you know – is that there is no underlying content meta model which semantically links the architecture artefacts. The consequence is that analysis needs to be done manually. You can write macros in Visio, Word & Excel, but I don’t think that is time and effort well spent for an Enterprise Architect.
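To make the difference concrete: once artefacts and their relationships live in a semantic meta model, impact analysis becomes a query instead of manual archaeology. A toy Python sketch (the artefact names are made up):

```python
# Toy illustration: with a content meta model, "what breaks if X changes?"
# becomes a graph query instead of manual Visio archaeology. Names are made up.
import networkx as nx

ear = nx.DiGraph()  # edge A -> B reads "A depends on B"
ear.add_edge("Order-to-Cash process", "CRM application")
ear.add_edge("Order-to-Cash process", "ERP application")
ear.add_edge("CRM application", "Customer master data")
ear.add_edge("ERP application", "Customer master data")
ear.add_edge("ERP application", "Oracle DB server")

def impacted_by(artefact: str) -> set:
    """Everything that (transitively) depends on the given artefact."""
    return nx.ancestors(ear, artefact)

print(impacted_by("Customer master data"))
# -> {'CRM application', 'ERP application', 'Order-to-Cash process'}
```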
The Team
To get a broader view of requirements across the business I assembled a team comprising the following:
2 Enterprise Architects
2 Solution Architects
2 SOX Compliance Officers
1 National Quality Assurance Officer
Due to many competing projects and activities, and with only one Enterprise Architect being the ‘business owner’ of the EA repository, we ran into several resource scheduling conflicts. As you can only score objectively if you sit through the presentations and demos of all vendors, that was a challenge.
Fortunately, one of the Solution Architects and the National QA Officer were really dedicated, so we ended up with three different scores we could average. I recommend also involving an IT Operations representative, so that the requirements of the Application Portfolio Management component are represented, if that’s a use case for the EAR within your organisation.
The Process
You won’t get it 100% right. A year down the track, we are using the EAR in ways we didn’t think of, but that’s only a good thing as we are reaping rewards beyond what we had envisioned.
After high level requirements gathering, the process we followed was:
Market Research & Product Shortlisting
Requirements lock-down & Weighting
Product Demonstration & Scoring
Market Research & Product Shortlisting
The company had an ‘all you can eat’ arrangement with Gartner and Forrester Research, which made it easy to do quick market research. We also talked to fellow Enterprise Architects and opted to include one product which wasn’t in the Gartner Magic Quadrant.
Gartner and Forrester have quite a comprehensive selection of papers on this topic. The documents we found most helpful were:
Gartner: Understanding the Eight Critical Capabilities of Enterprise Architecture Tools
Gartner: Select EA Tools Use Cases Are Not Optional
Gartner: Magic Quadrant for Enterprise Architecture
After reading through the documents, I scheduled a call with a Gartner analyst on the topic to clarify my understanding. I asked specifically why a tool like Orbus iServer is not mentioned in the Magic Quadrant paper, as it had been recommended to us by other Enterprise Architects, and we knew that Cathay Pacific was using it and was happy with it.
I learned that the Magic Quadrant selection process also includes things like disclosing the product roadmap to Gartner, Gartner-specific certifications and customer references. Not all of those had been satisfied by Orbus (trading as Seattle Software), hence it didn’t make it into the Magic Quadrant. For us, that was not a strong enough reason not to look at the product, especially as it came with strong recommendations and was fully compatible with our existing EA assets, which had been created with the Microsoft Office Suite.
The Magic Quadrant at the time of our evaluation looked as per the screenshot below. I recommend getting the latest report from Gartner if you’d like the current view.
The Product Shortlist
After a first high level evaluation of the products in the market, research papers and recommendations we shortlisted the following products (vendors):
ARIS (Software AG)
Abacus (Avolution)
iServer (Orbus)
At first, alfabet was not on the shortlist; Software AG had just acquired this product through the acquisition of planningIT. The Software AG technical representative offered an introduction and demonstration at short notice which fitted our schedule, hence we agreed to have a look at it as well. After the demo it was clear that this product is not what we were looking for in an EA repository, due to the rigidity of its prescribed process and the absence of a content meta model. I also downloaded iteratec’s iteraplan for a quick evaluation but found the tool not very user-friendly.
Requirements Lock Down & Weighting
The evaluation group defined the evaluation criteria categories and weighting as follows:
| ID | Description | AVG Weight |
| --- | --- | --- |
| 1 | Repository & content meta model – capabilities & fit | 8.8 |
| 2 | Modelling – support for business process and EA modelling | 9.4 |
| 3 | Gap analysis & impact analysis – ease of use, capabilities | 8.4 |
| 4 | Presentation – automatic generation & capability | 7.2 |
| 5 | Administration – system and user access administration | 6 |
| 6 | Configurability – usage, access rights, output (not including content meta model) | 6.8 |
| 7 | Frameworks & standards support – e.g. TOGAF, eTOM, BPMN, reference architectures | 6.6 |
| 8 | Usability – intuitiveness of UI and administration | 8.4 |
| 9 | Adoption/change management – effort to roll out and adopt | 9 |
| 10 | Fit for purpose (use case evaluation, risk, compliance, business requirements, customer centricity) | 9 |
| 11 | Extensibility / integration ability with other systems | 7.4 |
| 12 | Vendor interactions – responsiveness, quality, detail, professionalism, support; Gartner research results & recommendations for suitability | 4.6 |
The weight semantics were defined as: 0 – Irrelevant; 1 – Insignificant; 2 – Fairly unimportant; 3 – Somewhat unimportant; 4 – Nice to have (e.g. ease of use); 5 – Nice to have (increased productivity/efficiency); 6 – Somewhat important; 7 – Important; 8 – Fairly important; 9 – Very important (represents key requirements); 10 – Critical/extremely important (failure to satisfy requirements in this category will cause elimination of product)
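The scoring mechanics then boil down to a weighted sum per product. Here is a hedged sketch using the category weights from the table above with made-up scores (not our actual per-requirement scoring, which produced the larger totals reported further below):

```python
# Sketch of weighted-scoring mechanics. Category weights come from the table
# above; the per-product category scores (0-10) below are made up.
weights = {1: 8.8, 2: 9.4, 3: 8.4, 4: 7.2, 5: 6.0, 6: 6.8,
           7: 6.6, 8: 8.4, 9: 9.0, 10: 9.0, 11: 7.4, 12: 4.6}

scores = {  # hypothetical averages across the three evaluators
    "Product A": {1: 6, 2: 5, 3: 6, 4: 7, 5: 6, 6: 5,
                  7: 6, 8: 5, 9: 5, 10: 6, 11: 5, 12: 6},
    "Product B": {1: 8, 2: 9, 3: 7, 4: 8, 5: 8, 6: 8,
                  7: 7, 8: 9, 9: 9, 10: 8, 11: 7, 12: 8},
}

for product, s in scores.items():
    total = sum(weights[cat] * s[cat] for cat in weights)
    print(f"{product}: {total:.1f}")
```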
Our Requirements
| ID | Category | Description |
| --- | --- | --- |
| 1 | 10 | Repository must be shared and accessible by all EA practitioners, Solution Architects, Business Analysts and business stakeholders |
| 2 | 1 | Must allow for customised meta models |
| 3 | 10 | Existing assets (.ip process files & Visio diagrams) need to be converted and linked into the meta model |
| 4 | 10 | Built-in version control |
| 5 | 11 | Integration/linkage with requirement system |
| 6 | 11 | Integration/linkage with other systems: wiki, DocuShare, file folders |
| 7 | 8 | Must be able to deal with a large number of artefacts (10,000+) & performance tuning options |
| 8 | 2 | Must be able to understand & analyse complex relationships between artefacts (on-top links, semantics of a relationship, 1:n, m:n) |
| 9 | 2 | Support scenario (what-if) planning & scenario modelling |
| 10 | 4 | Support multiple/different stakeholder viewpoints & presentations |
| 11 | 2 | Facilitate implementation of business strategy, business outcomes and risk mitigation |
| 12 | 2 | Repository supports business, information, technology, solution and security viewpoints and their relationships. The repository must also support the enterprise context composed of environmental trends, business strategies and goals, and future-state architecture definition. |
| 13 | 2 | Modelling capabilities which support all architecture viewpoints (business processes (BA), solution architecture (SA)) |
| 14 | 3 | Decision analysis capabilities, such as gap analysis, impact analysis, scenario planning and systems thinking |
| 15 | 4 | Presentation capabilities which are visual and/or interactive to meet the demands of a myriad of stakeholders; presentations can be created via button click |
| 16 | 5 | Administration capabilities which enable security (access/read/write), user management and modelling tasks |
| 17 | 6 | Configurability capabilities that are extensive, simple and straightforward to accomplish, while supporting multiple environments |
| 18 | 7 | Support for the most commonly used frameworks (TOGAF, COBIT, eTOM), while providing the flexibility to modify the framework |
| 19 | 8 | Usability, including intuitive, flexible and easy-to-learn user interfaces |
| 20 | 2 | Draft mode before publishing edited and new artefacts |
| 21 | 1 | Supports linking the Business Motivation Model ((Means) Mission, Strategy, Tactics >>> (Ends) Vision, Goals, Objectives) |
| 22 | 2 | Needs to support multiple TOGAF iterations (Architecture Capability, Development (AS-IS, TO-BE, Gap), Transition, Governance) |
| 23 | 2 | Support for multiple notations (ArchiMate, UML) connecting semantics to the same content meta model |
| 24 | 10 | Repository search and browse capability for the entire organisation |
| 25 | 3 | Creation of roadmaps |
| 26 | 3 | AS-IS, Transition & TO-BE state-based gap analysis across processes, information, technology, business reference models, application architectures and capabilities |
| 27 | 10 | Reverse-engineering/introspection capabilities for Oracle eBusiness Suite/ERP |
| 28 | 6 | Ease of editability of meta model relationships |
| 29 | 2 | Support for linking Strategic, Segment & Capability architectures across architecture models, processes and roadmaps |
| 30 | 6 | Ease of editability of meta model objects & attributes |
| 31 | 3 | Strategic, Segment & Capability architectures need to be referenceable across all models, building blocks and projects |
| 32 | 8 | Lock-down/freezes on changes |
| 33 | 8 | Role-based edit/view/read/write access restrictions |
| 34 | 5 | Administration & configuration training will be delivered by the vendor |
| 35 | 10 | Price within budget |
| 36 | 3 | Supports “is-aligned-with-roadmap” analysis via button click |
| 37 | 7 | Supports library concepts (lock down/freeze) for reference models, reference architectures, Architecture/Solution Building Blocks |
| 38 | 9 | Vendor has proven capabilities/support for the change management effort associated with the roll-out of an EA tool/repository |
| – | – | Integration with Portfolio Management tools and Project Management tools |
| 43 | 12 | Alignment of what FXA needs with Gartner analysis |
| 44 | 12 | Provides technical/customer support during Australian business hours |
| 45 | 12 | Vendor pays attention to FXA requirements and business environment and builds demos, questions & dialogues with FXA around them |
| 46 | 13 | Must have different role-based user access levels for read, write, administration and public (browse) for different types of assets |
| 47 | 13 | Must not allow users to sign up for an inappropriate level of access without permission |
| 48 | 13 | Writes access logs for successful and failed logons, and user profile/role change logs |
| 49 | 10 | Supports the modelling, documentation, query & analysis of customer touchpoints |
The Result
Once we finally received a quote, we realised it was beyond our budget, so we had to remove ARIS from the shortlist.
After use case demonstrations from the remaining vendors, the evaluation team scored independently and came up with the following totals: Abacus 2,399 vs. Orbus iServer 3,582.2.
This concluded the evaluation and made Orbus iServer a very clear choice for us.
Next Steps to Consider
Decide on a content meta model (TOGAF vs. ArchiMate)
Repository structure & library setup to support automated roadmaps, gap analysis and projects
Import the application catalogue (applications & interfaces, live date & status) (AS-IS)
Import existing EA assets (AS-IS and TO-BE): processes, Business Functional Reference Model, data models
Things to be aware of – Before you jump
Resourcing: You will need people to administer, maintain and continuously update your Enterprise Architecture repository. Whenever a large change that impacts your EAR is coming, be aware that this can be a full-time job for a while.
Licensing: Make sure your business case caters for growth and additional licenses. In the case of iServer you need both Visio and iServer seat licenses.
Training: Ensure you have a team you can work with to roll out training, especially across different domains: business (BPMN, process modelling guidelines), meta model extensions (e.g. interfaces, RICEFW) and the correlating relationships.
Publish guides and reference material (we found a wiki most useful!)
Standards & reference models: You will have to spend time and effort to define your own standards (e.g. a subset of BPMN 2.0 or the APQC PCF)
Organisations know that Data Quality (DQ) is a key foundation for a successful business. Business Intelligence, Reporting, Analytics, Data Warehouses and Master Data Management are pretty much wasted effort if you cannot trust your data. To make matters worse, systems integration efforts could lead to ‘bad’ data spreading like a virus through your Enterprise Service Bus (ESB) across all systems, even into those which had fairly good data quality initially.
This article discusses the architectural concept of a Data Quality Firewall (DQF), which allows organisations to enforce their data quality standards on data in flight, anywhere: up-stream, in-stream and down-stream.
Data Quality Lifecycle
When data enters an organisation it is sometimes referred to as ‘at the beginning’ of the data quality lifecycle. Data can enter through all sorts of different means, e.g. emails, online portals, phone, paper forms or automated B2B processes.
Up-stream refers to new data entering, whereas in-stream means data being transferred between different systems (e.g. through your ESB or hub). Down-stream systems are usually data stores that already contain potentially unclean data, like repositories, databases or existing CRM/ERP systems.
Some people regard a Data Warehouse to be at the end of the data quality lifecycle, meaning no further data quality work is necessary because all the logic and rules have already been applied up-, down- or in-stream.
However, when you start your DQ initiative you need to get a view of your data quality across all your systems, including your data warehouse. You achieve this through profiling. Some software vendors offer 'free' initial profiling as a conversation starter, which may be a worthwhile first step to get your DQ indicators.
Data Quality Rules and Logic
Profiling vital systems allows you to extract data quality rules which you can implement centrally, so that the same rules and standards can be re-used enterprise-wide (or domain-wide). Profiling also equips you with data quality indicators, showing you how good your data really is on a per-system basis.
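As a minimal sketch of what a first profiling pass can look like, the snippet below computes a null rate per column as a simple data quality indicator. The connection string, the CUSTOMERS table and the column names are assumptions chosen purely for illustration:

import java.sql.*;

public class NullRateProfiler {
    public static void main(String[] args) throws SQLException {
        String[] columns = {"EMAIL", "POSTCODE", "PHONE"}; // hypothetical columns to profile
        try (Connection con = DriverManager.getConnection("jdbc:oracle:thin:@crm-db:1521/CRM");
             Statement st = con.createStatement()) {
            for (String col : columns) {
                // COUNT(*) counts all rows, COUNT(col) skips NULLs
                ResultSet rs = st.executeQuery(
                    "SELECT COUNT(*), COUNT(" + col + ") FROM CUSTOMERS");
                rs.next();
                long total = rs.getLong(1);
                long populated = rs.getLong(2);
                System.out.printf("%s: %.1f%% populated%n",
                    col, total == 0 ? 0.0 : 100.0 * populated / total);
            }
        }
    }
}

Real profiling tools of course go much further (patterns, ranges, referential integrity), but even null rates per system are a useful conversation starter.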
Understanding your business processes and looking at the data quality indicators enables you to associate a $-value with your bad data. From this point onwards it is probably very easy to pick and choose which systems/repositories to focus on (unless your organisation is undergoing a major strategic revamp, in which case you need to consider the target Enterprise Architecture as well).
Another recurring question is when and where to control the quality of the data. In the early days we implemented data quality through input field verification, spell checks, confirmation responses and message standards (e.g. EDIFACT). Organisations then found themselves duplicating the same rules in different places: front end, middleware and back end. Then field length changes came along (a la Year 2000, entering global markets, or mergers and acquisitions) and you had to start all over again.
At the last APAC Gartner conference in Sydney I heard people suggest that data quality rules only need to be applied at the warehouse. I personally think this can be dangerous and needs to be evaluated carefully. If no systems other than the warehouse store data, this might make sense. In any other case it means that you cannot trust the data outside the warehouse.
Zooming In – The Data Quality Firewall
A DQ firewall is a centrally deployed system which offers domain- or enterprise-wide data quality functionality. Its job is to perform cleansing, normalisation and standardisation (and possibly match and merge, if it is part of an MDM solution).
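To make "standardisation" concrete, here is a toy example of the kind of rule such an engine might hold centrally. The alias map and the ISO-style country codes are illustrative only, not a real rule set:

import java.util.Map;

public class CountryStandardiser {
    // Hypothetical aliases: free-text country values mapped to ISO-style codes
    private static final Map<String, String> ALIASES = Map.of(
        "australia", "AU",
        "aus", "AU",
        "new zealand", "NZ",
        "nz", "NZ");

    public static String standardise(String raw) {
        if (raw == null) return null;
        String key = raw.trim().toLowerCase();
        // Fall back to the original value so unknown input is never silently lost
        return ALIASES.getOrDefault(key, raw.trim());
    }
}

Because the rule lives in one place, a new alias or a changed code is a single change, not one change per system.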
In an Event Driven Architecture (EDA) all messages are simply routed through the data quality rules engine first. The DQ firewall is the (possibly only) subscriber to all originating messages from the core systems (#1 in the image). Subsequently, all other interested systems subscribe to the messages emitted by the DQ firewall, which means they receive messages with quality data (#2).
The diagram shows the Core Systems as both publishers and subscribers, emitting an event message (e.g. CustomerAddress) which is picked up by the Semantic Hub. The Semantic Hub transforms it into the appropriate message format and routes it to the Data Quality Firewall. The DQF then does its job and re-emits the CustomerAddress message with cleansed data, which is then routed to the subscribing systems via the Semantic Hub.
Subscribing systems can be other systems, as well as the system that originally published the data.
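Sketched as an Apache Camel route, the DQF side of this flow might look like the snippet below. The endpoint URIs, the DQF.in queue and the DataQualityService bean are assumptions for illustration, not the actual implementation:

import org.apache.camel.builder.RouteBuilder;

public class DqfRoute extends RouteBuilder {

    // Placeholder rules engine; in reality this would invoke the central DQ rule set
    public static class DataQualityService {
        public String cleanse(String xml) {
            // cleansing, normalisation and standardisation would happen here;
            // the DQF marks itself as the originator of the re-emitted message
            return xml.replace("<Originator>SystemA</Originator>",
                               "<Originator>DQF</Originator>");
        }
    }

    @Override
    public void configure() {
        // #1: the DQF consumes the raw messages the Semantic Hub routes to it
        from("activemq:queue:DQF.in")
            .bean(DataQualityService.class, "cleanse")
            // #2: re-emit via the hub so subscribers receive quality data
            .to("activemq:topic:CustomerAddress");
    }
}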
In an SOA scenario the architecture is similar, using service calls to the appropriate service offered by the Data Quality engine. Care needs to be taken if the DQ service is required to partake in transactions (consider time-outs, system availability, scalability, etc.).
Your New Data Quality Ready Target Architecture?
Benefits of a centrally deployed Data Quality engine include re-usability of rules, ease of maintenance, natural consolidation of rules, quick response to change, pervasiveness of change, and the assignment of ownership and responsibility to Data Stewards (who owns which rules and the associated data entities).
Things to consider are a feedback mechanism to the originating system (in case of bad data), as it might affect current system design, and the introduction of a Data Quality/Issue Tracker portal which allows Data Stewards to intervene in cases where cleansing cannot be done automatically.
The overhead of distributed approaches, such as duplicating input field validation across multiple systems, makes a central Data Quality Firewall architecture far more enterprise ready: it delivers more long-term benefits, is cheaper to set up and maintain, and offers a better ROI.
The Small Bang Approach
The beauty of the EDA approach is that you can easily route messages through the Data Quality Firewall on a system-by-system or message-by-message basis. You simply change the routing of the messages in the routing table of the Semantic Hub.
Below is an example for the message type 'CustomerAddress' emitted by SystemA: SystemB and SystemC subscribe to CustomerAddress messages emitted from SystemA.
Message Type | Content Based Routing | Subscribers
CustomerAddress | XPath(/Originator)='SystemA' | B, C
Account | XPath(/Originator)='SystemC' | A
To enable the Data Quality Firewall functionality we change the subscriber to the DQF. From then on, all CustomerAddress messages from SystemA are delivered to the DQF. After the DQF has applied the data quality rules, SystemB and SystemC receive clean and trustworthy data. The Account data remains unchanged in this example.
Message Type | Content Based Routing | Subscribers
CustomerAddress | XPath(/Originator)='SystemA' | DataQualityFirewall (DQF)
CustomerAddress | XPath(/Originator)='DQF' | B, C, A
Account | XPath(/Originator)='SystemC' | A
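For illustration, the routing table above could translate into a content-based router roughly like the Camel 2.x sketch below. The XML layout of the Originator element and the endpoint URIs are assumptions, not the actual hub configuration:

import org.apache.camel.builder.RouteBuilder;
import static org.apache.camel.builder.xml.XPathBuilder.xpath;

public class SemanticHubRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // CustomerAddress: raw messages from SystemA now go to the DQF first;
        // cleansed messages from the DQF fan out to B, C and back to A
        from("activemq:topic:CustomerAddress")
            .choice()
                .when(xpath("/CustomerAddress/Originator = 'SystemA'"))
                    .to("activemq:queue:DQF.in")
                .when(xpath("/CustomerAddress/Originator = 'DQF'"))
                    .to("activemq:queue:SystemB.in",
                        "activemq:queue:SystemC.in",
                        "activemq:queue:SystemA.in");

        // Account: routing unchanged in this example
        from("activemq:topic:Account")
            .choice()
                .when(xpath("/Account/Originator = 'SystemC'"))
                    .to("activemq:queue:SystemA.in");
    }
}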
A possible next step could then be to quality control the Account message data as well. This approach allows you to consolidate your data quality step by step across your entire organisation.