Living with your Enterprise Architecture Repository – A Recap

This article examines questions such as:

  • Have we achieved what we set out to achieve with our Enterprise Architecture Repository (EAR)?
  • Have we created value through the EAR?
  • Have we evaluated the products against what we now consider important?

Usage Scenarios

We currently have a team of around 30–40 people using our EAR to model our business processes, and six architects creating architectural artefacts across the TOGAF domains (BDAT).

Business Process Modelling

The process modellers work on our ERP transformation program or on separate business process optimisation projects.
Lessons Learned: We are glad that we have an EAR where our business processes have a predefined home. Equally important, however, is rolling out training to new starters and to people who have done process modelling the ‘old’ way. Most important is having an enterprise-wide process framework in place to fit all the different project- and programme-related business processes together. Without a framework you will only ever end up with point solutions or project-focused views, with no chance of looking at business processes across the enterprise as a whole.

Human Resource Requirements

Due to the extended usage of our EAR we now have three EAR Administrators instead of the single admin resource we started with. This is of course due to the higher workload, but it also requires that all System Administrators and ‘Power Users’ share the same knowledge of foundational principles such as the meta-model, the library structure and how core features like impact analysis work.
Furthermore, other profiles with advanced access rights have to share a common set of guidelines and mapping standards so that EA assets are created consistently. For example: knowledge of our APQC PCF, our modelling standards, our versioning standards, and the difference between AS-IS and TO-BE.

Access Control vs. Roll-Out & Critical Mass

With external consultants coming in, half-knowledge about our EAR has proven dangerous: to satisfy reporting requirements, a consultant introduced new relationships, altering the meta-model in an inconsistent way. More than six months later we are still cleaning up those meta-model-inconsistent changes. Granting too much access is probably a cultural aspect that might not occur in other organisations, but in the initial phases you might struggle to assemble the critical mass for an enterprise-wide roll-out, or you may not know the administration/access features well enough. So be aware, and specifically lock down any access rights that allow altering the meta-model.

Impact Analysis

The architecture team is often tasked with examining ‘what-if’ scenarios.
For example: what if only a single module goes live instead of the entire solution? Questions like this still cannot be answered through our EAR. Even though you could customise the meta-model to help answer them, it would require tremendous discipline to model projects to that level of detail right from the start.
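At its core, such a ‘what-if’ question is a transitive-dependency walk over the relationships held in the repository. As a minimal sketch of the idea in Python – the artefact names and the dependency map are invented for illustration, not an iServer feature or export format:

```python
from collections import deque

# Hypothetical extract of repository relationships: each artefact maps to
# the artefacts that directly depend on it. All names are illustrative.
DEPENDENTS = {
    "ERP Module: Finance": ["Process: Invoice Handling", "App: Reporting Portal"],
    "Process: Invoice Handling": ["Capability: Accounts Payable"],
    "App: Reporting Portal": [],
    "Capability: Accounts Payable": [],
}

def impacted(artefact):
    """Breadth-first walk: everything transitively dependent on `artefact`."""
    seen, queue = set(), deque([artefact])
    while queue:
        for dep in DEPENDENTS.get(queue.popleft(), []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

print(sorted(impacted("ERP Module: Finance")))
```

If only the Finance module goes live, the walk surfaces every process and application touched by that decision – which is exactly the level of modelling discipline the paragraph above warns is hard to sustain from day one.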

Keeping things consistent

We have a tightly knit architecture team – all the architects sit in the same room, which makes it relatively easy to apply the same methodology and stay in sync across the team. If you don’t have this luxury, however, it is worth defining tool-specific modelling standards before you roll out your EAR. Changing existing artefacts is always harder than getting it right from the start.

Most important product features

More than a year into the deployment of our EAR, the following features have proven most important:

  • Report generation and information export
  • Web browser access for everyone in the organisation (using an enterprise license instead of seat licenses). The saying ‘Architecture is only alive if you can see it’ is very true. You need to be able to share EA assets and views with everyone. Exporting Visio diagrams or PDFs is not going to work, as you are constantly updating your artefacts – remember, Enterprise Architecture is a living thing. Being able to send out web URLs to specific documents or document versions has proven really useful – no more outdated content.
  • Visio user interface – change management and onboarding have been fairly easy, given that almost all modellers had previous experience with Visio
  • Access Control based on user profiles such as
    • EPMO Business Analyst – read access to all projects across the enterprise project portfolio; CRUD access to business-related artefacts, mostly processes in the TO-BE and AS-IS libraries
    • Business Architect – same as BA, but across all TOGAF domains
    • Project Analyst – restricted access to only specific project related folders
    • Domain Architect – Same Access as Enterprise Architect
    • Enterprise Architect – Full access, close to System Admin access
    • System Admin – Full access to user, library, meta model and fundamental repository configuration
    • Portal User – read only access to all content
  • Library & folder restructuring – we have restructured our library folder structure several times to satisfy demand and ease of use across:
    • AS-IS and TO-BE gap analysis
    • APQC process framework roll out
    • Project based access
    • Avoiding confusion for people with specific project based usage requirements
    • Creation of project and enterprise-wide views
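The access profiles above boil down to a small permission matrix. A minimal sketch in Python – the role names come from the list, but the scope/rights encoding and the check are my own illustrative assumptions, not how the product implements access control:

```python
# Role-to-permission matrix sketched from the profiles listed above.
# The scope/rights encoding is an illustrative assumption.
PROFILES = {
    "EPMO Business Analyst": {"scope": "enterprise", "rights": "crud"},
    "Business Architect":    {"scope": "enterprise", "rights": "crud"},
    "Project Analyst":       {"scope": "project",    "rights": "crud"},
    "Domain Architect":      {"scope": "enterprise", "rights": "full"},
    "Enterprise Architect":  {"scope": "enterprise", "rights": "full"},
    "System Admin":          {"scope": "repository", "rights": "admin"},
    "Portal User":           {"scope": "enterprise", "rights": "read"},
}

def can_alter_meta_model(profile):
    """Only the repository administrator may change the meta-model –
    the lock-down recommended in the access-control section above."""
    return PROFILES[profile]["rights"] == "admin"

assert can_alter_meta_model("System Admin")
assert not can_alter_meta_model("Enterprise Architect")
```

Encoding the matrix as data rather than ad-hoc folder permissions makes it easy to audit who could, for example, introduce new relationship types – the mistake the consultant episode above illustrates.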

So…

Would we still have examined the same features during our vendor selection if we had known what we know today?

Yes and No.
Yes, because all the features we looked at are still important.
No, because we have learned which other features have proven useful and what is really important; hence, additional features would be added and the weighting would differ across all features.

7 Replies to “Living with your Enterprise Architecture Repository – A Recap”

  1. Thank you for sharing this information.

    Would you say this list somehow represents the most important requirements to evaluate – or do you have a more condensed “recommended” list of requirements, from your current point of view?

    I understand you selected iServer – is the tool delivering on the most important requirements?

    I also saw the other blog post on selecting a tool. Do you have more information why the ones you shortlisted were selected? Why not Mega for example?

    We are early in our process of selecting a tool.

    1. Hi Harald,

      That is the condensed list I am afraid. I have put the full list of evaluation criteria within my previous article called “Selecting your EA repository”.

      The tool is delivering very well on our most important requirements, but so can other tools. The enterprise license for the web portal has proven a wise investment as we can share assets with anyone in the organisation via a URL. In my view: Architecture is only alive if you can see it.

      I was totally unbiased going into the selection process. The first screening was based on budget and what the Gartner report said. Mega had limited user training options and limited flexibility in on-boarding (leading to longer on-boarding times), and users reportedly ‘get lost’ within the tool.
      Based on the fact that I had a tight timeline to meet and 60+ users to train, Mega did not make it.

      Focussing on the use cases and stakeholders provides a lot of value. Also think of all the frameworks and standards you want to use and how your EA tool needs to support them. Let me know if you need anything else. Happy to share.

      Hope that helps & Good luck!
      Andy

  2. Thank you for sharing the information. You mentioned integration with a requirements management product. Did you manage to integrate i-Server with the product (are you using a specific product), and if a company has no such product, is it possible, in your opinion, to do basic requirements management in i-Server?

    Does the reference to process modelling include all levels of the process architecture? If a company is using a BPMS product for process automation/execution, would you recommend keeping the process models in i-Server and just exporting the relevant processes to the BPMS platform?

    Regards
    Dina

    1. Hi Dina,

      Sorry for the slight delay. I have reached out to Orbus in the meantime to get approval to post a link to their ‘Requirements Management with iServer’ video without requiring you to register on the iServer support web site.
      Here it is: https://player.vimeo.com/video/91716805

      To answer your other questions:
      //Requirements Management
      No, we did not use iServer or integrate iServer for Requirements Management. The programme team decided to manage the gaps initially in Excel (which was rather painful, constantly sending updated versions around, or up-/downloading them onto our content repository). We then switched to Microsoft SharePoint, basically using the same Excel sheet. It worked somewhat OK, but lacked the connections between Goals, Drivers, Principles and Requirements – although you can extend an Excel sheet to include those as well if need be, as you know.
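      As a minimal sketch of the traceability links the flat sheet was missing – the entity types (Goals, Drivers, Principles, Requirements) are from the reply above, but the structure and names are purely illustrative:

```python
# Sketch of the traceability a flat Excel sheet lacks: each requirement
# carries links back to goals, drivers and principles. Names are invented.
REQUIREMENTS = {
    "REQ-001": {
        "text": "Web portal access for everyone in the organisation",
        "goals": ["Enterprise-wide EAR adoption"],
        "drivers": ["Shared, always-current architecture views"],
        "principles": ["Ease of Use"],
    },
}

def trace(req_id):
    """Return the goal/driver/principle chain behind a requirement."""
    r = REQUIREMENTS[req_id]
    return {k: r[k] for k in ("goals", "drivers", "principles")}

print(trace("REQ-001"))
```

      Even this small structure answers the question a flat sheet cannot: which goals and drivers are affected when a requirement changes.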

      //BPMS Suite
      Executable business processes often need a very specific level of detail or run-time configuration, which business process modelling tools can support but not necessarily enforce. With ‘enforce’, I mean business process ‘compile time’ checks, before those processes are deployed onto the BPEL or business process execution engine.
      I’d probably use a ‘sub-process’ to link to the graphical representation of that process on the execution server, otherwise you will have to maintain two models of the same process.

      If you mean our APQC framework by ‘reference to process modelling’, then the answer is no. If you model business processes according to such a framework on a project-by-project basis (like we did), it will take years to fully and adequately cover your business. Alternatively, you can seek budget to model AS-IS processes as a separate initiative.
      You are asking a very good question: the level of detail in business process modelling is a ‘hot topic’, and ‘one level fits all’ is not viable, as some areas – mostly around the core business – require more depth. Another thing to consider is that the more detail you model, the more often you have to change the process diagrams.

      Hope that helps.

      Thanks for your great questions!
      Andreas

  3. Andreas

    Thanks for the very informative update, not sure if the discussion pages are still live.

    I am in the process of introducing a new EA tool and establishing an EAR. My organisation has a combination of internal and external architects producing a wide variety of products. I was particularly interested in how you addressed governance and acceptance of new products/artefacts into the EAR. Did your organisation produce an Architects’ Handbook which describes the acceptance criteria for the EAR and, at a more basic level, the ‘ways of working’?

    Regards

    Billy C

    1. Hi Billy,
      Yes, we created an internal wiki page which outlined the modelling standards and explained the (customised) meta-model, artefacts and the repository structure (AS-IS, TO-BE/Transition Architectures etc.).
      In terms of governance of architecture artefacts, we had two Enterprise Architects who would mainly review the material, although everyone received training/mentoring.
      I think the trick is to not over-engineer the meta model and keep the number of anchor models and other artefacts small – especially at the beginning.

      Hope that helps & Thanks for your question!
      Andreas

      1. Andreas

        Thanks, very helpful. I have already started to create a wiki and an ‘architects blog’ on our internal network.

        Limiting the number of models and artefacts makes sense, and identifying a minimum viable data set is something we are considering; we have our own internal release and acceptance processes to ensure product quality. Keep It Simple, Stupid seems to be the best approach.

        Billy C
