Recently, "big motorsport" has been leaving me feeling somewhat left out, and suspecting that maybe it was mostly run for the benefit of its shareholders and the teams' sponsors.
So with the British Grand Prix approaching, and with its usual build-up of politics and off-track discussions, I thought: what better time to put some of my ideas in writing?
Off the back of this, I've found myself following the electric race series, Formula E, more closely. The organisers of this discipline of motorsport clearly make a conscious effort to engage with the fans as much as possible, starting from the week before a race, when it's possible for a driver to earn a "FanBoost" – an extra surge of power that they can use during the race.
As a result, the drivers and race teams have to actively engage with fans to attract more voters and earn that important advantage. This continues right into and during the race.
Once the race is over, the Formula E organisers make sure that the podium ceremony is in a very public and accessible place in the circuit, with a long catwalk so as many ticket holders as possible can see and be as near the ceremony as possible.
What Formula E have understood is that what, to them and the teams, is a series of very discrete phases and processes in the lead-up to and over a race weekend is, to their customers, the fans, one journey: from their initial engagement, through their investment in the outcome of the race, to the race itself and the conclusion of the event.
Formula E understand that the fans are one of their customers, and that the currency by which a race series can attract sponsorship, fundamentally operate, and then grow its revenue through media and other channels is the number of viewers and engagements it can generate for those sponsors.
In other words, the customer's journey is important, and central to the success Formula E is having.
One part of the HOPEX V2 rollout is an enhancement of MEGA's ability to map the processes that support a customer's journey and then tie those touch points together to visualise the journey and more easily tell the story. This helps present real or virtual engagements from the customer's point of view, creating a better experience for them as well as helping your organisation understand and improve its customer-facing processes across the business.
Hi there, I've been asked if I can help answer your question, but I'm going to need a little more information: What version of HOPEX are you using? What Licences / Solutions do you have available? When you say "Business Capability", do you mean the Capability MetaClass or the "Business-Capability-in-ITPM-which-is-actually-a-City-Planning-Area" MetaClass? (Yeah, sorry about that...) Also, what's the actual question you're trying to answer for your stakeholder? I hope that helps, Alan
What organisations are doing is creating a great big catalogue of “capabilities” at many different levels and layers of granularity. Lots of time and effort is put into this model’s creation and then the mapping to roles in the organisation, application lists, data and information etc. This is all good, but the exercise will show that some capabilities are “under served” and in need of improvement in some fashion. So the big pictures are washed with colours to show what’s been declared “good” and what’s been declared “bad” and in need of improvement.
There is an issue with this approach, and I feel that it is going to become more and more of an issue the longer it continues and as the transformations are implemented. That is that organisations are so busy trying to improve this introspective view of the organisation – what they have the theoretical ability to achieve – that they're forgetting about the interactions that they may, or may not, have with their environment – their customers, suppliers, regulators – and the processes performed by their realisations of these magical capabilities.
Understanding your organisation's capabilities is an important view in a holistic model of the enterprise, but it's important not to become obsessed by the introspective, and to remember that what a customer – the source of your income and the party you should be endeavouring to "make happy" – sees are the touch-points with your processes.
I’m aware that “process” is not fashionable in “thinking circles” at the moment – they’re complex assemblies and too many people have opinions on them and the pictures that get drawn – but they are vital to understanding the multi-channel, multi-modal world in which we all operate as both providers and consumers.
The end-to-end sequencing of the capabilities and their realisations is what the customer will see, and the voyage through them can only be understood if the appropriate processes are in place and understood.
On the subject of understanding capabilities and building a deeper understanding, MEGA will be at the Gartner Enterprise Architecture & Technology Innovation Summit in London in June – the event is a great opportunity to consider what growth and competitive advantage is to be had through mastering and effectively planning digital business capabilities.
Having attended the conference in previous years, I know how useful it can be to build your knowledge and contacts base while enjoying some superb hospitality in the heart of London, and best of all it will most likely contribute positive food for thought as you consider the bigger picture of an organisational transformation project.
I understand what you're saying, but in the cases you describe they're not the same Operation. It may be a very similar Operation, but it's not the same one, as it uses different tools, and may take a different time and have a different cost. If any of these are different, or if any of the inputs and/or outputs are different, then it's not the same Operation. Alan
No, I haven't but I think that's because if an Operation has/can have different needs then I'm not sure it's an Operation. I'm assuming an Operation is an indivisible unit of work with "One Person, One Place, One Time" as my steer. One could have multiple Systems Used associated to an operation, one System Used per possible set of needs, I guess. Alan
"It's not how it was before." "Err, it should be. The data's just the same and your books are available under 'Cloud'." "I don't want to use this 'Cloud' thing. I want to see them under the 'Read' and 'Unread' Collections. That way I know what's read and unread." "Right. But they're still there. 'Read' and 'Unread' still exist."
Some poking takes place and realisation dawns. That, and remembering why I don’t do tech support for friends and family. It’s all very well being a consultant and giving advice to clients, but to your parents your opinions are still as naught…
“Mum, this is the same data as before, but do you know that not everything is filed into ‘Read’ and ‘Unread’? Some books are in both and many aren’t in either." I waited for the expected response, "Are you sure?" Pause. "Have you checked?" "Yes. I’m certain." "So what will you do now?"
This was when it dawned on me. This was when I realised how The Kindle Migration Affair was so familiar...
I've spent a lot of time recently advising new and prospective MEGA customers, and I'm regularly asked about migrating data from another format into HOPEX – clients need or want to migrate from other tools or documents into a rigorous, repository-based tool for all the right reasons. On every occasion, my colleagues and I give the same stern warning: yes, we can import data from legacy systems, but do they actually want to do that? If they do, does the benefit justify the cost and effort?
In theory it's very easy to import data from other systems into HOPEX. You "simply" understand the source concepts, map those to MEGA concepts, test the import scripts and then perform the import. Simple. Only it's never anywhere near that simple.
In fact, the most important part of any migration into HOPEX isn't the format the data comes in (Visio, Excel, other EA or Risk tools), but how consistently it's constructed in that source format. Consistency means that concepts can be reliably mapped and links can be reliably made or calculated. Once those mappings are understood, decisions can be made about the route to a fast, successful migration.

The trouble is that people are rarely migrating their data with all the mappings known and understood. Normally a new tool is being sought because the client has lost control of the models in their legacy source. The models are out of date, people know that the mappings aren't correct and that a quarter of the data was based on a set of assumptions not recognised as incorrect. At this point I normally give two choices – abandon the data and re-model from scratch, or re-scribe the models, re-drawing them into HOPEX using some rule-of-thumb mappings and MEGA best practices. Surprisingly, this second option is often as fast and as economical as specifying, building and testing a "catch all situations" import macro. It also has the value of every single model being looked at and appraised by a human being.
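To make the consistency point concrete, here's a minimal sketch of the kind of pre-import check I'm describing. Everything in it is hypothetical – the concept names and the mapping table are illustrative, not real MEGA identifiers – but the principle stands: before importing anything, find the source concepts that have no agreed MEGA mapping.

```python
# Hypothetical pre-import consistency check: flag source concepts
# with no agreed mapping to a MEGA concept. All names here are
# illustrative, not real MEGA MetaClass identifiers.

# Assumed, agreed mapping from legacy-tool concepts to MEGA concepts
CONCEPT_MAP = {
    "Task": "Operation",
    "Swimlane": "Participant",
    "System": "Application",
}

def unmapped_concepts(source_rows):
    """Return the set of source concepts with no agreed MEGA mapping."""
    return {row["concept"] for row in source_rows} - set(CONCEPT_MAP)

rows = [
    {"name": "Approve invoice", "concept": "Task"},
    {"name": "Finance", "concept": "Swimlane"},
    {"name": "SAP", "concept": "System"},
    {"name": "Decision point", "concept": "Gateway"},  # no mapping agreed yet
]

print(unmapped_concepts(rows))  # {'Gateway'} -> needs a decision before import
```

If that set isn't empty (or the mappings turn out to be inconsistently applied across the source models), that's exactly when the "re-scribe by hand" option starts to look as cheap as a catch-all import macro.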
So in summary, HOPEX has the ability to import from almost any format with even the slightest amount of structure to the data, but the most important question when considering importing legacy data into the tool is really "Is this good enough quality to be worth importing?" – and this is an area where MEGA can provide advice to make sure all objectives are met.
To round off my story, as I write this my mother's still sitting upstairs going through 500+ Kindle books 'cleansing the data' and making sure that every one of those titles is in the correct Collection. At some point, and with this affair behind us, we all hope to live happily ever after...
Ah yes... "Variants of Process" is always a fun challenge. You've started out fine and created the Variant before adding the new Operation to the diagram. You've then selected the Operation to be removed, right-clicked, chosen "Replace", selected the Operation you've just added and then found that all that happens is a small red cross on the diagram. What you need to remember now is that the Sequence Flows in and out of the Operation can only have one source and one target. They already have those, because they're correctly defined, so you need to add new Sequence Flows to knit in the new Operation. If you change the existing ones you will also change the definition of "L'Original", which will be annoying. I forget whether you can Replace Sequence Flows in the same way as Operations (my apologies, I'm working from memory and don't have MEGA on this computer), but if you can then you probably should. Once you've added/replaced all that you need to, you should remove the replaced objects by cutting them (Ctrl+X) from the diagram. There's an extra complication when it comes to replacing Org-Units in Participants in Variants of Processes, as the Participant needs replacing as well as the Org-Unit. Because of all this complexity and how hard it is to actually do it correctly*, I tend to recommend against my clients using Variants of Process except in quite specific circumstances. The complexity of "a Process" is what makes this hard, not the mechanics applied in the tool. That said, I hope this helps, Alan *I've been using MEGA hard for about 7 years now and am a Senior Consultant. I don't always trust myself to create Variants of Process correctly...
Simon, I hope you're well. For you, "Capability" is not the correct object to use. Instead, you're looking at a skill or ability within the organisation, so you should use Business Function. If you then wish to describe how that skill or ability matures or has an increased "delivery level", then you use Capability. The MEGA definition of "Capability" is very specific and based strongly on the military definition (as defined by DoDAF/MODAF/NAF), and its defining feature is its ability to have a lifecycle. You can therefore model statements such as "We will provide Ability A to Service Level 1 until June 2016", "We will provide Ability A to transitionary Service Level 1.1 from June 2016 until September 2016" and "We will provide Ability A to Service Level 2 from September 2016 onwards", where "Ability A" is your Business Function. Naturally, Sue can help you with this in more detail. Hope that helps, Alan
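As a footnote, the lifecycle statements above can be expressed as dated transitions. This is just an illustrative sketch of the idea – the dates and labels come from the example statements; nothing here is MEGA functionality.

```python
from datetime import date

# Illustrative only: the service-level statements above, expressed as
# dated transitions for Business Function "Ability A".
LIFECYCLE = [
    (date.min,         "Service Level 1"),
    (date(2016, 6, 1), "Service Level 1.1 (transitionary)"),
    (date(2016, 9, 1), "Service Level 2"),
]

def service_level(on):
    """Return the service level in force on a given date."""
    level = None
    for start, label in LIFECYCLE:  # entries are in chronological order
        if on >= start:
            level = label
    return level

print(service_level(date(2016, 7, 15)))  # Service Level 1.1 (transitionary)
print(service_level(date(2017, 1, 1)))   # Service Level 2
```

The point of the Capability object is exactly this: the same underlying Business Function ("Ability A") delivered at different levels over time.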
Hi, I've just looked at your sample diagram. This is a direct result of MEGA not being able to lie: if it has an association to an object that it can show on a diagram, then it will show it. What you need to do is drag that Application on the appropriate number of times and then hide (using the Hide tool on the Edit toolbar – the one that looks like a pair of opera glasses) the links you don't want to see. Tell me, are these Applications "Hosted" or "Criteria"? If they're Criteria then there is an excellent Analysis Report called "City plan Hierarchy" that will show you the hierarchy of City Planning Zones and then the associated Criteria, meaning you're less likely to need a diagram. I hope that helps, Alan
I saw this link to a blog in my newsfeed this morning and it made me laugh through recognition. Thought it was relevant to here too! http://weblog.tetradian.com/2012/10/10/it-depends/ Alan
Francis, Thanks for that, and I'll be sure to pass on your regards to Jane! That's quite a complex idea they're looking at there and certainly an admirable aim. I guess it would be possible to associate the Methodology object to the "Variable Object" abstract metaclass, but you'd really need a Product Engineer's opinion on how well that would work and what the knock-on effects would be. Just as a thought, what would happen to Projects underway if a change were made to a Methodology centrally? I'm really sorry, I'm not sure I can be a huge amount of help on this one. :smileyfrustrated: Kindest regards, Alan
Apologies for these sporadic gaps - the joys of travel and knowing that it takes more than 5 minutes on my phone to answer these... So, you guys are looking at what I wrote and saying "That's awesome, that should be written into every piece of MEGA documentation"* but I say "No, it shouldn't". Nor should a white paper be written based on it. Here's why: the decision about which notation to use is normally taken long before a client ever installs the tool. It's often taken by people who don't actually research or ask about this, but only see "BPMN" and "YES" on the RFQ. Any debate about which to use is largely academic. What I've written above is a simple statement of how the notations have been defined. The BPMN notation characteristics are defined in BPMN, so the assumption is that people requesting BPMN for process modelling understand the implications. Trying to persuade people that they've bought the wrong thing when they've only just met you isn't good. Simon, this is where the value of the MEGA Consultant comes in. The combination of tool and business experience used to ascertain which is right simply can't be replicated. This is all the sort of thing that should be written into the modelling guidelines that so many organisations either don't bother with or believe they can conjure themselves. White papers would perpetuate this. Every single one of those documents will be different – not just in layout and client name, but in outlining the steps and best practices that help solve their specific problem in their specific environment. Not to mention the occasional tool-based "curiosity". White papers get taken as gospel, and clients try to fit their world to the specific instance in the paper. Libraries are a fine example of this – you cannot write a "best practices" paper on them, as everybody uses them differently to view the world from their own perspective. The best you can say is "You could do x, y or z. 
I've seen a and b used before, but your problem is different, so maybe a combination of x and z would work. Let's try it..." This is where the much-maligned "MEGA Consultant" comes in, and why the Modelling Suite isn't a "take box off shelf, install and use with no training" product. It requires some education in the same way as any complex modelling tool does. I come from a CAD background and I know that you can make a 3D CAD package do some nice stuff and make some nice pictures, but with a bit of training and advice you can make the CAD models really work for you – working out volumes, mould tools and packaging – and save you time and effort through the nuances that aren't apparent initially. Which brings me, finally, to Simon's question. You model as little as you can to answer your questions. I'm afraid I'm not going to definitively say which you should do. Take your second question about which "Content transfer" notation to use on Application Environment diagrams. It depends. It depends on the Process notation/methodology you're using, on whether you want consistency between the two, on how much detail you want/need to go into describing your Applications... I'm not trying to be difficult, but without the context required to give you good advice I'm not even going to take a guess! I hope all of that helps. On re-reading it, it probably comes across as a bit grumpy and curmudgeonly, but I'm aware of the pitfalls that could come from me randomly splattering well-intentioned advice across the forum without an understanding of the context. Please just remember that MEGA Modelling Suite is a big, complex tool, but it's like that because, with a little bit of effort and advice, it will help you solve big, complex problems. Anyhow, I'm going to sign out of this thread. Cheers and have a good weekend, Alan *I may have exaggerated that a bit...
Apologies for the delay, I've been slightly busy for the last couple of evenings. Anyhow, I'm now sitting in a pub with a pint in my hand, so I'm in about the right frame of mind to tackle this. Apologies if the typing gets worse as this continues; it means I've needed a second one to see me through... :wink: To start with Stijn's questions about best practices for Classic and BPMN messages...

THE CONTENT OBJECT

First up, a quick reminder from the MEGA Process and Architecture training courses, which I'm sure you all remember word-for-word, of what we're dealing with... The "Content" object in MEGA is used to show pieces of information, material or financial remuneration (i.e. money) exchanged both within an organisation and between an organisation and its environment. The definition of Content is identical for both Classic and BPMN, although there are some small differences in its naming between the two.

CLASSIC

MEGA Process Classic notation never lets you use Content in its "raw" form; it is always packaged within a "Message". A Message consists of a single piece of Content associated to a Message object, a link from the source of the Message and another link to the target of the Message. Messages are "an instance of the exchange of Content" and, as such, must never be re-used. They are specific to the diagram on which they are created. Note that "re-use" includes the copying and pasting of Messages between diagrams. MEGA prompts for re-use of Content in Classic Messages. Normally this should be accepted. On occasion a Message will be drawn between the same objects on multiple diagrams (normally on Business Process or Application Environment diagrams). When this happens, MEGA will prompt for re-use of the entire Message. Where the context of the proposed re-use is ABSOLUTELY IDENTICAL then the Message can be re-used (the tool assumes this, hence the behaviour), BUT I advise erring on the side of caution and creating a new Message. 
Should this be found to be duplication, the two different objects can easily be merged within seconds. Where a Message has been re-used in error, it can take a lot of time to "unstitch" and correct the error if it has been compounded. In MEGA it is easier to merge than to separate, so if in doubt create a new object. By default, the Message object will have the same name as the Content contained within it. Care should be taken to ensure that this remains the case, as confusion can occur if they differ. Messages have ONE source and ONE target, never more and never less. In Process Classic, the Content should be named so as to reflect what is being exchanged and its current state. For example, if a contract is to be signed as part of a process, the Content entering the appropriate step is "Unsigned Contract" and the Content leaving the process step is "Signed Contract". Both of these would have to be independently associated to a Data Model or Entity (DM) describing the information structure of the contract. In MEGA Classic, Messages are the primary source of modelling issues. Care should be taken to ensure that Messages have the requisite number of senders and receivers (i.e. one of each!) and that the names of Message and Content remain in harmony. MEGA provides a set of automated modelling regulations that can help ensure this remains the case.

BPMN

MEGA Process BPMN allows you to show Content exchanged both in its "raw" form and packaged as part of a Data Object. Content can be associated with a Message Flow. Message Flows are directional and normally drawn in the direction of the major flow of Content. Message Flows can be associated to more than one piece of Content. The individual Contents may be shown to pass "Downstream" (i.e. in the direction of the Message Flow) or "Upstream" (i.e. in the opposite direction to the Message Flow). 
Message Flows are used in Process BPMN to show the flow of Content from a Participant outside the process object being described to a "Message" type event contained within the Process. Content can be associated with Data Objects. Data Objects can be created and referenced within a process description or associated to a Sequence Flow. A Data Object is the use of a single Content in a specific context. A Data Object's name should be the same as the name of its Content. Data Objects allow the definition of the Status of the Content. This is normally displayed in tandem with the Data Object name. In the context of the example above, a contract being shown before a process step could be modelled with a Content called "Contract" having status "Unsigned" before the process step, and the same piece of Content "Contract" with status "Signed" after the process step. These would show as "Contract [Unsigned]" and "Contract [Signed]". Note that any description of the data structure of "Contract" in this case would only need to be associated to one Content object. A Sequence Flow may show the passing of more than one Data Object. Data Objects shown on a Sequence Flow should pass in the direction of the Sequence Flow. Best practice for Simulation requires a process modelled with a single trigger Event and with probabilities defined for the various resulting Sequence Flows from Gateways wherever the process could follow more than one potential route. All possible outcomes (Ends, handoffs to other process objects) should be mapped with correctly defined Events. Until you've mastered the above, meaningful Simulation is best left to wait. I think this covers both Stijn's and Simon's questions (Simon, at this rate you're going to owe me a drink when we meet...), but hopefully you can see that this is quite a comprehensive subject and I've only scratched the surface. 
The thing is that the thinking and implementation behind this is very complex, but in use it's mostly automated and works with very few issues as long as the basic principles are understood. I can easily fill a whole day talking about this stuff and showing how it helps inform a comprehensive description of an Enterprise Architecture. Hope that helps, Alan
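As a postscript, the Classic Message rules discussed earlier (one source, one target, Message name in harmony with its Content name) are exactly the kind of thing MEGA's automated modelling regulations check for. Here's a hypothetical sketch of such a check – the data shape is invented for illustration and is not MEGA's actual model:

```python
# Hypothetical sketch of a Classic Message regulation check.
# A "message" here is a plain dict; this is NOT MEGA's real data model.

def message_issues(msg):
    """Collect rule violations for a Classic-style Message."""
    issues = []
    if len(msg["sources"]) != 1:
        issues.append("must have exactly one source")
    if len(msg["targets"]) != 1:
        issues.append("must have exactly one target")
    if msg["name"] != msg["content"]:
        issues.append("Message name should match its Content name")
    return issues

ok = {"name": "Signed Contract", "content": "Signed Contract",
      "sources": ["Sign step"], "targets": ["File step"]}
bad = {"name": "Contract", "content": "Signed Contract",
       "sources": ["Sign step"], "targets": []}

print(message_issues(ok))   # []
print(message_issues(bad))  # ['must have exactly one target', 'Message name should match its Content name']
```

If every Message in a model passes checks like these, most of the "primary source of modelling issues" mentioned above goes away.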
Sorry to ask a possibly-silly question, but why are you trying to link Organisational Process to Business Function in this way? The reason the tool doesn't have this association as standard is that Organisational Processes are performed by specific people (Org-Units) as part of their implementation, and not by the abstract skill/ability that is signalled by the use of a Business Function. It almost makes sense to have Functional Processes and Activities performed by Org-Units, but not the other way around. Of course, I'm saying this without sight of the context around why you're asking, but it does seem a rather worrying question to me... Of course, it could be that you've made a typo and meant to type "Functional Process"... I hope this helps, Alan
Well, if you're using the IT Service for more than one purpose, don't forget that rather than having to run around creating new MetaClasses to show different IT Service-related concepts, you can set an IT Service's type by selecting from a drop-down – see the attached screen capture. This is good for reporting because it's very easy to write a query (or even write and save a query – don't forget how useful the Favourites navigation window is...) to list all Interface Services, along the lines of: Select [IT Service] Where [IT Service Stereotype] = "INTERF". It's also much easier to create a new type of IT Service than to create a new – but very similar – MetaClass. Of course, I'm writing this without any knowledge of your particular circumstances, so forgive me if it doesn't exactly answer your question, but hopefully it's useful. Cheers, Alan
Folks, These are interesting questions and highlight that, whilst Libraries are hugely powerful and useful, they do require some thought as to how they're going to be used in practice at the time of implementation – and, really, giving concrete advice requires more than a forum post... Firstly, a bit of a generic note about Libraries: I've dealt with Libraries in a couple of UK clients as well as using them in my own content, and the most important thing to realise is that they are simply a method of segmenting and grouping content within the tool. Because of this, the actual groupings used differ from client to client depending on how they see the world. Sometimes the company takes a "Zachman-style" view of the world, with segmentation between Contextual, Conceptual, Logical and Physical (as well as such classifications as "Transformation"), and sometimes the line is drawn between disciplines, so "Infrastructure", "Application Portfolio", "Process", etc. Neither of these is right or wrong. What should be said, though, is that in neither case was the stable structure created on the first go – there were a couple of iterations where the initial structure was too simple, then it became too complex to be easily used, and, well, now they work. So in summary, there is no right answer to Libraries and there will always be some trial and error, because the first thing you need to do is work out what questions you're trying to answer in your organisation; once you know the questions, you can re-group to structure your world in a way that answers them. Phew... So, lesson over (:smileywink:) we can move on to the joys of migrating objects between Libraries (jtinoco's original post). For this, let's imagine we have 3 Libraries, which we'll call "As-Was", "As-Is" and "To-Be". "As-Is" and "To-Be" are pretty self-explanatory, but "As-Was" is where retired objects and descriptions (diagrams) go. 
We also have a Library called "Transformation", which is where we keep our Project portfolio, modelled using the "Project" object. This is the key to the whole solution. When deciding the scope / calculating the impact of your new project, you connect the objects and descriptions to be replaced – which, of course, are in the "As-Is" Library – to the Project object and define them as Changed/Replaced/etc. As you create the new content that's going to replace the current project scope, you connect it to the Project object, saying it'll be Introduced (you're free to choose your own words for these terms, btw). This all means that on Roll-Out Day you'll have two sets of objects in your Repository. The first is a bunch of objects in your "As-Is" Library connected to your Project. The second is a bunch of objects in your "To-Be" Library, also connected to your Project. The first set of objects can (after the appropriate checks to ensure the content is correct) be moved to your "As-Was" Library and retired (but is still somewhere it can be re-introduced easily if needed – always have a back-out plan!), whilst the objects connected in the "To-Be" Library can be moved to the "As-Is" Library. Because you have that core connection making the scope of what you need to move easily queryable, this is much easier than before. You may find that, rather than connecting every single object impacted by a project to the Project object, you may be better linking the high-level Applications, Systems, Organisational Processes and Org-Units and using the concept of Boundaries, which is something that falls into the category of "Product Engineer Black Magic" and is outside my knowledge-scope. Simon adds an extra dimension of fun to all of this by asking about sequenced projects changing the same object. 
The approach I've outlined above helps with this, as one should question whether, if an object is marked as being impacted by more than one project, those project teams should perhaps be talking to each other (I know, I'm an idealist...), and you will know that the project with the later release will be changing not the object as it currently is, but how it will be after the last release. In this case I would say that there's probably a case for a "Big, Distant Delivery"-type Library. This should CERTAINLY NOT be in a different repository, because then you'll lose all the value of impact analysis. That's been a monster, but I hope it helps, Alan edits to sort typos and grammatical errors
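As a final sketch: because every impacted object is connected to the Project, the Roll-Out Day moves described above become a simple query. Everything here is hypothetical – a plain-data stand-in for a repository extract, not any real MEGA API:

```python
# Hypothetical repository extract: objects connected to a Project, each
# with its current Library and its role in the change. Illustrative only.
IMPACT = [
    {"object": "Billing App v1",  "library": "As-Is", "role": "replaced"},
    {"object": "Billing App v2",  "library": "To-Be", "role": "introduced"},
    {"object": "Invoice Process", "library": "As-Is", "role": "replaced"},
]

def moves_for_rollout(impact):
    """Plan the Roll-Out Day Library moves described above."""
    plan = []
    for item in impact:
        if item["role"] == "replaced" and item["library"] == "As-Is":
            plan.append((item["object"], "As-Is", "As-Was"))   # retire
        elif item["role"] == "introduced" and item["library"] == "To-Be":
            plan.append((item["object"], "To-Be", "As-Is"))    # go live
    return plan

for obj, src, dst in moves_for_rollout(IMPACT):
    print(f"{obj}: {src} -> {dst}")
```

The value is exactly that queryability: the Project connection defines the scope, so the move list falls out of it rather than being assembled by hand.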