Dr. Strangelove or: How I Learned to Stop Worrying and Love ... Scorecarding

I'm an old movie buff, and I was watching - again - Stanley Kubrick's Dr. Strangelove when it occurred to me that even strategic nuclear warfare has rules, unwritten as they may be. It was Dr. Strangelove's line: "Of course, the whole point of a Doomsday Machine is lost if you keep it a secret! Why didn't you tell the world, eh?" In the movie, the Doomsday Machine is capable of destroying all life on earth and is set to go off automatically if the Soviet Union is attacked. So in this instance, the rule is: communicate your capabilities effectively, so that every player recognizes the prospect of 'mutually assured destruction' and is dissuaded by it.

The game itself is simple: if you do this, I'll do that - threat, counter-threat; each side knows where the other stands. Keep in mind, the endgame (the point) of mutually assured destruction is to maintain parity, largely because the outcome of nuclear warfare would be unacceptable to any player. Thus, your foe never gains a strategic edge. It could be argued the goal is stalemate. And if the goal is stalemate, or stability, then the players are bound (read: there's a rule) to communicate their capabilities somehow.

However, in Dr. Strangelove that tit-for-tat was upended by one player in particular (we won't get into General Ripper's culpability): the Soviet Premier Dimitri Kisov. Premier Kisov's gaffe was not revealing the existence of the Doomsday Machine - in fact, he was waiting to announce it "at the Party Congress on Monday". He didn't follow the rule. Stated differently: if you want to avoid being beaten, and you possess the means to thwart it, then you need to communicate that. Otherwise, you are laying a trap you may not want to spring.

And so we arrive at my point: modeling rules in Enterprise Architecture (EA) activities - depicting how an organization or a system works - and ensuring that the purpose of those rules is understood and that the rules are followed. A lot of time and thought goes into EA planning and into increasing the value of EA, but what about ensuring that the data is accurate, timely, and reliable? Or, more to the point, that the data is entered the correct way, with the proper meaning? All that 'truth', if you will, could mean little if the data isn't correctly aligned and, eventually, communicated in the language of the stakeholders.

Here is where scorecarding comes in. By implementing a scorecard system in which actual EA data is audited and reported, an organization can score that data against expected business outcomes and assess the contribution of applying modeling rules. Through this, the organization empowers its architects to feed back to key personnel on the quality of the organization's own data - where that data is incomplete or misaligned - so that, presumably, the data gets fixed and other corrective measures are taken. Because not being able to tie models to demonstrable business value equates to demonstrating that THEY DELIVER NO business value.
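
To make that concrete, here is a minimal sketch of such a scorecard in Python. It assumes the EA repository can be exported as a list of dictionaries; the rule names, fields, and sample elements are hypothetical illustrations, not any real EA tool's API.

  # Hypothetical modeling rules: each one checks a single element of the
  # exported repository and returns True when the rule is satisfied.
  RULES = {
      "has_owner": lambda e: bool(e.get("owner")),
      "has_description": lambda e: bool(e.get("description")),
      "linked_to_capability": lambda e: bool(e.get("capability")),
  }

  def score_element(element: dict) -> float:
      """Fraction of the modeling rules this element satisfies."""
      return sum(check(element) for check in RULES.values()) / len(RULES)

  def report(repository: list[dict]) -> None:
      """Audit every element and flag where the data is incomplete."""
      for element in repository:
          failed = [name for name, check in RULES.items() if not check(element)]
          status = "failing: " + ", ".join(failed) if failed else "ok"
          print(f"{element['name']}: {score_element(element):.0%} ({status})")

  # Fabricated repository extract, purely for illustration.
  repo = [
      {"name": "CRM System", "owner": "Sales IT",
       "description": "Customer records", "capability": "Customer Management"},
      {"name": "Legacy Billing", "owner": None,
       "description": "", "capability": "Billing"},
  ]
  report(repo)

Run against the sample extract, this prints a completeness score per element and names the rules each one fails - exactly the kind of feedback an architect could hand back to the data's owners.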

This leads to an opportunity for introspection (the sketch after the list shows one way to gather evidence):

  • Are the rules communicated effectively?
  • Are the rules being broken because they are misunderstood?
  • Are the rules themselves problematic? Should they evolve?
  • Is the means of data input an issue?

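One way to gather evidence for those questions, building on the sketch above (the same hypothetical RULES and repo), is to aggregate failures per rule across the whole repository: a rule that almost everyone breaks suggests the rule or the input tooling is the problem, while scattered failures point to individual misunderstandings.

  from collections import Counter

  def rule_failure_frequency(repository: list[dict]) -> Counter:
      """Count how often each rule fails across the whole repository."""
      failures = Counter()
      for element in repository:
          for name, check in RULES.items():
              if not check(element):
                  failures[name] += 1
      return failures

  # Reusing the hypothetical RULES and repo from the sketch above.
  print(rule_failure_frequency(repo).most_common())
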
The idea is to keep users aligned with the modeling rules as much as possible, and to build a rule set that favors productivity and business value. So, when it's time to run simulations or act on that data, you’re less likely to step into your own trap.
