The Kindle Migration Affair

Data migration

"It’s not how it was before."
"Err, it should be. The data’s just the same and your books are available under ‘Cloud’."
"I don’t want to use this ‘Cloud’ thing. I want to see them under the "Read" and "Unread" Collections. That way I know what’s read and unread." 
“Right. But they’re still there. ‘Read’ and ‘Unread" still exist."

Some poking takes place and realisation dawns. That, and remembering why I don’t do tech support for friends and family. It’s all very well being a consultant and giving advice to clients, but to your parents your opinions are still as naught…

"Mum, this is the same data as before, but do you know that not everything is filed into ‘Read’ and ‘Unread’? Some books are in both and many aren’t in either."
I waited for the expected response, "Are you sure?" Pause. "Have you checked?"
"Yes. I’m certain." 
"So what will you do now?"

This was when it dawned on me. This was when I realised why The Kindle Migration Affair felt so familiar...

I’ve spent a lot of time recently advising new and prospective MEGA customers, and I’m regularly asked about migrating data from another format into HOPEX – clients need or want to migrate from other tools or documents into a rigorous, repository-based tool for all the right reasons. On every occasion, the stern warning from my colleagues and me is that, yes, we can import data from legacy systems, but do they actually want to do that? And if they do, does the benefit justify the cost and effort?

In theory it’s very easy to import data from other systems into HOPEX. You “simply” understand the source concepts, map those to MEGA concepts, test the import scripts and then perform the import. Simple. Only it’s never that simple.
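To make that “map the source concepts to MEGA concepts” step a little more concrete, here is a minimal sketch of the kind of translation an import script performs. The shape types, concept names and CSV layout are hypothetical, assumed purely for illustration; a real HOPEX import would be specified against the actual metamodel and the client’s own export.

```python
import csv

# Hypothetical mapping from source (e.g. Visio) shape types to target concept names.
# In a real migration this table is agreed and tested before any import is run.
CONCEPT_MAP = {
    "Process": "Business Process",
    "System": "Application",
    "Data Store": "Data Object",
}

def map_export(path):
    """Translate a flattened source export (CSV rows) into target concepts."""
    mapped, unmapped = [], []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            concept = CONCEPT_MAP.get(row["shape_type"])
            if concept:
                mapped.append({"name": row["name"], "concept": concept})
            else:
                # Anything that cannot be mapped is a question for a person,
                # not something to import silently.
                unmapped.append(row)
    return mapped, unmapped
```

The interesting output is usually the `unmapped` list: its size is an early hint of how consistent the source really is, which is exactly where the trouble starts.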

In fact, the most important part of any migration into HOPEX isn’t the format the data comes in (Visio, Excel, other EA or Risk tools), but how consistently it’s constructed in that source format. Consistency means that concepts can be reliably mapped and links can be reliably made or calculated. Once those mappings are understood, decisions can be made about the route to a fast, successful migration.

The trouble is that people rarely arrive at a new tool with all of those mappings known and understood. Normally a new tool is being sought because the client has lost control of the models in their legacy source. The models are out of date, people know that the mappings aren’t correct, and a quarter of the data was built on assumptions that nobody has yet recognised as incorrect. At this point I normally offer two choices – abandon the data and re-model from scratch, or re-scribe the models, re-drawing them in HOPEX using some rule-of-thumb mappings and MEGA best practices. Surprisingly, this second option is often as fast and as economical as specifying, building and testing a “catch all situations” import macro. It also has the value of every single model being looked at and appraised by a human being.
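My mother’s Kindle Collections are a tidy example of that consistency problem: ‘Read’ and ‘Unread’ were meant to be mutually exclusive and cover everything, yet some books were in both and many were in neither. A quick audit in that spirit – sketched below with a hypothetical data structure, not any tool’s API – is often the first thing worth running against a legacy export before deciding whether it is worth importing at all.

```python
from collections import Counter

def audit_exclusive_tags(items, tag_a, tag_b):
    """Check two tags that are supposed to be mutually exclusive and exhaustive.

    `items` is assumed to be a mapping of item name -> set of tags.
    Returns a tally of items in both tags, in neither, or in exactly one.
    """
    tally = Counter()
    for tags in items.values():
        in_a, in_b = tag_a in tags, tag_b in tags
        if in_a and in_b:
            tally["both"] += 1          # contradictory: needs a human decision
        elif not in_a and not in_b:
            tally["neither"] += 1       # the gaps nobody knew were there
        else:
            tally["exactly one"] += 1   # the only state a clean migration can rely on
    return tally

# A Kindle-flavoured example:
books = {
    "Pride and Prejudice": {"Read"},
    "War and Peace": {"Read", "Unread"},
    "Middlemarch": set(),
}
print(audit_exclusive_tags(books, "Read", "Unread"))
# Counter({'exactly one': 1, 'both': 1, 'neither': 1})
```

If “both” and “neither” dominate that tally, no amount of clever import scripting will fix it – which is why the honest answer is sometimes to re-model rather than migrate.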

So in summary: HOPEX can import from almost any format with even the slightest amount of structure to the data, but the most important question when considering importing legacy data into the tool is really “Is this data of good enough quality to be worth importing?” – and this is an area where MEGA can provide advice to make sure all objectives are met.

To round off my story, as I write this my mother’s still sitting upstairs going through 500+ Kindle books ‘cleansing the data’ and making sure that every one of those titles is in the correct Collection. At some point, and with this affair behind us, we all hope to live happily ever after...

 
