
Blob Storage in the RDB - SQL Server

gmarkow
Super Contributor

What compression function / algorithm is used to compress a 'text' attribute stored in the BLOB_BIN_VALUE column of the A_BLOB RDB table when BLOB_COMPRESSED = 1?

 

Thank you.

 

We are building reports directly against the Mega RDB to address our aggregate data analysis requirements. We need to be able to use industry-standard reporting and analysis tools against our Mega ITPM data.

 

We fully understand that the model is subject to change and this is unsupported.

 

 

 

6 Replies

RGenin
Trusted Contributor

Hi, 

Thanks for the answer!
Unfortunately, this requires another license...

Best

Raphael

Hi RGenin,

 

I can't necessarily elaborate on the SQL/Oracle implementation our developers incorporated into the application, but I know we now offer a "DataMart" feature that better exposes the data in an environment for reference from third-party applications. Have you looked into that? If not, I would recommend looking it up in our online documentation and reaching out to your MEGA account manager.

[Attachment: DMart.PNG]

I hope that helped!

 

JCAMARA

RGenin
Trusted Contributor

Hello, 

 

I was wondering whether this topic has been discussed any further?

 

Best regards, 

Raphaël.

gmarkow
Super Contributor

Thank you for your reply.

Two questions: can you share with us which Lempel-Ziv third-party library you are using?

If not, the name of the LZ variant?

 

Also, why compress at all? SQL Server and Oracle both compress BLOBs and large text fields.

 

Thank you!

gmarkow
Super Contributor

Hi,

 

Thank you very much for your response! 

I am sorry - I could not deduce your first name from your forum username of jftrianda.

 

I fully understand that Mega does not support directly accessing the RDB data.  We have no issue with that. 

 

Our reporting requirements are such that we would need to author tens of thousands of lines of Java code to obtain the reports we need.  You have some good reporting capabilities in the tool, but most are built for 'immediate', atomic analysis - all very good and very useful!

 

We are considering the Talend approach as well.  Does the Talend ETL ship with the tool?  I noticed that the generated Java code does.  Eventually, we will probably go with an ETL solution accessing the Mega objects via the Mega API.  We are well versed with most of the ETL tools.

 

However, one of our biggest frustrations here at AIG has been the inability to access the wealth of ITPM data using industry-standard reporting and analysis tools!  We leverage a good number of those tools.  We are also gravely concerned about the performance impact of reporting on the Mega IIS server.

 

We were very successful in creating a 'dynamic' mapping layer of SQL Server views over the Mega RDB.   With some suggestions from another Mega customer who did something similar, we authored SQL Server T-SQL to dynamically create views on all of the Mega object tables and the joiner (association) tables, driven by the Mega MetaClasses-related tables.
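For illustration, the core of that view-generation idea can be sketched like this (Python rather than the T-SQL we actually used; the metaclass lookup is omitted, and the table/column names are placeholders, not MEGA's real metadata):

```python
# Sketch of the "dynamic view layer" idea: for each object or association
# table discovered from the repository metadata, emit a view that exposes
# only "current" rows. Names here are illustrative placeholders.
CURRENT_ROW_FILTER = "end_validity = 2147483647 and IDTRAN = 0"  # 2147483647 = max 32-bit int, i.e. validity has no end

def make_view_ddl(table_name, columns):
    """Emit a CREATE VIEW statement over one repository table,
    filtered to current, committed rows."""
    select_list = "\n      ,".join("[%s]" % c for c in columns)
    return (
        "create view [v_{0}] as\n"
        "select {1}\n"
        "from [{0}]\n"
        "where {2}".format(table_name, select_list, CURRENT_ROW_FILTER)
    )
```

In the real implementation the column list comes from the metaclass tables; the sketch only shows the shape of the generated DDL.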

 

You did some interesting and some puzzling things with your implementation in the RDB – again very elegant, but the evolution from your proprietary RDB to SQL or Oracle is evident.

 

I included a copy of the SQL which we use to extract all Persons and their Roles associated with an application.  You can deduce what we did in terms of naming convention.  We combined both our ‘production’ and ‘system’ Mega repository views in our ‘views’ RDB.

 

With respect to compression…

 

I did play around with LZ, Huffman, and a couple of LZ variants to determine which compression algorithm was being used.  Tedious.  A question I will pose to you: why don't you simply leverage the RDBMS compression of large character fields?  Both SQL Server and Oracle perform excellent compression on BLOBs.
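For anyone repeating the exercise, a quicker route than trying algorithms by hand is to sniff the blob's leading "magic" bytes and then attempt the matching decompressor. The header values below are standard for each container; whether MEGA's BLOB_BIN_VALUE matches any of them is exactly the open question:

```python
import io
import zipfile
import zlib
import gzip

def sniff_and_decompress(blob):
    """Guess a blob's compression container from its magic bytes and try
    the matching decompressor. Returns (format_name, data_or_None)."""
    if blob[:2] == b"\x1f\x8b":                 # gzip member header (RFC 1952)
        return "gzip", gzip.decompress(blob)
    if blob[:1] == b"\x78":                     # typical zlib header byte (0x78 0x01/0x9c/0xda)
        return "zlib", zlib.decompress(blob)
    if blob[:4] == b"PK\x03\x04":               # zip local file header
        with zipfile.ZipFile(io.BytesIO(blob)) as zf:
            first = zf.namelist()[0]
            return "zip", zf.read(first)
    try:                                        # last resort: headerless DEFLATE stream
        return "raw-deflate", zlib.decompress(blob, wbits=-15)
    except zlib.error:
        return "unknown", None
```

If none of these succeed, the library is using a non-DEFLATE LZ variant (LZW, LZSS, etc.) or a proprietary framing, and byte-level analysis is back on the table.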

 

Can you tell me what LZ variant you used? 

 

Unless we can persuade Mega to expose the Mega MetaClasses (at least the ITPM ones) and MetaAssociations, our current approach will be an interim one.   We would like to get it to work.  Your CP3 changes to BLOB storage were quite interesting.  I would suggest that you try (from a performance and compression perspective) using the native RDB compression.

 

Please know, I espouse the elegance of the Mega meta model implementation to all who will listen!

 

Below is a sample of the SQL we use to extract all Persons and their Roles who are associated with an Application object.  Easier than Mega API?  Probably not.  The general community knows much about reporting technologies.  Exposing this to them has tremendous value.

 

Now… Mega could automatically create these views for us (really, really easy), or supply Talend ETL for all of the ITPM objects, or implement Table-Valued Functions… basically, PLEASE expose your data!

 

select
   a.IDABS as [Application IDABS]
  ,a.[EAP Global App ID]
  ,substring(a.[Generic Local name], 1, patindex('%[[]%', a.[Generic Local name]) - 1) as [Application Name]
  ,p.Name as [Application Owner Name]
  ,pr.Name as RoleName
  ,p.[E-Mail]
from Application_O_P a
   join [Person Assignment to Assigned Object_L_P] paao on (paao.[Assigned Object] = a.IDABS)
   join [Person Assignment_O_P] po on (po.IDABS = paao.[Person Assignment])
   join [Mega Assignable Element to Person Assignment_L_P] marpa on (marpa.[Person Assignment] = paao.[Person Assignment])
   join [Person_O_S] p on (p.IDABS = marpa.[Mega Assignable Element])
   join [Role to Person Assignment_L_P] rpa on (rpa.[Person Assignment] = paao.[Person Assignment])
   join [Person Role_O_S] pr on (pr.IDABS = rpa.Role)
 

All of our generated views filter on:

“where end_validity = 2147483647 and IDTRAN = 0”

 

Thanks!


jftrianda
MEGA

Hi Greg

The official answer you already know is: "The direct access to SQL tables is not supported by MEGA. We do not give information to do so, because we do not guarantee any persistence of the technical implementation. Only the APIs allow you to access the actual data."

 

The algorithm used is a standard Lempel-Ziv compression, which MEGA applies and reads back through a third-party zip library.
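If that third-party library produces a standard zlib/DEFLATE stream (an assumption - the answer above says only "Lempel-Ziv" and "zip library"), then decoding an A_BLOB row would reduce to something like this sketch:

```python
import zlib

def read_blob_value(blob_bin_value, blob_compressed):
    """Hypothetical decoder for an A_BLOB row, taking BLOB_BIN_VALUE and
    BLOB_COMPRESSED as fetched from the RDB. Assumes a zlib/DEFLATE
    stream, which is NOT confirmed by MEGA; the text encoding of the
    resulting bytes would also need to be verified against real data."""
    raw = bytes(blob_bin_value)
    if blob_compressed == 1:
        raw = zlib.decompress(raw)
    return raw
```

If `zlib.decompress` raises on real rows, the library is using a different LZ framing and this shortcut does not apply.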

Yet several customers who attempted to build their own extraction tool to fill an external datamart have failed (some may have succeeded, but we don't know of them).

 

The best and compliant way to extract data from the MEGA repository for data analysis is to configure an ETL, which will load MEGA data into an RDBMS repository using our APIs.

 

We still recommend that you have a look at the ETL provided for MEGA Business Objects advanced reporting, which MEGA commits to keep up to date and which is based on Talend Open Source.

It might give you a less risky, no-coding way of getting data out of the repository. It works through the APIs, which is much safer, though admittedly slower.

 

Thank you.