Tuesday, 10 March 2009

Analytic Workspace Manager Locking

So, following on from my previous blog about plugins for the AWM, I thought I would provide a little information about something I learned whilst writing my own plugin.

Writing a plugin for the AWM and testing it can be cumbersome if you have to constantly start the AWM, log in and then navigate to a node in the tree that allows you to activate your plugin. Arguably a better way is to create your own testbed to simulate the AWM environment. That way you can run your code with less effort and interaction, and you also gain the ability to step through and debug your code if, like me, you need to.

In my case I simply created a Java class that opened an Oracle database connection and then called my plugin code, passing in the connection and anything else that it required. This was all fine and dandy whilst developing the code and afforded me the luxury of being able to test my code with the minimum of effort. However, when I eventually placed my code into the plugin framework (which, to be fair, I was doing simultaneously anyway) my code didn't exactly stop working, but rather... just seemed to stall.
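
To give you an idea, a bare-bones testbed might look something like the following. Everything in it (the connection details, the parameter key and the runPlugin() stub) is purely illustrative and stands in for your own plugin entry point rather than anything in the actual AWM plugin API:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.HashMap;
    import java.util.Map;

    // A minimal standalone testbed: connect to the database the way the AWM would,
    // then call the plugin logic directly. The URL, credentials, parameter key and
    // runPlugin() stub are all placeholders, not part of the AWM plugin API.
    public class PluginTestbed {

        public static void main(String[] args) throws Exception {
            Class.forName("oracle.jdbc.OracleDriver");
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@localhost:1521:orcl", "olapuser", "password");
            try {
                // Fake up whatever the AWM would normally pass to the plugin.
                Map<String, Object> params = new HashMap<String, Object>();
                params.put("AW_NAME", "OLAPUSER.MYAW");  // illustrative key/value only

                runPlugin(conn, params);
            } finally {
                conn.close();
            }
        }

        // Stand-in for the real plugin entry point - replace this with a call into
        // your own plugin class so you can step through it in the debugger.
        private static void runPlugin(Connection conn, Map<String, Object> params)
                throws Exception {
            System.out.println("Plugin called with " + params + " on " + conn);
        }
    }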

This was masked by the fact that I was using multiple threads, and so parts of the plugin, though still functioning, produced what were arguably incorrect results. After a little investigation (okay, okay, more than a little and certainly more than I would have liked) I discovered the reason...

There appears to be no fine-grained locking in Oracle OLAP, certainly as far as the AWM is concerned.

So what do I mean by that? Well, just that it is geared towards a single developer updating a given analytic workspace. Multiple updates to different parts of the workspace simultaneously are not supported nor, as far as I can tell, even possible.

So what does that mean to us? Well, when the AWM connects to the analytic workspace it attaches it in "read/write" mode. This means that plugin code can only affect the contents of that analytic workspace through the connection handed to it when the plugin is invoked. Retrieving or creating a new connection using the data source made available in the Map object passed to the plugin means that any code executed over that connection is in fact running in a different session. As such, any code in that second session that tries to work on the analytic workspace already attached read/write by the AWM session will simply block; it cannot make changes to, or even process, any objects in that analytic workspace.
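
If you want to see this for yourself, you can query the OLAP dictionary views to check which session has the workspace attached and in what mode. Here is a rough sketch; V$AW_OLAP and ALL_AWS and their columns are as I remember them from the 10g/11g dictionary, so do verify them against your own instance:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Lists which sessions have which analytic workspaces attached, and in what
    // mode. V$AW_OLAP and ALL_AWS are as I recall the 10g/11g dictionary views;
    // check the column names against your own database.
    public class ShowAttachedAws {

        public static void list(Connection conn) throws Exception {
            PreparedStatement ps = conn.prepareStatement(
                "SELECT o.session_id, a.owner, a.aw_name, o.attach_mode" +
                "  FROM v$aw_olap o, all_aws a WHERE a.aw_number = o.aw_number");
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getInt(1) + "  " + rs.getString(2) + "."
                        + rs.getString(3) + "  " + rs.getString(4));
            }
            rs.close();
            ps.close();
        }
    }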

At first I thought that that was it, game over, but there is a "trick". This "trick", albeit a bit of a hack and one not to be taken lightly, involves the use of the DBMS_AW package. More specifically, the AW_DETACH, AW_UPDATE and AW_ATTACH sub-programs allow us to detach the analytic workspace over the main connection provided by the AWM to the plugin code. We can then re-attach it on the new connection in read-only mode (or read/write if we want) and perform whatever actions we like on the given analytic workspace. The AW_UPDATE sub-program is required in order to save any changes already made to the analytic workspace whilst it was attached in read/write mode, before detaching it.
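
A sketch of the sort of thing I mean is shown below. The workspace name, the two connections and the helper method are all placeholders of my own, and the AW_UPDATE/AW_DETACH/AW_ATTACH calls are as I understand the DBMS_AW package, so check them against your own database version before relying on any of this:

    import java.sql.CallableStatement;
    import java.sql.Connection;

    // A sketch of the detach/re-attach "trick". The AW name, connections and
    // error handling are illustrative only; the DBMS_AW sub-program names are as
    // I understand them in 10g/11g, so verify them first.
    public class AwLockJuggler {

        private static final String AW = "OLAPUSER.MYAW";  // hypothetical workspace

        // Called with the connection the AWM handed to the plugin (awmConn) and a
        // second connection/session created by the plugin itself (workerConn).
        public static void runOnWorker(Connection awmConn, Connection workerConn)
                throws Exception {
            // 1. Save pending changes and let go of the AW on the AWM's session.
            exec(awmConn, "BEGIN dbms_aw.aw_update('" + AW + "'); "
                        + "dbms_aw.aw_detach('" + AW + "'); END;");
            try {
                // 2. Attach it on the worker session, read-only here (FALSE).
                exec(workerConn, "BEGIN dbms_aw.aw_attach('" + AW + "', FALSE); END;");
                try {
                    // ... do whatever work the plugin needs on workerConn ...
                } finally {
                    exec(workerConn, "BEGIN dbms_aw.aw_detach('" + AW + "'); END;");
                }
            } finally {
                // 3. Always hand the AW back to the AWM session in read/write mode,
                //    otherwise the AWM carries on believing it still holds the lock.
                exec(awmConn, "BEGIN dbms_aw.aw_attach('" + AW + "', TRUE); END;");
            }
        }

        private static void exec(Connection conn, String plsql) throws Exception {
            CallableStatement cs = conn.prepareCall(plsql);
            cs.execute();
            cs.close();
        }
    }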

Note that there is a potential issue here. Detaching the analytic workspace and re-attaching it introduces the risk that, in the meantime, another session locks the workspace before the plugin secures it for its own use. Also, failing to ensure that the analytic workspace is re-attached in read/write mode before the plugin exits breaks the environment in which the AWM expects to function (i.e. the AWM still "thinks" that the workspace is attached in read/write mode and will allow a user to try to make changes even though this may not be the case).

Clearly the functionality that this provides needs to be weighed against the expected working environment and the associated risks. If the environment is inherently single-user then no harm is done. However, in a multi-user environment where multiple people are likely to request read/write locks on the same analytic workspace, well, your mileage may vary. You may be able to use transactions to mitigate some, if not all, of these risks, but that remains to be seen.

As mentioned previously, oh how nice it would have been to be able to lock a specific dimension or cube etc., which, based on the metadata stored in the database, appears quite possible, just not implemented. This could have opened up opportunities for multi-user development environments, allowing team-based development, maintenance and testing etc. What a novel concept, though sorely lacking here...

2 comments:

  1. It's because under the AW name it's just an Oracle Express database which has never supported multi-write access (and so never had to worry about write-locking).

    In my OFA/Express days this was a frequent question...

    In fact you could "accidentally" get Express to support multi-write (via the Express ClearLockPage setting in Instance Manager), but it was always a really BAD idea as you corrupted the database.

    Express has been around for 40 years in various guises, so I don't suppose they're going to change it now!

    Steve Knowles

  2. Sure, back in the IRI Express days before Oracle almost killed it in 9i, it was a really great product, one of the best if not the best.

    In 10g and certainly now 11g it has finally been resurrected. I guess my comments with regard to the locking relate to the changes that have been implemented, such as clustering using RAC and integration into the Oracle kernel etc.

    With all of that work I wouldn't have thought that multi-write access would be that big a leap. I'm guessing bigger changes are required than first meets the eye...
