
For the benefit of the 5th-year students of the ITIS in Terni who attend my Databases course, I am publishing here the slides I am using.

They are made with PowerPoint 2007 and you can download them from THIS LINK

WARNING: the slides are in continuous evolution, so consider them a "beta" version: I often modify them before each lesson and, as you will notice, I draw freely from the Net. As I modify them I will try to keep the document up to date … but I do not guarantee timely publication.

My advice is to use Google (searching for the terms in English), the right documentation (often in English), and to experiment on a DB of your own installed locally … or, better still, to create a shared one that you can all access. The reason I insist on English is simply the (much) greater availability of good, up-to-date material.

Some links that might be useful for this purpose:

I wrote a lot of stored procedures in PL/SQL in my past working life (before becoming a DBA) … entire web sites. I wrote some Java code too, but I never mixed the two in a non-canonical way (by canonical I mean: Java classes connect to the DB and call SQL or PL/SQL).

As a DBA I recently tackled the task of integrating Java stored procedures into an Oracle instance. Even if, IMHO, this is not a practice that should be overused (PL/SQL performs better and is equally powerful), it has been possible since the days when Oracle first embraced Java (version 8.1.5, 1999), and there are cases when you simply have to do it (e.g. you need to integrate FileNet Java libraries into existing PL/SQL code on a 2-tier configured system).


So I did it … it took me some time, but I did it … I faced some difficulties along the way and learned 4 lessons that I want to share here:

  1. You need to use the "loadjava" utility to load the Java classes into the DB before using them … and you might need to resize the SYSTEM tablespace and JAVA_POOL_SIZE. Be very careful with the loadjava options: -resolve -verbose -genmissing are useful; -force might be indispensable when you have to replace previously loaded classes (see support DOC ID 156439.1).
  2. You need to compile the Java classes for the JVM version of the DB they will run on (e.g. if you work on Oracle 10 then you need Java 1.4).
  3. If you want to create a PL/SQL wrapper for a Java method, that method must be a static function, otherwise you will get: ORA-29531: no method MY_METHOD_EXISTS_YFB in class MY_CLASS_THAT_HAS_IT … at runtime … So you might need to code a top-level class or wrapper with a static method.
  4. Logging and other authorizations: Oracle introduces a security layer for Oracle Database Java applications in order to grant Oracle users access to system resources … you can grant the permissions with dbms_java.grant_permission. I learned from this blog (thanks Marcelo Ochoa) that in order to use the java.util.logging package you need dbms_java.grant_permission( 'SCOTT', 'SYS:java.util.logging.LoggingPermission', 'control', '' ); other permissions you might need are: dbms_java.grant_permission('SCOTT', 'SYS:java.io.FilePermission', '/oracle/testfolder/myfile', 'read, write');
    dbms_java.grant_permission('SCOTT', 'SYS:java.util.PropertyPermission', '*', 'read,write');
    dbms_java.grant_permission('SCOTT', 'SYS:java.lang.RuntimePermission', 'getClassLoader', '' );
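To make lessons 1 and 3 concrete, here is a minimal sketch; the class `GreetingHelper`, its method, and the scott schema are made-up examples of mine, not something from the actual migration:

```sql
-- First load the class from the OS shell (lesson 1); -force replaces a
-- previously loaded copy:
--   loadjava -user scott/tiger -resolve -verbose -force GreetingHelper.class
--
-- The Java source must expose a STATIC method (lesson 3), e.g.:
--   public class GreetingHelper {
--       public static String greet(String name) { return "Hello " + name; }
--   }

-- PL/SQL call specification wrapping the static Java method:
CREATE OR REPLACE FUNCTION greet (p_name IN VARCHAR2) RETURN VARCHAR2
AS LANGUAGE JAVA
NAME 'GreetingHelper.greet(java.lang.String) return java.lang.String';
/

-- Usage:
SELECT greet('world') FROM dual;
```

If the method were not static, the CREATE FUNCTION above would compile but the call would fail at runtime with ORA-29531, exactly as described in lesson 3.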

I hope someone will find this post useful and that it will help save some time.

I became fully aware of SQL injection when I read this comic strip, some years after I had started working as a developer (mostly PL/SQL, which is probably why the strip was so funny to me):

Having written most of my code in PL/SQL, and having always taken care to use bind variables in the other languages I came across, I always considered my code free from SQL-injection risks … until last week.

Thanks to a clever colleague of mine, last week I came across this interesting article by Tom Kyte about pivoting tables with dynamic SQL, and I realized that there can be SQL-injection risks even in PL/SQL (see here for the official Oracle documentation about it).

I have seldom used dynamic SQL … especially with data coming from tables … but, honestly, I could not swear I never did it.

So welcome dbms_assert, the package that will save us from "little Bobby Tables"!
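As a minimal sketch of how dbms_assert protects dynamic SQL (the table and variable names below are made up):

```sql
DECLARE
   p_table  VARCHAR2(128) := 'EMP';   -- imagine this arrives from user input
   l_count  NUMBER;
BEGIN
   -- SIMPLE_SQL_NAME raises ORA-44003 if p_table is not a valid SQL name,
   -- so an input like 'EMP; DROP TABLE EMP' never reaches EXECUTE IMMEDIATE.
   EXECUTE IMMEDIATE
      'SELECT COUNT(*) FROM ' || SYS.DBMS_ASSERT.simple_sql_name(p_table)
      INTO l_count;
   DBMS_OUTPUT.put_line(l_count);
END;
/
```

The package offers several checks (SIMPLE_SQL_NAME, QUALIFIED_SQL_NAME, ENQUOTE_LITERAL, …); the right one depends on which fragment of the statement you are concatenating.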

Let's imagine you are a wise DBA who has to move some Oracle 10g databases lying on RAC + ASM to new storage.

If you can use Google and the Oracle support site you will have no problems: the online method suggested by Brian Wood in the Oracle forums, based on ASM rebalancing, works perfectly even in Oracle 10g (note that the rebalance power ranges from 0 to 11 in 10g), and even for large disk groups (I am talking about figures close to one terabyte).
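The rebalancing method can be sketched like this (the disk group and disk names are hypothetical, not the ones from my migration):

```sql
-- Adding the new disks and dropping the old ones in a single statement
-- triggers one rebalance that migrates the data to the new storage online:
ALTER DISKGROUP data
  ADD  DISK '\\.\ORCLDISKDATA2' NAME data_new1
  DROP DISK data_old1
  REBALANCE POWER 11;   -- 0 to 11 in 10g; higher = faster, more I/O load

-- Monitor progress; the row disappears when the rebalance is complete:
SELECT group_number, operation, state, power, est_minutes
  FROM v$asm_operation;
```

Only when v$asm_operation is empty is the old disk actually released and safe to unpresent from the hosts.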

To move the OCR and voting disks you can rely on Support note [ID 428681.1] … actually, you could, unless you are unlucky enough to have RAC installed on Windows Server 2003 R2 64-bit (it was NOT my choice).

In this case the unlucky DBA, after naming the OCR disk with guioracleobjmanager.exe and formatting it with logpartformat, will get the following error when invoking ocrconfig replace ocr \\.\<new_disk>:

PROT-21: Invalid parameter

The problem here is that even after you commit and "sync all nodes" in guioracleobjmanager on one node, the other nodes are not fully aware of the new disk!

The only way to make them aware is to restart the OracleObjectService (and therefore to stop and restart the CRS) on all the other nodes.

WATCH OUT: do not issue "ocrconfig replace ocr \\.\<new_disk>" while any of the nodes is down (I mean, its CRS is down), otherwise that node will not be able to join the cluster anymore! (I did it, and I had to revert to the old OCR disks to fix the problem.)

I designed and implemented this solution one year ago to improve the communication between two systems based on distributed, heterogeneous DBs: Oracle (10.2) RAC on the main system and SQL Server (2008 R2) on the peripheral system, located geographically far away, in a place with lower bandwidth and a 30 ms ping time (120 ms tnsping).

The SQL Server application performed many accesses to the Oracle DB through ODBC and, because of the network latency and a load well beyond the original specs, it could no longer meet its time constraints.

I decided to deploy SQL Server replication for the set of tables through which the two applications communicate.

The main trouble with this idea is that, at least in theory, you cannot implement a transactional updatable (I mean bidirectional) replication with MS replication technology when one of the nodes is not SQL Server … so I decided to build it on my own, tackling the ancient hatred between the two RDBMSs.

Here is the core of my idea :

I set up a transmit replication with an Oracle publisher and a SQL Server subscriber, and a second receive replication with inverted roles. As you can see from the picture, I had to duplicate the tables and create one copy in the Transmit schema and one in the Receive schema.

Note that TX and RX are from the Oracle point of view … not only because of my Oracle DBA background, but also because it is usually the Oracle application that initiates the communication.

The red arrow in the image is the link between the tables in the RX schema and those in the TX schema: I implemented it with FOR EACH ROW triggers firing after update/delete/insert; the triggers propagate the changes to the twin tables in the TX schema.
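A sketch of one such propagation trigger on the Oracle side; the RX/TX schema names match the picture, but the ORDERS table and its columns are made-up examples:

```sql
-- Red-arrow trigger: every change landing in the RX copy is replayed
-- into the twin TX table, where the outbound replication picks it up.
CREATE OR REPLACE TRIGGER rx.orders_to_tx
AFTER INSERT OR UPDATE OR DELETE ON rx.orders
FOR EACH ROW
BEGIN
   IF INSERTING THEN
      INSERT INTO tx.orders (id, status) VALUES (:NEW.id, :NEW.status);
   ELSIF UPDATING THEN
      UPDATE tx.orders SET status = :NEW.status WHERE id = :NEW.id;
   ELSIF DELETING THEN
      DELETE FROM tx.orders WHERE id = :OLD.id;
   END IF;
END;
/
```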

Of course I had to avoid the loop … I did it by modifying the triggers created by SQL Server on the TX tables: I simply inhibited them when the current user is the owner of the RX tables. Unfortunately this was not the only issue; here is a list of the troubles I encountered:

  1. I could not use the same tnsnames entry for the Oracle publication and the Oracle subscription … I had to duplicate it under a different name in tnsnames.ora.
  2. I could not initialize the subscriptions in the subscription creation wizard: I had to create the tables manually on Oracle and SQL Server and copy the data with other methods (the SQL Server Import Data wizard … aka DTS).
  3. Date/time datatype mismatch: on Oracle, DATE starts at 4712 BC and has a precision of 1 s … on SQL Server, DATETIME starts at 1753 AD and has a precision of about 3 ms … I had to filter out dates older than 1753 in the TX tables and use TIMESTAMP(3) in the Oracle RX tables.
  4. Character set issues: it seems that replication with an Oracle publisher has problems with some non-ASCII characters when the Oracle WE8ISO8859P1 character set is used and the SQL Server subscriber has collation Latin1_General_CI_AS.
  5. Suboptimal execution plans can pop up in the queries that the SQL Server log reader generates on Oracle: if a large number of rows are changed/inserted/deleted in a short period of time, the log reader can become very slow … this is because the statistics on the queue tables (owned by the SQL Server replication administrator user) suddenly become stale … to fix it I had to compute statistics on the tables of that user.
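For trouble 5, the fix can be sketched as follows (REPLADMIN is a hypothetical name for the SQL Server replication administrator user on the Oracle side; use your own):

```sql
-- Refresh optimizer statistics on the queue tables owned by the
-- replication administrator, so the log reader queries get fresh plans:
BEGIN
   DBMS_STATS.gather_schema_stats(
      ownname => 'REPLADMIN',
      cascade => TRUE);      -- gather index statistics too
END;
/
```

In my case this had to be repeated (or scheduled) whenever a bulk change made the queue statistics stale again.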

After a trial stage and some fine tuning, the replicated communication is now running with satisfactory performance (a round trip can take less than 10 seconds, but the throughput is much higher than with the previous communication system) and it needs little or no maintenance at all.

I thought I was … a perfect migration (from Ora9i) on the test system with the good old export/import, all tests OK, a perfect migration on the production system … ready, set, GO.

And then I started to notice red mountains on the Active Sessions chart … locks … lots of them … the customer calling and slowly slipping into panic while I was trying to understand what the f**k was going on.

As printed in the HGTTG (and on a print behind my desk): DON'T PANIC.


  • sessions blocking each other on tables they do not directly interact with
  • sessions locked waiting on the event "enq: TM – contention"
  • the only link between the locked statement and the locking object is a network of a few triggers and several foreign key constraints (yes, sometimes programmers use them)
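A quick way to see these waits, sketched with the 10g dynamic views (for TM enqueues the P2 parameter of the wait carries the object ID):

```sql
-- Which sessions are stuck on TM contention, who blocks them,
-- and which object the enqueue is on:
SELECT s.sid, s.blocking_session, o.owner, o.object_name
  FROM v$session s, dba_objects o
 WHERE s.event = 'enq: TM - contention'
   AND o.object_id = s.p2;
```

When the object that comes out is a child table in a foreign key relationship, an unindexed FK is the first suspect.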

And here I have to thank :

And to link the following pages that led me to a solution (actually I found this one first):

The moral of the story is :


Here is my light rework of Tom's script to create the missing indexes for the connected user (it generates the CREATE INDEX commands as output; review them and change the names of indexes, tablespaces, etc. as you like):

select 'CREATE INDEX I_FK'||rn||'_'||table_name||' ON '||
       table_name||' ('||ccc||') TABLESPACE MY_INDEX_TBS;' CMD
from (
select decode( b.table_name, NULL, '****', 'ok' ) status
     , a.table_name
     , a.columns ccc
     , b.columns
     , row_number() over (partition by a.table_name order by a.columns) rn
from
( select substr(a.table_name,1,30) table_name,
   substr(a.constraint_name,1,30) constraint_name,
      max(decode(position, 1,     substr(column_name,1,30),NULL)) ||
      max(decode(position, 2,', '||substr(column_name,1,30),NULL)) ||
      max(decode(position, 3,', '||substr(column_name,1,30),NULL)) ||
      max(decode(position, 4,', '||substr(column_name,1,30),NULL)) ||
      max(decode(position, 5,', '||substr(column_name,1,30),NULL)) ||
      max(decode(position, 6,', '||substr(column_name,1,30),NULL)) ||
      max(decode(position, 7,', '||substr(column_name,1,30),NULL)) ||
      max(decode(position, 8,', '||substr(column_name,1,30),NULL)) ||
      max(decode(position, 9,', '||substr(column_name,1,30),NULL)) ||
      max(decode(position,10,', '||substr(column_name,1,30),NULL)) ||
      max(decode(position,11,', '||substr(column_name,1,30),NULL)) ||
      max(decode(position,12,', '||substr(column_name,1,30),NULL)) ||
      max(decode(position,13,', '||substr(column_name,1,30),NULL)) ||
      max(decode(position,14,', '||substr(column_name,1,30),NULL)) ||
      max(decode(position,15,', '||substr(column_name,1,30),NULL)) ||
      max(decode(position,16,', '||substr(column_name,1,30),NULL)) columns
    from user_cons_columns a, user_constraints b
   where a.constraint_name = b.constraint_name
     and b.constraint_type = 'R'
   group by substr(a.table_name,1,30), substr(a.constraint_name,1,30) ) a,
( select substr(table_name,1,30) table_name, substr(index_name,1,30) index_name,
      max(decode(column_position, 1,     substr(column_name,1,30),NULL)) ||
      max(decode(column_position, 2,', '||substr(column_name,1,30),NULL)) ||
      max(decode(column_position, 3,', '||substr(column_name,1,30),NULL)) ||
      max(decode(column_position, 4,', '||substr(column_name,1,30),NULL)) ||
      max(decode(column_position, 5,', '||substr(column_name,1,30),NULL)) ||
      max(decode(column_position, 6,', '||substr(column_name,1,30),NULL)) ||
      max(decode(column_position, 7,', '||substr(column_name,1,30),NULL)) ||
      max(decode(column_position, 8,', '||substr(column_name,1,30),NULL)) ||
      max(decode(column_position, 9,', '||substr(column_name,1,30),NULL)) ||
      max(decode(column_position,10,', '||substr(column_name,1,30),NULL)) ||
      max(decode(column_position,11,', '||substr(column_name,1,30),NULL)) ||
      max(decode(column_position,12,', '||substr(column_name,1,30),NULL)) ||
      max(decode(column_position,13,', '||substr(column_name,1,30),NULL)) ||
      max(decode(column_position,14,', '||substr(column_name,1,30),NULL)) ||
      max(decode(column_position,15,', '||substr(column_name,1,30),NULL)) ||
      max(decode(column_position,16,', '||substr(column_name,1,30),NULL)) columns
    from user_ind_columns
   group by substr(table_name,1,30), substr(index_name,1,30) ) b
where a.table_name = b.table_name (+)
  and b.columns (+) like a.columns || '%'
)
where status != 'ok';

Of course it worked for me but there is no warranty that your server will not explode when you use this query or the commands generated 😉

Let's imagine you are a wise DBA, and you have a brand new shiny SQL Server 2008 R2 DB on a Windows Server 2008 R2, 64-bit OS platform … and for some INSANE reason you have to move A LOT (we are talking about some millions of rows) of data from an Oracle server to this DB … what would you do?

Before today my answer would have been: "install the appropriate (64-bit) Oracle client with the OLE DB provider for Oracle and use it as the data source in the MSSQL Import wizard (AKA DTS)" …

But, of course, the fight between Oracle and Microsoft does not make things so easy, so this solution will not work: the Microsoft OLE DB provider for Oracle returns an error when initializing, and the Oracle OLE DB provider does not show up in the list of data sources, so you will not be able to use DTS … unless … here comes the magic: you ALSO install the 32-bit Oracle client!

I found this solution in an article by Jorg Klein, where he suggested it for a similar problem.

I also adjusted the registry keys as indicated in the article … and right now my DTS import is copying rows at a decent speed between the two bitter-enemy RDBMSs.