Creating an Oracle Installation Inventory Report

One of the challenges when taking a new position as an Oracle DBA is getting a good survey of the landscape for which you are responsible. Especially in smaller shops with only one Oracle DBA there is often a dearth of reliable documentation – or any documentation at all! An initial look at a server may reveal multiple ORACLE_HOMEs, multiple databases in the same or multiple homes, separate client-only homes … you name it. Anything you can imagine, and perhaps even some things you would never imagine, all done by your predecessor.

Faced with such a situation, I needed to discover the following:

1. What ORACLE_HOMEs had been created
2. What software and version was installed in each ORACLE_HOME
3. Which ORACLE_HOMEs were being used
4. Which ORACLE_HOMEs were not being used and could be safely removed

Given all the various ways a, um, “creative” DBA can “go off the reservation” I’m not sure there is a 100% foolproof way to discover all the possible installations and relationships. However, if one starts with the assumption that the previous DBA was not deliberately devious but merely inconsistent in placing and naming his ORACLE_HOMEs, I believe that by combining information in the central inventory file with the contents of the oratab file, one can produce a reliable survey. After several partial solutions the light finally came on and I wrote the script presented here.

The end result will be a report that looks like this:

=====================================================
HOST: vblnxsrv02.localdomain
Report time: Fri Feb 26 16:23:26 CST 2016

HOME NAME: OraDB12Home1 IDX: 1

------------------------- oratab usage --------------------------
tulsa:/u01/app/oracle/product/12.1.0/dbhome_1:N
oklacity:/u01/app/oracle/product/12.1.0/dbhome_1:N
-----------------------------------------------------------------
ownership of /u01/app/oracle/product/12.1.0/dbhome_1 is
drwxr-xr-x. 70 oracle oinstall 4096 Feb 22 00:28 dbhome_1
-----------------------------------------------------------------
Oracle Interim Patch Installer version 12.1.0.1.3
Copyright (c) 2016, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version    : 12.1.0.1.3
OUI version       : 12.1.0.2.0
Log file location : /u01/app/oracle/product/12.1.0/dbhome_1/cfgtoollogs/opatch/opatch2016-02-26_16-23-29PM_1.log

Lsinventory Output file location : /u01/app/oracle/product/12.1.0/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2016-02-26_16-23-29PM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1):

Oracle Database 12c                                                  12.1.0.2.0
There are 1 products installed in this Oracle Home.


There are no Interim patches installed in this Oracle Home.


--------------------------------------------------------------------------------

OPatch succeeded.

Listing 1.

Line 2 gives the name of the host server being reported on.

Line 3 gives the time the report was created.

Line 5 gives the reference name of the ORACLE_HOME, as recorded in the central inventory.

Lines 8-9 show the oratab entries that reference this ORACLE_HOME.

Line 12 shows the Linux ownership profile of this ORACLE_HOME.

Line 14 begins an opatch lsinventory report on this ORACLE_HOME.

If the host has multiple ORACLE_HOMEs, the above would be repeated for each home.

In keeping with the core “mission” of this blog, I don’t want to just give you code and say “here, use this”.  Rather I want to educate on how and why things work the way they do. So in this article I will detail how I wrote the shell script that generates this report and explain its inner workings.  Hopefully you will learn some things about shell scripting and about how the Oracle inventory is put together.

Assumptions and Methodology

In developing this script, I assumed a global inventory identified by the file /etc/oraInst.loc. (On Solaris, this would be /var/opt/oracle/oraInst.loc.) This file points to the actual Oracle inventory location, which enumerates all the installed ORACLE_HOME directories. I also assumed that the oratab file (/etc/oratab on most *nix systems, /var/opt/oracle/oratab on Solaris) enumerates all of the installed databases and ASM homes. I am fully aware that nothing within Oracle forces the oratab file to be free of either sins of commission or sins of omission. That is, it is quite possible for oratab to NOT list an existing, running database, and equally possible for it to have entries for databases that do not exist at all. By combining information found in the central inventory with information found in oratab, we are able to produce a report that gathers all of the key information about every ORACLE_HOME on the server.
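For reference, here is a minimal sketch of the oratab format the script relies on. The SIDs and paths are illustrative only, and the snippet embeds its sample data rather than reading the real /etc/oratab:

```shell
# Hypothetical oratab content, illustrating the assumed format:
#   <SID>:<ORACLE_HOME>:<Y|N>
# (the trailing flag tells dbstart whether to auto-start that database)
oratab='tulsa:/u01/app/oracle/product/12.1.0/dbhome_1:N
oklacity:/u01/app/oracle/product/12.1.0/dbhome_1:N'

# List the distinct ORACLE_HOMEs that oratab claims are in use,
# skipping any comment lines.
printf '%s\n' "$oratab" | grep -v '^#' | cut -d ':' -f2 | sort -u
```

Two different SIDs sharing one home collapse to a single line here, which is exactly the kind of relationship the report needs to surface.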

The process

Creating the Oracle Installation Inventory

Step One – Locate the Oracle Inventory

The first thing we need to do is locate the Oracle central inventory. You might think that it is always located in the same place, and most of the time you might be correct. But we can do better than that. The file /etc/oraInst.loc contains the pointer to the inventory, wherever it may have been placed. The oraInst.loc file itself looks like this:

oracle:tulsa$ cat /etc/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall

Listing 2.

In reality, the inventory is a set of sub-directories and files rooted at the directory named by the "inventory_loc" entry. Within that directory, the file "ContentsXML/inventory.xml" contains the information we need. On my test machine, that file looks like this:

<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2011, Oracle. All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
<SAVED_WITH>11.2.0.3.0</SAVED_WITH>
<MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/db_1" TYPE="O" IDX="1"/>
<HOME NAME="OraDb11g_home3" LOC="/u01/app/oracle/product/11.2.0/dbhome_2" TYPE="O" IDX="3"/>
<HOME NAME="OraDb11g_home2" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2" REMOVED="T"/>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>

Listing 3.

(The sharp-eyed reader may begin to notice some discrepancies between the inventory shown in Listing 3 and that used in the rest of the article.  This was the inventory I worked with when I first wrote this presentation, but was not available to me on an existing system when editing for this article.  I include it here because it presents some of the complexities that could exist and had to be accounted for.)

Since we know that the file "oraInst.loc" contains a pointer to the directory containing the Oracle installation inventory, and we know the name of the actual inventory file within that designated location, we can combine the use of 'grep' and 'cut' to extract the location from oraInst.loc and assign it to an environment variable. That variable can then be used as a fully qualified name to the specific inventory file we need. Armed with this knowledge, let's start building our script.

#!/bin/sh
runlog=$(basename $0)_$(date +%Y%m%d_%H%M).log
orainstloc=$(grep inventory_loc /etc/oraInst.loc | \
cut -d "=" -f2)/ContentsXML/inventory.xml

Listing 4.

Line 1 simply directs that the script will be processed by the shell command processor ‘sh’. This will override any other settings that might have caused it to be processed by other shells, such as bash, ksh, etc.  I put it at the beginning of all of my scripts so that I can be sure which shell will be used, and thus know which variant of the shell scripting language to use.

Line 2 establishes the name of a log file and assigns it to a variable to be used later. The value is constructed from three components. The expression '$(basename $0)' returns the name of the script itself. The expression '$(date +%Y%m%d_%H%M)' returns the current date and time in yyyymmdd_hhmi format. The third component is the literal '.log'. Again, this is a line I use in all of my shell scripts to generate a name for a log file without having to hard-code it for every script.

Lines 3 and 4 are actually a multi-step process that we need to deconstruct. While we show two lines of code this is actually one command, continued across two lines simply to make it easier to read and understand.  The backslash at the end of line 3 is a continuation character, telling the shell processor that the command is continued on the next line.

Notice that, overall, the command is assigning some value to the environment variable “orainstloc”. Notice further that what is being assigned is actually two strings concatenated together. The first string is whatever is returned by the commands enclosed in the ‘$()’ assignment: $(grep inventory_loc /etc/oraInst.loc |cut -d “=” -f2) and the second is the literal ‘/ContentsXML/inventory.xml’.

Digging further, the command returning the first part of the string, enclosed by $(), is itself actually using two commands to return a value. The 'grep' command reads the file /etc/oraInst.loc and returns the entire line that contains the string 'inventory_loc'.  As we saw in Listing 2, line 2, that is the string:

inventory_loc=/u01/app/oraInventory

That string is then piped to the ‘cut’ command, where we define the equal sign as a field delimiter, (-d “=”) and extract the second field (-f2). That will return the value

/u01/app/oraInventory

If we were to execute just the above statement at a command prompt then show the value assigned we see this (note that the “>” at the beginning of line 2 is the shell continuation prompt, not part of the command itself):

oracle:tulsa$ orainstloc=$(grep inventory_loc /etc/oraInst.loc | \
> cut -d "=" -f2)/ContentsXML/inventory.xml

oracle:tulsa$ echo $orainstloc
/u01/app/oraInventory/ContentsXML/inventory.xml

Listing 5.

So at this point, we have determined the fully qualified name of the inventory file and assigned that to an environment variable, “orainstloc”. The only hard-coded assumption is the location of the oraInst.loc file.
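The script as written trusts that the derived path actually exists. As a defensive sketch (my addition, not part of the article's script), we can verify the file is readable before parsing it. A throwaway oraInst.loc is built under a temp directory here so the example is self-contained and never touches the real /etc/oraInst.loc:

```shell
# Build a disposable oraInst.loc and inventory file for demonstration.
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/oraInventory/ContentsXML"
echo '<INVENTORY/>' > "$tmpdir/oraInventory/ContentsXML/inventory.xml"
printf 'inventory_loc=%s/oraInventory\ninst_group=oinstall\n' "$tmpdir" \
  > "$tmpdir/oraInst.loc"

# Derive the inventory file name exactly as the article does...
orainstloc=$(grep inventory_loc "$tmpdir/oraInst.loc" | \
cut -d "=" -f2)/ContentsXML/inventory.xml

# ...then confirm it is readable before relying on it.
if [ -r "$orainstloc" ]; then
  echo "Inventory file found: $orainstloc"
else
  echo "Cannot read inventory file: $orainstloc" >&2
  exit 1
fi
```

On a real server you would point the grep at /etc/oraInst.loc instead of the throwaway file.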

Step Two – Parse the Oracle Inventory

Now that we have determined the location of the inventory file, we need to extract from it the locations of every ORACLE_HOME.

We are only interested in information from the lines in the <HOME_LIST> section. Notice in Listing 3, lines 10-12, that there are a total of three separate ORACLE_HOMEs listed.  Further, if you scroll to the right you will see that one of them is tagged as having been removed (REMOVED="T"). We can get the information we need by using grep to select only the lines that contain the string 'HOME NAME=', and from that list remove the lines that include 'REMOVED'. We accomplish that with the next line we add to our script:

#!/bin/sh
runlog=$(basename $0)_$(date +%Y%m%d_%H%M).log
orainstloc=$(grep inventory_loc /etc/oraInst.loc | \
cut -d "=" -f2)/ContentsXML/inventory.xml
grep "HOME NAME=" $orainstloc |grep -v "REMOVED" 

Listing 6.

At line 5, ‘grep’ is used to read the file indicated by the environment variable ‘orainstloc’ (which we had assigned in the previous command) and extract only those lines that contain the string ‘HOME NAME=’. That list is piped to a second grep with the ‘-v’ switch, which tells grep to return every line that does not contain the specified string, ‘REMOVED’.

After completing the above step, the final 'grep' returns the following:

<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/db_1" TYPE="O" IDX="1"/>
<HOME NAME="OraDb11g_home3" LOC="/u01/app/oracle/product/11.2.0/dbhome_2" TYPE="O" IDX="3"/>

Listing 7.

Notice that I did not actually capture that output by redirecting it to a file.  Instead, I am going to just pipe that output to the next step.
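If you want to experiment with this two-stage grep without a real inventory file, the sample lines (copied from Listing 3) can be embedded directly:

```shell
# Two sample HOME entries: one live, one tagged REMOVED="T".
inv='<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/db_1" TYPE="O" IDX="1"/>
<HOME NAME="OraDb11g_home2" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2" REMOVED="T"/>'

# Keep only HOME NAME= lines, then discard any marked REMOVED.
printf '%s\n' "$inv" | grep "HOME NAME=" | grep -v "REMOVED"
```

Only the OraDb11g_home1 line survives the second grep.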

Step 3 – Create a Script to Write the Report

Now that we have captured the names and locations of all of the installed ORACLE_HOMEs, we can leverage the power of the ‘awk’ utility to use that information as the basis for creating a second script that will actually create our report on each ORACLE_HOME.

This step is rather lengthy, but it is worth following in detail.

Awk searches one or more files (or other input stream) for records that match the specified patterns. When a match is found it performs specified actions using the matching records. In our usage here, we provide no matching pattern, so it processes every record in the input stream.  To do that, my script now looks like this:

#!/bin/sh
runlog=$(basename $0)_$(date +%Y%m%d_%H%M).log
orainstloc=$(grep inventory_loc /etc/oraInst.loc | \
cut -d "=" -f2)/ContentsXML/inventory.xml
grep "HOME NAME=" $orainstloc |grep -v "REMOVED" | \
awk -F\" '{print \
  "echo ==================================================" RS \
  "echo HOST: $(hostname)" RS \
  "echo Report time: $(date)" RS \
  "echo " RS \
  "echo HOME NAME: " $2 "   IDX: " $8 RS \
  "echo "  RS \
  "export ORACLE_HOME=" $4 RS \
  "echo ------------------ oratab usage ------------------" RS \
  "grep $ORACLE_HOME: /etc/oratab" RS \
  "echo --------------------------------------------------" RS \
  "echo ownership of " $4 " is " RS \
  "export mybase=$(basename $ORACLE_HOME)" RS \
  "ls -l " $4"/.. |grep -v total|grep $mybase" RS \
  "echo --------------------------------------------------" RS \
  "$ORACLE_HOME/OPatch/opatch lsinventory"}' 
echo Report file is at $runlog
exit

Listing 8.

It is very important to keep in mind that here we are using a script to write a script. We must pay close attention to which commands are executed now to create the second script, and which commands become part of that second script, to be executed at a later step.

On line 5 I have added a pipe to accept the output of the grep commands, and a line continuation.  Using the pipe, the output of the final 'grep' on line 5 becomes the input stream for the 'awk' command on line 6. With the line continuations, lines 5 through 21 become one lengthy, complex command.

The back-slash at the end of each line is a continuation mark. Because the whole awk program is enclosed in single quotes, the shell passes these backslash-newline pairs through to awk, which treats them as its own line continuations. The separation into lines is purely for our own sanity in reading the awk script; it has no effect on the output.

awk -F\" '{print \

Line 6 specifies the ‘awk’ command itself. The -F switch indicates that we are providing a character to be used as a field delimiter when processing the records of the input stream. In our case we want the field delimiter to be a double-quote mark. Since the double-quote itself has special meaning to the shell processor, we have to ‘escape’ it with the backslash to tell the shell processor to treat it as any other character. The remainder of line 6 begins the actual command sequence to awk, telling it to print whatever follows.

Lines 7 through 21 indicate what awk is to print (write) to the output stream. Since we are using awk to create a secondary shell script, "what" we are printing is a series of shell commands. For convenience, I have made a separate line in my awk script for each line of the output stream. To actually create separate lines (separate commands) in the script being created, each ends with the awk built-in variable RS, the record separator (a newline by default). Awk includes its value in the output stream, so whatever follows the RS begins a new line in the output. Scanning lines 7 through 21 you can easily see the script that we are building.
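A throwaway one-liner (not part of the report script) shows RS at work on its own:

```shell
# RS defaults to a newline, so concatenating it into the print output
# splits one print statement into two output lines.
echo x | awk '{print "first command" RS "second command"}'
```

The single print produces two lines, "first command" and "second command", which is exactly how the generated script ends up with one command per line.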

My detailed description of lines 7 through 21 will address each line in the context of its role in the script being created by awk.

  "echo ==================================================" RS \

Line 7 will ‘echo’ a visual break before each section in the final report. A ‘section’ is all of the information being reported on a given ORACLE_HOME, as listed in the inventory.

  "echo HOST: $(hostname)" RS \

Line 8 will report the value of whatever is returned by the command ‘hostname’.

  "echo Report time: $(date)" RS \

Line 9 will write a command to report the current date and time.

  "echo " RS \

Line 10 will echo a blank line.

  "echo HOME NAME: " $2 "   IDX: " $8 RS \

In Line 11 awk will include the values of fields 2 and 8 (of the input record that was piped from grep on line 5) when it writes the ‘echo’ command. A close examination of the input file shows field 2 to be the name of the ORACLE_HOME being processed. Note that this is just a reference name, not the actual ORACLE_HOME directory specification. Field 8 is the index for this ORACLE_HOME within the inventory.
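To confirm which field lands where when a HOME line is split on double quotes, we can feed one sample line (taken from Listing 3) through the same -F\" splitting and print every field with its number:

```shell
# Split a sample inventory HOME line on double quotes and number the
# fields, to show why $2 is the home name, $4 the path, and $8 the index.
echo '<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/db_1" TYPE="O" IDX="1"/>' | \
awk -F\" '{ for (i = 1; i <= NF; i++) print i ": " $i }'
```

Field 2 comes out as OraDb11g_home1, field 4 as the directory path, and field 8 as the index value 1.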

  "echo "  RS \

Line 12 will echo a blank line in the final report.

  "export ORACLE_HOME=" $4 RS \

In line 13 awk will include the value of field 4 from our input record when it writes the ‘export’ command. A close examination of the input file shows field 4 to be the actual ORACLE_HOME directory specification.

  "echo ------------------ oratab usage ------------------" RS \
  "grep $ORACLE_HOME: /etc/oratab" RS \
  "echo --------------------------------------------------" RS \

Lines 14 and 16 write some visual separators to enclose the output of line 15.

Line 15 writes a ‘grep’ command, to print the lines from /etc/oratab that relate to the ORACLE_HOME being processed.

  "echo ownership of " $4 " is " RS \

Line 17 again uses field 4 of our input record, along with the string literal ‘ownership of ‘.

  "export mybase=$(basename $ORACLE_HOME)" RS \

Line 18 sets an environment variable, ‘mybase’ to the value of the ORACLE_HOME directory, stripped of its parent directories.  This is then used in the next line of the script being written.
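If 'basename' is unfamiliar, this one-liner shows its effect:

```shell
# basename strips everything up to and including the last slash,
# leaving only the final path component.
basename /u01/app/oracle/product/12.1.0/dbhome_1
```

The result is just "dbhome_1", which is what the generated script greps for in the parent directory listing.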

  "ls -l " $4"/.. |grep -v total|grep $mybase" RS \

Line 19 produces a listing of the directory that is the immediate parent of $ORACLE_HOME, stripped of all the directories except $ORACLE_HOME.  The final result shows the access profile of $ORACLE_HOME.

  "echo --------------------------------------------------" RS \

Line 20 produces another visual break in the final report.

  "$ORACLE_HOME/OPatch/opatch lsinventory"}' 

Line 21 executes Oracle's 'opatch' utility with the 'lsinventory' option to report the full inventory information for the ORACLE_HOME being processed.

So at this point our script has dynamically created a set of shell commands that will themselves create our report.  If we were to capture those commands in a file (which we did not do), the file would look something like this for an installation with a single ORACLE_HOME:

 
echo =====================================================
echo HOST: $(hostname)
echo Report time: $(date)
echo
echo HOME NAME: OraDB12Home1   IDX: 1
echo
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
echo ------------------------- oratab usage --------------------------
grep $ORACLE_HOME: /etc/oratab
echo -----------------------------------------------------------------
echo ownership of /u01/app/oracle/product/12.1.0/dbhome_1 is
export mybase=$(basename $ORACLE_HOME)
ls -l /u01/app/oracle/product/12.1.0/dbhome_1/.. |grep -v total|grep $mybase
echo -----------------------------------------------------------------
$ORACLE_HOME/OPatch/opatch lsinventory

Listing 9.

Step 4 – Execute the Commands to Write the Report

In Listing 8 I showed how to use awk to create a set of commands, using information found in the records that we extracted from the inventory file.  As mentioned, we did not actually write those generated commands to a file and did nothing to execute them.  In an earlier edition of this script, I redirected the output of awk to a file, then executed that file.  That presented a few other problems for a later enhancement, so I replaced the redirection to a file with a pipe to 'sh', the shell command processor. With that, along with a couple of other changes that went along with it, my complete script looks like this:

#!/bin/sh
runlog=$(basename $0)_$(date +%Y%m%d_%H%M).log
orainstloc=$(grep inventory_loc /etc/oraInst.loc | \
cut -d "=" -f2)/ContentsXML/inventory.xml
grep "HOME NAME=" $orainstloc |grep -v "REMOVED" | \
awk -F\" '{print \
  "echo ==================================================" RS \
  "echo HOST: $(hostname)" RS \
  "echo Report time: $(date)" RS \
  "echo " RS \
  "echo HOME NAME: " $2 "   IDX: " $8 RS \
  "echo "  RS \
  "export ORACLE_HOME=" $4 RS \
  "echo ------------------ oratab usage ------------------" RS \
  "grep $ORACLE_HOME: /etc/oratab" RS \
  "echo --------------------------------------------------" RS \
  "echo ownership of " $4 " is " RS \
  "export mybase=$(basename $ORACLE_HOME)" RS \
  "ls -l " $4"/.. |grep -v total|grep $mybase" RS \
  "echo --------------------------------------------------" RS \
  "$ORACLE_HOME/OPatch/opatch lsinventory"}' | sh |tee -a $runlog
echo Report file is at $runlog
exit

Listing 10.

At line 21 I have added a pipe to send the output of the awk command (that began on line 6) to the command processor ‘sh’.  This allows us to create the command script without actually writing it to a file.  I have also added a pipe to the ‘tee’ command so that as the script is processed by sh, the output goes both to stdout (the screen) and to a file. The name of the file is indicated by the value of the variable ‘runlog’, which we had set on line 2.

On line 22, I simply report the name of the log file that was created on line 21.
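The "script writing a script" pattern can be seen in miniature with a throwaway example (not part of the report script): awk emits shell commands as plain text, and the pipe to sh executes them immediately, with no intermediate file on disk.

```shell
# awk turns each input record into an echo command; sh then executes
# the generated commands as they arrive through the pipe.
printf 'alpha\nbeta\n' | awk '{print "echo processed " $1}' | sh
```

Dropping the final "| sh" is also a handy debugging trick for the full script: you see the generated commands themselves instead of their output.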

Conclusion

I trust others might find this script useful. I’m quite certain that those more fluent in shell scripting than I (particularly in the use of awk) will find some of my coding to be rather primitive and can offer a more elegant solution based on this work. I would be more than happy to see any substantive improvements.

I am going to start working on a functional equivalent for Windows installations.  Given the limitations of the Windows command processor and the complexity of the Windows registry, it will be rather more challenging.

If you have any observations or questions, please feel free to add a comment.
