Tuesday, August 28, 2012

Oracle: OFA Part 2

Dear Readers,

My name is Franz Devantier, creator of this blog.  I am an Oracle Certified Professional (OCP DBA 11g) Security DBA.  I will be sharing with you the basic duties of an Oracle DBA, and also some of the undocumented, and not so well known tasks. 

OFA – Optimal Flexible Architecture
(Part 2, Operating System Configuration)

OFA – Note 1
Name the mount points in such a way that you get an idea of the contents by looking at the name.
For example:  /AB
A: a site-specific identifier
B: a content-specific identifier

On Solaris, /etc/vfstab shows the mapping of mount point names to device names.  Each row in the vfstab maps a mount point to a device name/disk slice.
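For example, a vfstab entry mapping a disk slice to the mount point /u01 might look like this (the device names here are hypothetical):

#device to mount    device to fsck       mount point  FS type  fsck pass  mount at boot  options
/dev/dsk/c0t0d0s5   /dev/rdsk/c0t0d0s5   /u01         ufs      2          yes            -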

On Windows, the equivalent is drive letters such as C:, D:, E:, F: and G:.

Two or more applications may live together in the same mount point or drive: obviously not in the same directory, but each in its own sub-tree of the mount point/drive.  In that case, the base directory should not name one application and not the other.

For example, if the sub-tree directories contained <datafiles> on one branch and <datapump> on another branch, then the name of the root directory should, as far as possible, describe the content of both sub-trees, or else be a neutral alternative, rather than describing one and not the other.
“/oracle/datafiles” would give the impression that only Oracle files were involved, and “/backups/datapump” would create the impression that there were only backup files in this structure.

So you would need a neutral base directory name like /u01 or /u02: “/u02/oracle/datafiles” and “/u02/backups/datapump”.  Now there is no reason to become confused and make an expensive mistake.

Storing files from two or more databases on the same drive compromises the best practice of avoiding performance bottlenecks.  The best practice of minimizing the impact of disk failure is also compromised, because you now stand to lose more than one database if you lose one disk.
To minimize performance bottlenecks with multiple databases on one computer, you will need to install additional hard drives.  To minimize the impact of disk failure, you will also need additional hard drives.  Without additional resources, your database environment is compromised.  The best thing to do in such a case is to prepare a document describing the level of risk that the business is taking with its database installation.  If the business decides not to provide the needed hardware resources, then keep your email correspondence for when a disk fails, so that you can prove that you warned them about the possibility.

Remember that mount points abstract the underlying hardware, so don’t name them in such a way as to mirror the underlying hardware.

Login Home Directories
On older UNIX computers, home directories were placed in “/usr”.  The problem with this arrangement is that the entire “/usr” sub-tree may be replaced during operating system upgrades.

A better login home directory would be “/u” or “/home”.  Proper sizing of file systems or drives will make the long-term operation of a database installation run a lot more smoothly.

In Oracle Applications (E-Business Suite), the content of the database and application has grown bigger and bigger over time, which makes it even more important to size the hardware properly before we start installing Oracle.  With VLDB installations, where databases run to a few terabytes and are moving up towards a few hundred terabytes, it becomes even more important to design the Oracle database installation properly.

Best practice requires that it be possible to exchange hardware components without having to rebuild the applications that refer to those hardware components.

OFA Standard 5 requires that it be possible to distribute both the collection of home directories and the contents of an individual home directory across two or more disk drives.

Franz Devantier,
Need a database health check?



Sunday, August 26, 2012

Oracle: OFA (Optimal Flexible Architecture) Part 1

Dear Readers,

My name is Franz Devantier, creator of this blog.  I am an Oracle Certified Professional (OCP DBA 11g) Security DBA.  I will be sharing with you the basic duties of an Oracle DBA, and also some of the undocumented, and not so well known tasks. 

OFA – Optimal Flexible Architecture
(Part 1, Operating System Configuration)

The OFA Standard is a set of configuration guidelines that will give you faster, more reliable Oracle databases that require less work to maintain.  The “Optimal Flexible Architecture” described in the OFA Standard is built into the Oracle configuration tools and documentation on all open systems ports.  If a system is set up using OFA, then you will understand the layout, regardless of which configuration is used.  You will, for example, immediately know where the alert log and the network configuration files are, without having to search through the directories for them.  If you have ever worked on a site where the OFA standard was not implemented, you will have found that you needed to do a lot of extra work just to find where the various components of the Oracle implementation were.

Without OFA, these are the issues that you can expect: 
  • Routine administrative tasks like software and data backup are performed incorrectly or inefficiently, which could result in severe database and business losses.
  • Having multiple databases on one physical server can be a nightmare, and even result in corruption of production data.
  • Difficulty in administering database segment growth can result in repeated application failure, until you get the configuration into a workable condition.
  • Sub-optimal database performance.

Therefore an entire suite of problems can be avoided by applying tried and tested knowledge of how to optimize Oracle’s relationship with its host operating system.  From Oracle 7 onwards, the configuration and installation tools do most of this automatically for you.  However, you still have the opportunity to override this set of best practices if you are required to do so.

If you are installing an Oracle Database, then you are probably going to want to install the maximum sized database and application that your specific computer can handle. 

If you only intend to use a fraction of your computer’s resources for the database, then you should consider Oracle VM.  With Oracle VM, instead of buying additional hardware, you can virtualize your existing architecture, and so make optimal use of your current hardware.

Before you can install the Oracle database, you must connect and configure the peripheral devices on the computer.  You must configure disk swap space, as well as the file systems or directories that you are going to use.  Plan your data storage space, and get your network up and running.  Check the installation manual for your specific operating system, and make sure you have implemented all the prerequisites before you start installing Oracle on the computer.  Windows typically has few prerequisites, whereas Unix/Linux may have substantial ones.  So I will probably focus more on Unix than on Windows.

Mount Points
You need to create names and potential sizes for your file system mount points.  In Windows, you would look at your drives, and decide on which drive to place which component.  You need to decide where to put your applications, data, Oracle executables, archive log files, and control files, amongst other things.

  1. Standard 1 - Sufficient Space:  There must be sufficient space on the file systems or disk drives for projected database growth.  You must determine how the configuration will be affected if you add additional hardware, such as disk drives.
  2. Standard 2 - Performance Bottlenecks:  The aim is to minimize performance bottlenecks.  To do this, you need to be able to distribute the I/O load over as many physical disk drives as possible.  If you are using a SAN, ASM (Automatic Storage Management), RAID devices, or volume managers, then your level of control over this will be limited.
  3. Standard 3 - Minimize Hardware Costs:  You can minimize hardware costs by right-sizing your hardware devices, or, if you already have available hardware, by considering virtualization.  Oracle VM is, in my opinion, the best way to go: it is basically an improved and re-packaged Xen hypervisor, with Oracle Linux or Oracle Solaris as the operating system.  Another popular choice for virtualization is the VMware ESX hypervisor.  Personally, I would not implement Microsoft Hyper-V as a virtual platform at this point in time; however, as new products come out, it would be good to keep re-evaluating them.
  4. Standard 4 - Minimize the Impact of Disk Failure:  You should have redo log files and control files duplexed on separate file systems or physical disk drives (see the sketch after this list).  Keeping your backups on physical devices separate from your live data files is essential.
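As a minimal sketch of Standard 4 (the mount points, database name PROD, file names and sizes here are all hypothetical), you could multiplex the control files across mount points in the parameter file, and duplex each redo log group across two drives:

control_files = ('/u02/oradata/PROD/control01.ctl',
                 '/u03/oradata/PROD/control02.ctl',
                 '/u04/oradata/PROD/control03.ctl')

SQL> alter database add logfile group 4
  2  ('/u02/oradata/PROD/redo04a.log', '/u03/oradata/PROD/redo04b.log') size 50m;

With this layout, the loss of any single drive leaves a surviving copy of the control file and of every redo log group.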

Balance the cost of having performance bottlenecks against the savings from minimizing hardware costs.

Franz Devantier,
Need a database health check?



Friday, August 24, 2012

Oracle: PL/SQL

Dear Readers,

My name is Franz Devantier, creator of this blog.  I am an Oracle Certified Professional (OCP DBA 11g) Security DBA.  I will be sharing with you the basic duties of an Oracle DBA, and also some of the undocumented, and not so well known tasks. 

I will make a deal with you:  If you refer me to a company that needs database support, from a few hours per week to full time, and I am able to sign a contract with them, then I will give you 10% of the monthly contract or deal price every month.  When the contract ends and we re-sign it, I will again give you 10% of the monthly contract price.  This will go on until the company no longer employs or contracts me or my agents to look after their databases.
I can do this because that 10% is my marketing budget.  When we re-sign the contract in the future, it may depend on you giving the thumbs-up again, and that is worth 10% of the monthly contract price, paid to you as commission.
Contact: Franz

PL/SQL – Procedural Language/SQL
(Introduction)

PL/SQL is tightly integrated with SQL.  SQL is the most widely used database manipulation language.
  • PL/SQL allows you to use all of the SQL data manipulation capabilities.  It allows you to manipulate transactions and cursors.  You can access all the SQL functions and operators, and you also have access to all the pseudo-columns, like SYSDATE.
  • Most importantly, PL/SQL supports all of the SQL data types, like NUMBER and VARCHAR2, as well as its own declaration attributes, like %TYPE and %ROWTYPE.
  • One outstanding feature of PL/SQL is that it lets you run a SQL query in a cursor, and then process the rows of that cursor one by one in a loop (see the sketch below).
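As a minimal sketch (using the classic EMP demo table as an assumption), a cursor FOR loop fetches and processes rows one at a time:

DECLARE
   CURSOR c_emp IS
      SELECT ename, sal FROM emp WHERE sal > 2000;
BEGIN
   FOR r_emp IN c_emp LOOP    -- each iteration fetches the next row of the cursor
      DBMS_OUTPUT.PUT_LINE(r_emp.ename || ' earns ' || r_emp.sal);
   END LOOP;
END;
/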

PL/SQL can be written to perform well in high-performance environments.
  • You can send a whole block of statements to the database at once, which reduces traffic between the database and the application.
  • You can use bind variables, which allow Oracle to re-use the execution plans of SQL statements; this can substantially improve performance.  Without bind variables, Oracle would need to re-parse the SQL statements repeatedly (see the sketch after this list).
  • Sub-programs are compiled once and can be executed repeatedly, and compiled sub-program code can be shared between multiple users at runtime.  This reduces overhead and increases performance substantially.
  • The optimizer in the PL/SQL compiler can re-arrange the compiled code for higher performance.
  • For optimum performance, you need to design your code for performance, using the high-performance features of PL/SQL.
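As a minimal sketch of bind variables in dynamic SQL (again assuming the EMP demo table), EXECUTE IMMEDIATE with a USING clause binds the value instead of concatenating it into the statement text:

DECLARE
   v_count  NUMBER;
   v_deptno NUMBER := 10;
BEGIN
   -- The :dept placeholder is a bind variable; Oracle can re-use the
   -- parsed statement for any department number.
   EXECUTE IMMEDIATE
      'SELECT COUNT(*) FROM emp WHERE deptno = :dept'
      INTO v_count USING v_deptno;
   DBMS_OUTPUT.PUT_LINE('Employees in department: ' || v_count);
END;
/

Note that ordinary static SQL inside PL/SQL binds its PL/SQL variables automatically; the explicit USING clause matters mainly for dynamic SQL.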

PL/SQL code is very much the same across different constructs, such as procedures and triggers.  The code you create in SQL Developer, or Oracle Forms, or a stand-alone procedure in SQL*Plus, or a database trigger, is very much the same, which helps to make you more productive.  The good news is that PL/SQL is exactly the same regardless of whether your Oracle database runs on Unix/Linux, Windows, or another operating system.

Because PL/SQL stored procedures are stored on the database server, instead of in the client applications, they are easier to manage and deliver higher performance.

To me, one of the exciting things about PL/SQL is that you can use it to generate web pages directly from the database.  This means that you can implement an internet-based application using only PL/SQL; you will, however, need the PL/SQL Gateway and the PL/SQL Web Toolkit to produce a high-class application.

The Features of PL/SQL
  • You combine the power of SQL with the power of a procedural language.
  • You can declare variables, cursors and constants.  You can control the flow of programs and sub-programs, and trap errors.
  • A PL/SQL program exists within a block.  The most important parts of the block are BEGIN and END; all the code is contained within this area.  Optionally you can start the block with a DECLARE section, and have an EXCEPTION section just before the END keyword.  You can even place a label just before the top of the block.


For example:
<<label>>   -- optional
DECLARE     -- optional
   Local types, variables, cursors and subprograms are declared here.

BEGIN       -- required; this is where the executable part of the PL/SQL block starts
   Statements

EXCEPTION   -- optional; this is where your error handling is done.

END;        -- required; this denotes the end of the block.
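Putting those pieces together, here is a minimal runnable block (run it in SQL*Plus with SERVEROUTPUT ON; the variable name is arbitrary):

set serveroutput on

DECLARE
   v_today DATE := SYSDATE;                   -- declaration section
BEGIN
   DBMS_OUTPUT.PUT_LINE('Today is ' || TO_CHAR(v_today, 'DD-MON-YYYY'));
EXCEPTION
   WHEN OTHERS THEN                           -- error handling section
      DBMS_OUTPUT.PUT_LINE('Something went wrong: ' || SQLERRM);
END;
/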

  • A sub-program is a named PL/SQL block.  It can be either a FUNCTION, which must return a value, or a PROCEDURE, which does not return a value directly but can pass values back through OUT parameters.
  • A package is a collection of sub-programs that can be invoked from the package.  To invoke a specific sub-program within a package, you call it like this:  <package_name>.<sub-program_name>(parameters)
  • A trigger is a named PL/SQL block that is invoked in response to a specific event in the database.
  • To view direct output from a PL/SQL program, you can use, amongst other methods, the DBMS_OUTPUT package.
  • There are three categories of control statements in PL/SQL.  Combined with SQL, these three constructs enable a fully functional programming language (see the sketch after this list):
    • Conditional statements
    • Loop statements
    • Sequential control statements
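As a minimal sketch showing one statement from each category (GOTO and NULL are the classic sequential-control statements, used here purely for illustration):

BEGIN
   FOR i IN 1 .. 5 LOOP                -- loop statement
      IF MOD(i, 2) <> 0 THEN           -- conditional statement
         GOTO next_row;                -- sequential control statement
      END IF;
      DBMS_OUTPUT.PUT_LINE(i || ' is even');
      <<next_row>>
      NULL;                            -- a label must precede an executable statement
   END LOOP;
END;
/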

Franz Devantier,
Need a database health check?



Wednesday, August 22, 2012

Oracle: Buffer Cache

Dear Readers,

My name is Franz Devantier, creator of this blog.  I am an Oracle Certified Professional (OCP DBA 11g) Security DBA.  I will be sharing with you the basic duties of an Oracle DBA, and also some of the undocumented, and not so well known tasks. 


Buffer Cache

The buffer cache is one of the most important areas in the SGA (System Global Area).  It holds copies of data blocks, so that they can be accessed by Oracle more quickly than by reading them from disk.  The assumption here is that it is quicker to read a block that is already in memory than to read that same block from a storage device, such as a disk.

So, essentially, the buffer cache is designed to minimize physical disk I/O.  The database buffer cache is logically segmented into a number of sets.  Having multiple sets reduces contention on multiprocessor systems, which makes more and more sense as chip technology continues to improve.

So the buffer cache consists of database blocks that have been read into memory; more precisely, copies of database blocks.  If these blocks are not changed or updated, then they are eventually aged out of memory without being written back to the database blocks on disk.

Oracle uses a special mechanism to age the oldest and least-used database blocks out of memory, to make space for new database blocks to be read in.  You can think of this mechanism as a list of the database blocks in memory.  One end of the list is known as the MRU (Most Recently Used) end, and the other as the LRU (Least Recently Used) end.

Now let’s imagine that a block is read from disk into the database buffer cache.  If the buffer cache is sparsely populated, as for example when the database has just started up, then the block will be registered at the MRU end of the list.  However, if the buffer cache is full, then a buffer at the LRU end of the list must first be moved out of the buffer cache.  If that buffer has not been changed, it can simply be discarded; if it has been changed, it must first be written back to disk.  This will not always happen one block at a time; it could potentially be many blocks at a time.

The parameter db_file_multiblock_read_count will typically default to the maximum allowed by the operating system.  A value of 8 is used by Oracle for costing in its internal calculations.  It is calculated like this:  db_file_multiblock_read_count = (maximum I/O chunk size) / (db_block_size).  From 10gR2 onwards, Oracle recommends that you do not set this parameter, but allow Oracle to determine the optimal value for itself.  This takes away the headache of trying to calculate the “maximum I/O chunk size”.
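As a worked example of that formula, assuming a maximum I/O chunk size of 1 MB and the common 8 KB block size:

db_file_multiblock_read_count = 1,048,576 / 8,192 = 128

You can check the value the instance is actually using with:

SQL> show parameter db_file_multiblock_read_count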

Now here’s where it gets interesting: blocks read from disk during a full table scan are placed at the LRU end of the list, instead of the MRU end.  This prevents a full table scan from aging frequently used data blocks out of the buffer cache.  We will refer to the list as the LRU list.

The time that a block was last touched or accessed is also recorded.

Actually there are three buffer pools within the buffer cache, which are used for different purposes.  First there is the “Keep Pool”, which is designed for smaller objects, like small lookup tables.  Second there is the “Recycle Pool”, which is designed for larger objects.  Finally, most of the buffers live in the “Default Pool”.  Each pool has its own LRU list, and each pool’s LRU list is divided into a hot area and a cold area: buffers in the hot part of the LRU list are called hot buffers, and buffers in the cold part are called cold buffers.  By default, 50% of the buffers are in the hot area and 50% in the cold area of the LRU list.  If you are using multiple block sizes in your database, then you also have an additional pool for each non-default block size.  For example, your database may have 8 KB blocks by default, and you could also create tablespaces to store blocks of 2 KB, 4 KB, 16 KB and 32 KB.  There are overheads involved in adding additional block sizes to your database, so there must be a good reason for implementing them.
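As a minimal sketch (the table name, tablespace name, file path and sizes here are hypothetical), you could assign a small lookup table to the keep pool, and create a 16 KB tablespace alongside an 8 KB default, like this:

SQL> alter table lookup_codes storage (buffer_pool keep);

SQL> alter system set db_16k_cache_size = 64m;
SQL> create tablespace data_16k
  2  datafile '/u03/oradata/PROD/data_16k01.dbf' size 1g
  3  blocksize 16k;

The db_16k_cache_size parameter must be set first, because blocks of a non-default size need their own buffer pool before the tablespace can be created.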

New single blocks are inserted at the middle of the LRU list; this is also known as “midpoint insertion”.

Every buffer in the list also has a touch count.  When a buffer is accessed, its touch count is normally increased.  Buffers with a higher touch count are more important, and tend to be kept in the buffer cache in preference to buffers with a low touch count.  Most of the operations on the LRU list are protected by latches, except for the touch count update.  This means that the touch count may not always be updated: the increment can be lost if another process updates the buffer header while the attempt is made.

Finding the optimal size for the buffer cache involves many considerations.  Obviously, measuring the buffer cache hit ratio is one of them (see the query sketch below).  Another consideration is that the bigger the buffer cache gets, the more overhead there is in managing the LRU list.  There is also another list to manage, called the “dirty list”, which consists of buffers that have been changed.  The larger the buffer cache becomes, the longer the search for a free buffer becomes; this shows up in the “free buffer waits” statistic.  A bigger buffer cache also gives more work to the database writer (DBWn) background processes.  So finding the optimal buffer cache size can be a fine line between too small and too big.  If the buffer cache hit ratio is anywhere above 90%, then you have pretty much achieved the goal, provided there are no side effects as a result of the sizing.
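As a sketch, here is the classic hit ratio calculation from v$sysstat (the statistic names are standard; treat the result as a rough indicator rather than an absolute target):

SQL> select round(100 * (1 - phy.value / (db.value + con.value)), 2) as "Hit Ratio %"
  2  from   v$sysstat phy, v$sysstat db, v$sysstat con
  3  where  phy.name = 'physical reads'
  4  and    db.name  = 'db block gets'
  5  and    con.name = 'consistent gets';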

Look out for blog entry “Buffer Cache Part 2”, where we will look at other aspects of the Buffer Cache.

Franz Devantier,
Need a database health check?



Sunday, June 10, 2012

Oracle - Starting a Database

Dear Readers,

My name is Franz Devantier, creator of this blog.  I am an Oracle Certified Professional (OCP DBA 11g) Security DBA.  I will be sharing with you the basic duties of an Oracle DBA, and also some of the undocumented, and not so well known tasks. 


Starting a database

To start up an Oracle database, you first need to be properly authenticated and logged in as a DBA user with appropriate privileges.  I will not go into that now, because authenticating yourself correctly can get quite involved.  We will focus here on actually starting up.

Let’s say you are properly authenticated and logged into SQL*Plus.  This is not the only way to do it, but it is the preferred method for demonstration purposes.
SQL> startup

That is all that is needed.  Oracle will find the pfile or, I hope, the spfile in the default location, and use that to find the control files, and through them all the components it needs to start up.  If you are still using a pfile, then we need to talk.

If you look in your alert log (alert_<SID>.log), you will notice that Oracle in fact went through a few different states before the database ended up in the “OPEN” state.
  • Nomount – This is where Oracle reads the spfile (or, I hope not, the pfile).  The information in the parameter file is used to start the Oracle instance, which is the structure that Oracle creates in memory.  This state is useful for recreating the control file and for other maintenance tasks.
  • Mount – This is the state where the instance mounts the database.  The database is really the physical database files, and the instance is the memory structures; so this is where the memory structures are connected to the physical database files.  The database files are not opened in this state.  This means that even if there are corrupted datafiles, you will not be aware of them yet, because the files have been mounted but not opened.  This is a good state to proceed from for database maintenance activities, such as recovery.
  • Open – Finally the database reaches the “Open” state.  The data files and the redo log files that are listed in the control files are opened in this state.  If you do not have redo logs, you will need to issue a “resetlogs”, to recreate them in the locations specified in the control files.  You can also step through these states manually, as shown below.
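For example, you can walk the database through the states one statement at a time:

SQL> startup nomount
SQL> alter database mount;
SQL> alter database open;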

If you want to start up from a specific pfile, or start up using a pfile when your database is using an spfile, you can use this syntax.
SQL> startup pfile='<fully qualified path>/init<SID>.ora'

Then you want to create your spfile from the pfile.

SQL> create spfile from pfile;

Now your database has started up from a pfile, so you will need to restart it in the normal way to allow the database to use the spfile.

Before doing this, you would have done something like the following, to make sure that you did not lose any existing parameters from your spfile.
SQL> create pfile from spfile;

Then you would have edited this pfile in $ORACLE_HOME/dbs (Unix/Linux) or %ORACLE_HOME%\database (Windows).

You can also manually control how the database opens, by specifying whether you want to startup in nomount, mount or open state.
SQL> startup nomount

SQL> startup mount

SQL> startup open  (or simply:  SQL> startup)

The default open state is “READ WRITE”, but you can also open the database in read-only mode.  In that state, no database changes such as inserts, deletes or updates can be executed.
SQL> startup mount

SQL> alter database open read only;

Then, to change it back to READ WRITE mode, you can do a shutdown and a startup, as shown below.
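For example (shutdown immediate is the usual clean shutdown):

SQL> shutdown immediate
SQL> startup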

You can also open the database in restricted mode.  This will only allow users with the RESTRICTED SESSION privilege to connect to the database.  Typically only certain DBAs will be granted this privilege.  This is useful for preventing other users from accessing the database during maintenance operations.
SQL> startup restrict

Another useful mode is quiesce mode.  In this mode, only the SYS and SYSTEM users can issue new statements against the database; however, users that are already active on the system can continue until their sessions become inactive.  Very useful, but its application needs to be carefully planned.
SQL> alter system quiesce restricted;
SQL> alter system unquiesce;

Of course, no discussion of startup options would be complete without mentioning the startup force option.  This option is reserved for those situations in which you have a reliable backup, and for some reason the database is stuck half-way between being shut down and open.  It is a last resort, so the alert log must have been studied, and all other measures taken, before using it.  Startup force does a “shutdown abort”, followed by a normal startup.
SQL> startup force

Now we get to the really interesting startup option: “resetlogs”.  If you have performed an incomplete recovery and the database is shut down, you can do this.
SQL>  startup open resetlogs

More typically, you will be in mounted mode, having just performed an incomplete recovery.
SQL> alter database open resetlogs;

Now it is important to be aware of what “open resetlogs” does in the background.  It checks whether it has been invoked after an incomplete recovery.  If the database was shut down normally without issues, then the resetlogs will not execute.
The above SQL statement will in fact return an error something like this:
ORA-01139: Resetlogs option only valid after an incomplete database recovery.

Now let’s say that we specifically want a new incarnation of the database, so that our RMAN backup will never look for archive log files, beyond the last resetlogs.  How do we achieve this?

OK, so now we will have to simulate an incomplete recovery of the database without actually performing one.  Let’s imagine that this database is a production database; then we can’t take any chances with getting this process wrong.

More correctly, we are going to create a scenario that enables “resetlogs” to validly create a new incarnation of the database, without actually having, or performing, an incomplete recovery of the database.  Quite obviously, this is not documented in the Oracle documentation.

The obvious, but incorrect, route to follow would be:
SQL> recover database;
Doing the above from the mounted state will not work: firstly, it does not ask for an incomplete recovery, and secondly, there is nothing to recover if the database was shut down normally.
You can expect an error message something like this:
“no recovery required”
Also, if you were to try the “alter database open resetlogs” again, you would get the same ORA-01139 error as described before.

You need to do something that will get past the “resetlogs mechanism” without actually doing any recovery.  For example, from a mounted database:
SQL> recover database until cancel;
This will almost immediately return a message something like this:
“Media recovery complete”
It comes back so quickly because in fact there was nothing to do.  However, from the point of view of the “resetlogs” mechanism, an incomplete recovery has now been performed.

Now you can execute a resetlogs.
SQL> alter database open resetlogs;
After a while this should return:
“Database altered.”

So now we have successfully executed a resetlogs without actually doing a recovery.
Oracle did not need to apply any redo in this operation.  The incarnation of the database was changed, and the log sequence was reset to 1.  In other words, this is a new incarnation of the database, and therefore we can keep backups of it from this point onwards; more specifically, backups or copies of the archived log files.  Oracle will now never ask us, during recovery, to apply an archived log file that is older than the point in time of the new incarnation of the database.

From a backup and recovery perspective, this makes a lot of sense.  If the database is a few years old, the archived logs from a few years ago are long gone.  If, during a recovery process, because of corrupted backups or whatever other reason, we were asked for an archived log from the beginning, we would now only have to go back as far as the last incarnation of the database.  So our backup and recovery strategy has become more robust and reliable with this action.

Although no redo actually needed to be applied, Oracle did perform an OPEN RESETLOGS, change the activation ID, and reset the log sequence number to 1.

Here we can see that this was the 4th incarnation of this database:
SQL> select dbid, resetlogs_time, recovery_target_incarnation#, last_open_incarnation#
  2  from v$database;

      DBID RESETLOGS RECOVERY_TARGET_INCARNATION# LAST_OPEN_INCARNATION#
---------- --------- ---------------------------- ----------------------
4279484577 10-JUN-12                            4                      4

SQL> select incarnation#, resetlogs_change#, resetlogs_time, status, resetlogs_id
  2  from v$database_incarnation
  3  where incarnation#=4;

INCARNATION# RESETLOGS_CHANGE# RESETLOGS STATUS  RESETLOGS_ID
------------ ----------------- --------- ------- ------------
           4        5.9702E+12 10-JUN-12 CURRENT    785611115

We have successfully caused the “alter database open resetlogs” process to believe that we were sitting with an incomplete recovery, even though the last shutdown was a normal shutdown.  And this process has not lost any transactions.

A word of caution, though: because this is undocumented by Oracle, it should only be attempted very carefully, and after testing on a development or test system.

I have by no means exhausted the possibilities of starting up an Oracle database in this article, but hopefully the undocumented feature will prove to be useful or interesting.  Hopefully Oracle will have addressed this issue in version 12.


Franz Devantier,
Need a database health check?
devantierf@gmail.com

