Thursday, August 30, 2012

Oracle: OFA Part 3

Hi,

My name is Franz Devantier, creator of this blog.  I am an Oracle Certified Professional (OCP DBA 11g) Security DBA.

When I started working as an Oracle DBA in 2000, gathering the information I needed to perform my day-to-day duties was always time-consuming.

I spent many hours, frequently sitting up into the early hours of the night, pulling together all the information I needed.  Eventually, after a number of years, I knew most of the issues that could occur, and the amount of research became minimal.

In this blog, I will give you the details of how to carry out the common tasks, and also the not-so-common tasks, of a Database Administrator.  I present the information in an easy-to-understand format that could potentially save you hundreds of hours of research.

OFA – Optimal Flexible Architecture
(Part 3, Operating System Configuration)

OFA – Notes
Name the home directories using a matching pattern like this:
/AB/h/u
AB = mount point
h    = a name chosen from a small, fixed set of directory names
u    = user or owner of the directory

For example, the Oracle software owner's home directory could be:
/u02/app/oradev

The applications software directory could be:  
/u02/app/appldev

This means that we can place large home sub-trees at the same directory level, but on separate mount points.  This satisfies the first part of OFA Standard 5, by allowing different home directories to be distributed across different mount points, while still complying with the requirement to have sufficient space on the file systems or drives.  It also means that we can access these two large subdirectories with a single substitution pattern.  For example:  /u02/app/*
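
As a concrete sketch of this layout (a scratch directory stands in for the real filesystem, so the commands are safe to run anywhere; the owner names oradev and appldev are just examples):

```shell
# Scratch root standing in for the real filesystem; on a live
# system /u02 would be an actual mount point.
ROOT=$(mktemp -d)

# /AB/h/u: mount point /u02, h = "app", u = owner of the home
mkdir -p "$ROOT/u02/app/oradev"    # Oracle software owner's home
mkdir -p "$ROOT/u02/app/appldev"   # applications software owner's home

# A single substitution pattern reaches both large sub-trees:
ls -d "$ROOT"/u02/app/*
```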

The second part of OFA Standard 5 states that the contents of an individual home directory must be able to be distributed across two or more disk drives.
One solution is to use symbolic links to make directories appear in a single sub-tree, despite the fact that they physically reside on a different mount point.  You can achieve a similar effect in Windows by creating shared directories and mapping network drives.
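
A minimal sketch of the symbolic-link approach (two scratch directories stand in for separate mount points; the directory names are illustrative):

```shell
# Two scratch roots standing in for two mount points, e.g. /u01 and /u02.
MP1=$(mktemp -d)
MP2=$(mktemp -d)

# The home directory itself lives on the first mount point...
mkdir -p "$MP1/app/oradev"
# ...while a large subdirectory physically resides on the second.
mkdir -p "$MP2/app/oradev_big"

# The symbolic link makes it appear inside the single home sub-tree.
ln -s "$MP2/app/oradev_big" "$MP1/app/oradev/big"

ls -l "$MP1/app/oradev"
```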

To satisfy the standard of preventing performance bottlenecks, system administrators may vary the value of “h” in the pattern /AB/h/u.  Remember we said that “h” represents a finite selection of directory names, like “app” and “home”.

So let’s say that all application users’ homes match the pattern “/*/app/*”, and the normal login users’ homes match the pattern “/*/home/*”.
For example:
/u01/app/*
/u02/app/*
/u03/app/*

and

/u01/home/*
/u02/home/*
/u03/home/*

Then the system administrator can back up all application-level user data using the search string /*/app/*, and all the login users’ data using the search string /*/home/*.  Does that make sense?
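
The search strings are easy to try.  The sketch below builds a miniature version of this tree under a scratch directory (on a real system the patterns would be /*/app/* and /*/home/*, with no $ROOT prefix, and the user names are invented):

```shell
ROOT=$(mktemp -d)   # scratch root standing in for /

mkdir -p "$ROOT/u01/app/oradev"  "$ROOT/u02/app/appldev"
mkdir -p "$ROOT/u01/home/alice"  "$ROOT/u02/home/bob"

# All application-level homes, regardless of mount point:
ls -d "$ROOT"/*/app/*

# All normal login homes, regardless of mount point:
ls -d "$ROOT"/*/home/*
```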

So far we have complied with the following OFA Standards:
  1. Sufficient space
  2. Performance bottlenecks
  3. Minimizing hardware costs
  4. Minimizing the impact of disk failure
  5. Home directories can be spread across multiple mount points, and the contents of one home directory can be distributed across several mount points.

Franz Devantier,
Need a database health check?

Income stabilizer (win-win opportunity)
Join the carefully selected and tested cash-flow generating program below to potentially create a long-term residual or annuity type income enhancer for yourself.

Traffic Wave - Free Report:  The report will give you all the information you need to start making a nice long-term residual income stream for yourself.


Tuesday, August 28, 2012

Oracle: OFA Part 2

Dear Readers,

My name is Franz Devantier, creator of this blog.  I am an Oracle Certified Professional (OCP DBA 11g) Security DBA.  I will be sharing with you the basic duties of an Oracle DBA, and also some of the undocumented, and not so well known tasks. 

OFA – Optimal Flexible Architecture
(Part 2, Operating System Configuration)

OFA – Note 1
Name the mount points in such a way, that you get an idea of the contents by looking at the name.
For example:  /AB
A -: Site-Specific identifier
B -: Content-Specific identifier

On Solaris, /etc/vfstab shows the mapping of mount point names to device names.  Each row in the vfstab maps a mount point to a device name or disk slice.
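
For illustration, a vfstab row has seven fields; a typical entry for a /u01 mount point might look like this (the device names are invented for the example):

```
#device to mount    device to fsck       mount point  FS type  fsck pass  mount at boot  mount options
/dev/dsk/c0t0d0s5   /dev/rdsk/c0t0d0s5   /u01         ufs      2          yes            -
```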

On Windows it will be something like drives C:, D:, E:, F:, and G:.

If two or more applications live together on the same mount point or drive (not in the same directory, of course, but in sub-trees of the mount point or drive directory), then the base directory should not name one application and not the other.

For example, if the sub-tree directories contained datafiles on one branch and datapump exports on another branch, then the name of the root directory should describe, as far as possible, the content of both sub-trees, not just one or the other; failing that, use a neutral alternative.
“/oracle/datafiles” would give the impression that only Oracle files were involved, and “/backups/datapump” would create the impression that this structure held only backup files.

So you would need a neutral base directory name like /u01 or /u02:  “/u02/oracle/datafiles”  and “/u02/backups/datapump”.  Now there is no need to become confused and make an expensive mistake.

Storing files from two or more databases on the same drive compromises the best practice of avoiding performance bottlenecks.  The best practice of minimizing the impact of disk failure is also compromised, because you now stand to lose more than one database if you lose one disk.
To minimize performance bottlenecks with multiple databases on one computer, you will need to install additional hard drives; the same is true for minimizing the impact of disk failure.  Without additional resources, your database environment is compromised.  The best thing to do in such a case is to prepare a document describing the level of risk the business is taking with its database installation.  If the business decides not to provide the needed hardware resources, then keep your email correspondence for when a disk fails, so that you can prove you warned them about the possibility.

Remember that mount points abstract the underlying hardware, so don’t name them in a way that mirrors the underlying hardware.

Login Home Directories
On older UNIX computers, home directories were placed in “/usr”.  The problem with this arrangement is that the entire “/usr” sub-tree may be replaced during upgrades.

A better login home directory would be “/u” or “/home”.  Proper sizing of file systems or drives will make things run a lot smoother over the long-term operation of a database installation.

In Oracle Applications (E-Business Suite), the content of the database and application has grown bigger and bigger over time, so it is even more important to properly size the hardware before we start installing Oracle.  With VLDB installations, where databases run to a few terabytes and are moving up towards a few hundred terabytes, it becomes even more important to properly design the Oracle database installation.

Best practice requires that it be possible to exchange hardware components without having to rebuild the applications that refer to them.

OFA Standard 5 requires that it be possible to distribute the collection of home directories, as well as the contents of an individual home directory, across two or more disk drives.



Sunday, August 26, 2012

Oracle: OFA (Optimal Flexible Architecture) Part 1

Dear Readers,

OFA – Optimal Flexible Architecture
(Part 1, Operating System Configuration)

The OFA Standard is a set of configuration guidelines that will give you faster, more reliable Oracle databases that require less work to maintain.  The “Optimal Flexible Architecture” described in the OFA Standard is built into the Oracle configuration tools and documentation on all open systems ports.  If a system is set up using OFA, then you will understand the layout regardless of which configuration is used.  You will, for example, immediately know where the alert log and the network configuration files are, without having to search through the directories for them.  If you have ever worked on a site where the OFA standard was not implemented, then you will have found that you needed to do a lot of extra work just to find where the various components of the Oracle implementation were.

Without OFA, these are the issues that you can expect: 
  • Routine administrative tasks like software and data backup are performed incorrectly or inefficiently, which could result in severe database and business losses.
  • Having multiple databases on one physical server can be a nightmare, and even result in corruption of production data.
  • Difficulty in administering database segment growth can result in repeated application failure, until you get the configuration into a workable condition.
  • Sub-Optimal database performance.

Therefore an entire suite of problems can be avoided by applying tried and tested knowledge of how to optimize Oracle’s relationship with its host operating system.  From Oracle 7 onwards, the configuration and installation tools do most of this automatically for you.  However, you still have the opportunity to override this set of best practices if you are required to do so.

If you are installing an Oracle Database, then you are probably going to want to install the maximum sized database and application that your specific computer can handle. 

If you are only intending to use a fraction of your computer’s resources for the database, then you should consider Oracle VM.  With Oracle VM, instead of buying additional hardware, you can virtualize your existing architecture and make optimal use of your current hardware.

Before you can install the Oracle database, you must connect and configure the peripheral devices on the computer.  You must configure disk swap space, as well as the file systems or directories that you are going to use.  Plan your data storage space, and get your network up and running.  Check the installation manual for your specific operating system, and make sure you have implemented all the prerequisites before you start installing Oracle on the computer.  Windows typically has few prerequisites, whereas Unix/Linux may have substantial ones.  So I will probably focus more on Unix than Windows.

Mount Points
You need to create names and potential sizes for your file system mount points.  In Windows, you would look at your drives, and decide on which drive to place which component.  You need to decide where to put your applications, data, Oracle executables, archive log files, and control files, amongst other things.

  1. Standard 1 - Sufficient Space:  There must be sufficient space on the file systems or disk drives for projected database growth.  You must determine how the configuration will be affected if you add additional hardware, like disk drives.
  2. Standard 2 - Performance Bottlenecks:  The aim is to minimize performance bottlenecks.  To do this you need to be able to distribute the I/O load over as many physical disk drives as possible.  If you are using a SAN, ASM (Automatic Storage Management), RAID devices, or volume managers, then your level of control over this will be limited.
  3. Standard 3 - Minimize Hardware Costs:  You can minimize hardware costs by right-sizing your hardware devices; or, if you already have available hardware, you can consider virtualization.  Oracle VM, in my opinion, is the best way to go.  Oracle VM is basically an improved and re-packaged Xen hypervisor, with Oracle Linux or Oracle Solaris as the operating system.  Another popular choice for virtualization is the VMware ESX hypervisor.  Personally, I would not implement Windows Hyper-V as a virtual platform at this point in time; however, as new products come out, it is good to keep re-evaluating them.
  4. Standard 4 - Minimize the Impact of Disk Failure:  You should have redo log files and control files duplexed on separate file systems or physical disk drives.  Keeping your backups on different physical devices from your live data files is essential.
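
The layout implied by Standard 4 can be sketched like this (scratch directories stand in for the /u01 and /u02 mount points, and the SID “PROD” and file names are only examples):

```shell
MP1=$(mktemp -d)   # stands in for /u01
MP2=$(mktemp -d)   # stands in for /u02

mkdir -p "$MP1/oradata/PROD" "$MP2/oradata/PROD"

# Duplexed control files: one copy per mount point, so losing
# one disk still leaves a usable copy on the other.
touch "$MP1/oradata/PROD/control01.ctl"
touch "$MP2/oradata/PROD/control02.ctl"

# Multiplexed members of redo log group 1, again one per mount point.
touch "$MP1/oradata/PROD/redo01a.log"
touch "$MP2/oradata/PROD/redo01b.log"

ls "$MP1/oradata/PROD" "$MP2/oradata/PROD"
```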

Balance the cost of living with performance bottlenecks against the hardware costs of eliminating them.



Friday, August 24, 2012

Oracle: PL/SQL

Dear Readers,

I will make a deal with you:  If you refer me to a company that needs database support, from a few hours per week to full time, and I am able to sign a contract with them, then I will give you 10% of the monthly contract or deal price every month.  When the contract ends and we re-sign it, I will again give you 10% of the monthly contract price.  This will go on until the company no longer employs or contracts me or my agents to look after their databases.
I can do this because that 10% is my marketing budget.  When we re-sign the contract in the future, it may depend on you giving the thumbs-up again, and that is worth 10% of the monthly contract price, given to you as commission.
Contact: Franz

PL/SQL – Procedural Language/SQL
(Introduction)

PL/SQL is tightly integrated with SQL.  SQL is the most widely used database manipulation language.
  • PL/SQL allows you to use all of the SQL data manipulation capabilities.  It allows you to manipulate transactions and cursors.  You can access all the SQL functions and operators, pseudocolumns like ROWNUM, and functions like SYSDATE.
  • Most importantly, PL/SQL supports all of the SQL data types, like NUMBER and VARCHAR2, as well as its own types and attributes, like %TYPE and %ROWTYPE.
  • One outstanding feature of PL/SQL is that it lets you run a SQL query in a cursor, and then process the rows of that cursor one by one in a loop.

PL/SQL can potentially be written to perform well, in high performance environments.
  • You can send a block of statements to the database, which reduces traffic between the database and the application.
  • You can use bind variables, which allow Oracle to re-use the execution plans of SQL statements; this can substantially improve performance.  Without bind variables, Oracle would need to repeatedly re-parse the SQL statements.
  • Sub-programs are compiled once and can be executed repeatedly.  Sub-program compiled code can also be shared between multiple users at runtime, which reduces overhead and increases performance substantially.
  • The optimizer in the PL/SQL compiler can re-arrange the compiled code for higher performance.
  • For optimum performance, you need to design your code for performance, using the high-performance features of PL/SQL.

PL/SQL code in different constructs, such as procedures and triggers, is very much the same.  The code you will create in SQL Developer, in Oracle Forms, in a standalone procedure in SQL*Plus, or in a database trigger is very much the same, which helps to make you more productive.  The good news is that PL/SQL is exactly the same regardless of which version of Unix/Linux, Windows, or other operating system you are running your Oracle database on.

Because PL/SQL stored procedures are stored on the database server instead of in the client applications, they are easier to manage and deliver higher performance.

To me, one of the exciting things about PL/SQL is that you can use it to generate web pages directly from the database.  This means that you can implement an internet-based application using only PL/SQL; you will, however, need the PL/SQL Gateway and the PL/SQL Web Toolkit to produce a high-class application.

The Features of PL/SQL
  • You combine the power of SQL, with the power of a procedural language.
  • You can declare variables, cursors and constants.  You can control the flow of programs and sub-programs, and trap errors.
  • A PL/SQL program exists within a block.  The most important parts of the block are BEGIN and END; all the executable code is contained between them.  Optionally, you can start the block with a DECLARE section, and have an EXCEPTION section just before the END keyword.  You can even place a label just before the top of the block.


For example:

<<label>>      -- optional
DECLARE        -- optional: local types, variables, subprograms and cursors are defined here
BEGIN          -- required: this is where the executable part of the PL/SQL block starts
   statements
EXCEPTION      -- optional: this is where your error handling is done
   handlers
END;           -- required: this denotes the end of the block

  • A sub-program is a named PL/SQL block.  It can be either a FUNCTION, which must return a value, or a PROCEDURE, which does not return a value directly, although it can pass values back through OUT parameters.
  • A package is a collection of sub-programs that can be invoked from the package.  To invoke a specific sub-program within a package, you call it like this:  <package_name>.<sub-program_name>(parameters)
  • A trigger is a named PL/SQL block that is invoked in response to a specific event in the database.
  • To view direct output from a PL/SQL program, you can use, amongst other methods, the DBMS_OUTPUT package.
  • There are three categories of control statements in PL/SQL.  These three constructs, enable a fully functional programming language, when combined with SQL.
    • Conditional statements
    • Loop statements
    • Sequential control statements



Wednesday, August 22, 2012

Oracle: Buffer Cache

Dear Readers,


Buffer Cache

The buffer cache is one of the most important areas in the SGA (System Global Area).  The buffer cache, part of the SGA, holds copies of data blocks so that Oracle can access them more quickly than by reading them from disk.  The assumption here is that it is quicker to read a block that is already in memory than to read that same block from a storage device, such as a disk.

So essentially, the buffer cache is designed to minimize physical disk I/O.  The database buffer cache is logically segmented into a number of sets.  Having multiple sets reduces contention on multiprocessor systems, which makes more and more sense as chip technology continues to improve.

So the buffer cache consists of copies of database blocks that have been read into memory.  If these blocks have not been changed or updated, then they are eventually aged out of memory without being written back to the database blocks on disk.

There is a special mechanism that Oracle uses to age the oldest and least used database blocks out of memory, to make space for new database blocks to be read in.  You can think of this mechanism as a list of database blocks in memory.  One end of the list is known as the MRU (Most Recently Used) end, and the other is known as the LRU (Least Recently Used) end.

Now let’s imagine that a block is read from disk into the database buffer cache.  If the buffer cache is sparsely populated, as for example when the database has just started up, then the block will be registered at the MRU end of the list.  However, if the buffer cache is full, then a block at the LRU end of the list must first be moved out of the buffer cache.  If that buffer has not been changed, it can simply be discarded; if it has been changed, it must first be written back to disk.  This will not always happen one block at a time; it could potentially be many blocks at a time.

The parameter db_file_multiblock_read_count will typically default to the maximum allowed by the operating system.  A value of 8 is used by Oracle for costing or internal calculations.  It is calculated like this:  db_file_multiblock_read_count = (maximum I/O chunk size) / (db_block_size).  From 10g Release 2 onwards, Oracle recommends that you do not set this parameter, but allow Oracle to determine the optimal value for itself.  This takes away the headache of trying to calculate the maximum I/O chunk size.
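
The arithmetic is simple.  For example, assuming a 1 MB maximum I/O chunk size and an 8 KB block size (both values are illustrative, not platform facts):

```shell
MAX_IO_CHUNK=1048576   # assumed maximum I/O chunk size, in bytes (1 MB)
DB_BLOCK_SIZE=8192     # assumed database block size, in bytes (8 KB)

# db_file_multiblock_read_count = (maximum I/O chunk size) / (db_block_size)
MBRC=$((MAX_IO_CHUNK / DB_BLOCK_SIZE))
echo "db_file_multiblock_read_count = $MBRC"   # prints 128
```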

Now here’s where it gets interesting: blocks read from disk during a full table scan are placed at the LRU end of the list, instead of the MRU end.  This prevents frequently used data blocks from being aged out of the buffer cache during a full table scan.  From here on we will refer to the list as the LRU list.

The time that a block was last touched or accessed is also recorded.

Actually there are three buffer pools within the buffer cache, which are used for different purposes.  First there is the Keep Pool, which is designed for smaller objects, like small lookup tables.  Second there is the Recycle Pool, which is designed for larger objects.  Lastly, most of the buffers are stored in the Default Pool.

Each pool has its own LRU list, and each pool’s LRU list is divided into a hot area and a cold area.  Buffers in the hot part of the LRU list are called hot buffers, and buffers in the cold part are called cold buffers.  By default, 50% of the buffers are in the hot area and 50% in the cold area of the LRU list.

If you are using multiple block sizes in your database, then you will also have an additional pool for each non-default block size.  For example, your database may have an 8KB default block size, and you could also create tablespaces that store blocks of 2KB, 4KB, 16KB and 32KB.  There are overheads involved in adding additional block sizes to your database, so there must be a good reason for implementing them.

New single blocks are placed in the middle of the LRU list as hot buffers; this is also known as “midpoint insertion”.

Every buffer in the list also has a touch count.  Each time a buffer is accessed, its touch count is normally increased.  Buffers with a higher touch count are more important, and tend to be kept in the buffer cache in preference to buffers with a low touch count.  Most of the operations on the LRU list are protected by latches, except for the touch count.  This means that the touch count may not always be updated: the increment is skipped if another process is updating the buffer header at the moment the attempt is made.

Finding the optimal size for the buffer cache involves many considerations.  Obviously, measuring the buffer cache hit ratio is one of them.  Another consideration is that the bigger the buffer cache gets, the more overhead there is in managing the LRU list.  There is also another list to manage, called the dirty list, which consists of buffers that have been changed.  The larger the buffer cache becomes, the longer the search for a free buffer becomes; this shows up in metrics such as “free buffer waits”.  A bigger buffer cache also gives more work to the database writer (DBWn) background processes.  So finding the optimal buffer cache size can be a fine line between too small and too big.  If the buffer cache hit ratio is anywhere above 90%, then you have pretty much achieved the goal, provided there are no side effects as a result of the sizing.
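
As a sketch of the hit-ratio arithmetic, using the classic formula hit ratio = 1 - physical_reads / (db_block_gets + consistent_gets); the statistic values below are invented for the example, but on a real database they would come from V$SYSSTAT:

```shell
PHYSICAL_READS=5000    # invented example values; on a real system
DB_BLOCK_GETS=30000    # these three statistics come from V$SYSSTAT
CONSISTENT_GETS=70000

awk -v pr="$PHYSICAL_READS" -v bg="$DB_BLOCK_GETS" -v cg="$CONSISTENT_GETS" \
    'BEGIN { printf "buffer cache hit ratio: %.2f%%\n", (1 - pr / (bg + cg)) * 100 }'
```

Here the logical reads total 100,000, so the ratio works out to 95.00%, comfortably above the rough 90% goal.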

Look out for blog entry “Buffer Cache Part 2”, where we will look at other aspects of the Buffer Cache.
