Dear Readers,
My name is Franz Devantier, creator of this blog. I am an Oracle Certified
Professional (OCP DBA 11g) Security DBA.
I will be sharing with you the basic duties of an Oracle DBA, and also some of the undocumented and not-so-well-known tasks.
I will make a deal with you: if you refer me to a company that needs database support, anywhere from a few hours per week to full time, and I am able to sign a contract with them, then I will give you 10% of the monthly contract or deal price, every month. When the contract ends and we re-sign it, I will again give you 10% of the monthly contract price. This will go on for as long as the company employs or contracts me or my agents to look after their databases.
I can do this because that 10% is my marketing budget. When we re-sign the contract in the future, it may well depend on you giving the thumbs-up again, and that is worth 10% of the monthly contract price, paid to you as commission.
Contact:
Franz
Buffer Cache
The buffer cache is one of the most important areas in the SGA (System Global Area). The buffer cache, part of the SGA, holds copies of data blocks so that Oracle can access them more quickly than by reading them from disk. The assumption here is that it is quicker to read a block that is already in memory than to read that same block from a storage device, such as a disk.
So, essentially, the buffer cache is designed to minimize physical disk I/O. The database buffer cache is logically segmented into a number of sets. Having multiple sets reduces contention on multiprocessor systems, which makes more and more sense as chip technology continues to improve.
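If you want to see how big the buffer cache currently is on your own instance, you can query the SGA views. This is a minimal sketch; the exact component names vary a little by version, and if you use automatic memory management the size you see will be the auto-tuned value rather than something you set explicitly.

-- Current size of the buffer cache component(s) of the SGA
SELECT component, ROUND(current_size/1024/1024) AS size_mb
FROM   v$sga_dynamic_components
WHERE  component LIKE '%buffer cache%';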
The buffer cache therefore consists of database blocks that have been read into memory; strictly speaking, these are copies of the on-disk database blocks. If these blocks have not been changed or updated, they are eventually aged out of memory without being written back to the database blocks on disk.
There is a special mechanism that Oracle uses to age the oldest and least used database blocks out of memory, to make space for new database blocks to be read in. You can think of this mechanism as a list of database blocks in memory. One end of the list is known as the MRU (Most Recently Used) end, and the other end is known as the LRU (Least Recently Used) end.
Now let's imagine that a block is read from disk into the database buffer cache. If the buffer cache is sparsely populated, for example when the database has just started up, the block will be registered on the MRU end of the list. However, if the buffer cache is full, a block on the LRU side of the list must first be moved out of the buffer cache. If that buffer has not been changed, it can simply be discarded; however, if it has been changed, it must first be written back to disk. This will not always happen one block at a time; it could potentially be many blocks at a time.
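If you are curious how many of the cached buffers are clean and how many are dirty (changed in memory but not yet written back to disk), the V$BH view exposes one row per cached block. A quick sketch; note that on a very large cache this view can be expensive to query, so be careful on busy production systems.

-- Count cached buffers by status and by whether they are dirty
SELECT status, dirty, COUNT(*) AS buffers
FROM   v$bh
GROUP  BY status, dirty
ORDER  BY status, dirty;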
The parameter DB_FILE_MULTIBLOCK_READ_COUNT will typically default to the maximum allowed by the operating system. A value of 8 is used by Oracle for costing or internal calculations. It is calculated like this: db_file_multiblock_read_count = (maximum I/O chunk size) / (db_block_size). From 10gR2 onwards, Oracle recommends that you do not set this parameter, but allow Oracle to determine the optimal value for itself. This takes away the headache of trying to work out the "maximum I/O chunk size".
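As a quick worked example of that formula: if the operating system allows a maximum I/O chunk of 1MB and the database uses 8KB blocks, the derived value would be 1048576 / 8192 = 128. The 1MB figure is only an illustrative assumption; your platform may allow more or less. You can check what your instance has actually settled on like this:

-- Value currently in effect, whether set explicitly or derived by Oracle
SELECT name, value, isdefault
FROM   v$parameter
WHERE  name = 'db_file_multiblock_read_count';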
Now here's where it gets interesting: blocks read from disk during a full table scan are placed on the LRU side of the list, instead of the MRU side. This prevents frequently used data blocks from being aged out of the buffer cache by a large full table scan. We will refer to the list as the LRU list.
The time that a block was last touched or accessed is also recorded.
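If you have a small table that is read mostly by full table scans, but that you would still like to stay resident in the cache, Oracle lets you override the LRU-end placement with the CACHE table attribute. A minimal sketch, using a hypothetical table name:

-- Ask Oracle to place blocks read by full scans of this table
-- at the MRU end of the list instead of the LRU end
ALTER TABLE country_lookup CACHE;
-- The attribute can also be specified at creation time:
-- CREATE TABLE country_lookup (...) CACHE;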
Actually there are three buffer pools within the buffer cache, which are used for different purposes. Firstly, there is the "Keep Pool", which is designed for smaller objects, like small lookup tables, that you want to keep cached. Secondly, there is the "Recycle Pool", which is designed for larger objects whose blocks are unlikely to be reused soon. Lastly, most of the buffers are stored in the "Default Pool". Each pool has its own LRU list, and each pool's LRU list is divided into a hot area and a cold area. Buffers in the hot part of the LRU list are called hot buffers, and buffers in the cold part are called cold buffers; by default, 50% of the buffers are in the hot area and 50% in the cold area. If you are using multiple block sizes in your database, then you also have an additional pool for each non-default block size. For example, your database may use 8KB blocks by default, and you could also create tablespaces that store blocks of 2KB, 4KB, 16KB and 32KB. There are overheads involved in adding additional block sizes to your database, so there must be a good reason for implementing them.
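To actually make use of the keep and recycle pools, you size them with their own parameters and then assign segments to them; non-default block sizes likewise need their own cache before you can create a tablespace with that block size. The sizes, the table name and the datafile path below are made-up examples, not recommendations.

-- Size the extra pools (these are dynamic parameters)
ALTER SYSTEM SET db_keep_cache_size    = 64M  SCOPE=BOTH;
ALTER SYSTEM SET db_recycle_cache_size = 128M SCOPE=BOTH;

-- Assign a small lookup table to the keep pool
ALTER TABLE country_lookup STORAGE (BUFFER_POOL KEEP);

-- A cache for 16KB blocks, then a tablespace that uses it
ALTER SYSTEM SET db_16k_cache_size = 256M SCOPE=BOTH;
CREATE TABLESPACE ts_16k
  DATAFILE '/u01/oradata/ORCL/ts_16k01.dbf' SIZE 1G
  BLOCKSIZE 16K;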
New single blocks are placed in the middle of the LRU list (as hot buffers); this is also known as "midpoint insertion".
Every buffer in the list also has a touch count. Generally, when a buffer is accessed, its touch count is increased, so buffers with a higher touch count are considered more important and tend to be kept in the buffer cache in preference to buffers with a low touch count. Most of the operations on the LRU list are protected by latches, except for the touch count. This means the touch count may not always be updated if another process is updating the buffer header while an attempt is made to update the touch count.
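The touch count itself can be seen in the undocumented X$BH fixed table (column TCH), which is only accessible when connected as SYS. Because it is undocumented it may change between versions, so treat the following as an exploratory sketch rather than a supported technique.

-- Ten hottest buffers by touch count, with the objects they belong to (run as SYS)
SELECT * FROM (
    SELECT o.object_name, b.tch AS touch_count
    FROM   x$bh b, dba_objects o
    WHERE  b.obj = o.data_object_id
    ORDER  BY b.tch DESC
) WHERE ROWNUM <= 10;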
Finding the optimal size for the buffer cache involves many considerations. Obviously, measuring the buffer cache hit ratio is one of them. Another consideration is that the bigger the buffer cache gets, the more overhead there is in managing the LRU list. There is also another list to manage, called the "dirty list", which consists of buffers that have been changed. The larger the buffer cache becomes, the longer the search for a free buffer becomes, which shows up in statistics and wait events such as "free buffer inspected" and "free buffer waits". A bigger buffer cache also gives more work to the database writer (DBWn) background processes. So finding the optimal buffer cache size can be a fine line between too small and too big. If the buffer cache hit ratio is anywhere above 90%, then you have pretty much achieved the goal, provided there are no side effects as a result of the sizing.
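A classic way to calculate the buffer cache hit ratio since instance startup is from V$SYSSTAT, and V$DB_CACHE_ADVICE gives a useful "what-if" view of how the physical read count would change at other cache sizes. A sketch; the formula below uses the traditional statistic names, so it also counts direct path reads and is only an approximation, and the 8192 filter assumes an 8KB default block size.

-- Approximate buffer cache hit ratio since instance startup
SELECT ROUND(100 * (1 - phy.value / (db.value + con.value)), 2) AS hit_ratio_pct
FROM   v$sysstat phy, v$sysstat db, v$sysstat con
WHERE  phy.name = 'physical reads'
AND    db.name  = 'db block gets'
AND    con.name = 'consistent gets';

-- Predicted physical reads at other cache sizes (what-if analysis)
SELECT size_for_estimate AS cache_mb, estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
AND    block_size = 8192
ORDER  BY size_for_estimate;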
Look out for blog entry “Buffer Cache Part 2”, where we will look at other aspects of the Buffer Cache.
Franz Devantier,
Need a database health check?
Income stabilizer (win-win opportunity)
Join the carefully selected and tested cash-flow generating program below to potentially create a long-term residual or annuity-type income enhancer for yourself.
Traffic Wave - Free Report: The report will give you all the information you need to start making a nice long-term residual income stream for yourself.