
Demystifying Buffer Management Strategy of Oracle March 27, 2012

Posted by Mich Talebzadeh in Oracle.

Until release 9.2, Oracle used an MRU/LRU algorithm as shown in Figure 1 below. Blocks within the buffer cache were ordered from MRU (most recently used) blocks to LRU (least recently used) blocks. Whenever a block was accessed, it moved to the MRU end of the list, shifting the other blocks down towards the LRU end. When a block was read from disk and no free buffer was available in the db buffer cache, one block in the buffer cache had to “leave”: the block at the LRU end of the list.

Blocks read during a full table scan were placed at the LRU end of the list instead of the MRU end. As an example, if the buffer cache consisted of 100,000 blocks and a table of 200,000 blocks was full table scanned, accessing every block, the blocks would be read into memory in batches whose size is defined by the parameter DB_FILE_MULTIBLOCK_READ_COUNT. These blocks would go immediately to the least recently used end of the MRU/LRU chain, preventing a large table scan from overwriting the entire buffer cache. Instead, only a small number of blocks (i.e. 2 to 32) would be overwritten, leaving the blocks cached from other segments still in the cache.

Each pool’s LRU list is divided into a hot area and a cold area. Accordingly, buffers within the hot area are hot buffers and the ones in the cold area are called cold buffers. By default, 50% of the buffers belong to the cold area and the other 50% belong to the hot area. A newly read single data block is inserted between the cold and the hot area such that it belongs to the hot area. This is called midpoint insertion. However, this is only true for single block reads. Multi block reads, such as those in a table scan, were placed at the LRU end.

Figure 1: MRU/LRU replacement strategies of Oracle prior to release 9.2
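For reference, the multiblock read batch size mentioned above and the current size of the default buffer cache can be checked with queries along the following lines (a minimal sketch; the exact views available and the values reported depend on your release and privileges):

  -- Multiblock read batch size used by conventional path full table scans (SQL*Plus)
  SHOW PARAMETER db_file_multiblock_read_count

  -- Current size of the default buffer cache in bytes
  SELECT component, current_size
  FROM   v$sga_dynamic_components
  WHERE  component = 'DEFAULT buffer cache';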

Starting with release 9.2, specifically for table scans, the small window at the end of the LRU ceased to exist. For full table scans (FTS), blocks now go to the midpoint insertion point, as shown in Figure 2. Oracle deploys a parameter called the small table threshold that allows for special treatment of tables that are smaller than two percent of the size of the buffer cache. (My own testing showed that this small table threshold may not hold true, but that is another matter!)
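If you want to see the value of this threshold on your own system, it is exposed as the hidden parameter _small_table_threshold and can be read from the underlying x$ tables when connected as SYS. A hedged example; hidden parameters are undocumented and their meaning can change between releases:

  -- Run as SYS; the value is expressed in blocks
  SELECT p.ksppinm  AS parameter,
         v.ksppstvl AS value,
         p.ksppdesc AS description
  FROM   x$ksppi p, x$ksppcv v
  WHERE  p.indx = v.indx
  AND    p.ksppinm = '_small_table_threshold';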

I believe the way FTS is handled depends on two factors:

  1. The default behaviour of the Oracle 11g optimizer, which favours direct path reads for FTS. Direct path read scans are full scans that bypass the buffer cache: data is read directly from disk into the PGA.
  2. Turning off direct path reads. This forces Oracle back to conventional path reads, i.e. db file scattered reads. In that case the way Oracle decides to promote table blocks to the hot area differs (a sketch of how to switch this off in a test session follows this list).
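For completeness, the two commonly quoted ways of disabling serial direct path reads for a session are shown below. Both rely on undocumented settings, so treat this as a sketch for a test system rather than something to run in production:

  -- Option 1: hidden parameter controlling serial direct reads (values include AUTO, NEVER, ALWAYS)
  ALTER SESSION SET "_serial_direct_read" = NEVER;

  -- Option 2: event 10949 disables the 11g serial direct read behaviour for the session
  ALTER SESSION SET EVENTS '10949 trace name context forever, level 1';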

Here is a summary of my findings:

  • With direct path reads, full table scans of tables below roughly 10% of the buffer cache size will have their blocks promoted to the hot area and the tables will be fully cached. This is around five times the small table threshold limit.
  • With conventional path reads, and assuming the availability of free buffers, a table of around 95% of the buffer cache size can be cached (a query to check how much of a table is cached follows this list).
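One way to verify how much of a given table actually ends up in the buffer cache after such a scan is to count its buffers in V$BH. A minimal sketch, assuming DBA privileges and a table called MYTABLE owned by the current user (both names are placeholders):

  -- Count cached buffers for one table, excluding free buffers
  SELECT o.object_name, COUNT(*) AS cached_blocks
  FROM   v$bh b, dba_objects o
  WHERE  b.objd = o.data_object_id
  AND    o.owner = USER
  AND    o.object_name = 'MYTABLE'
  AND    b.status <> 'free'
  GROUP  BY o.object_name;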

These findings are summarised in Figure 2 below.

Figure 2: The current MRU/LRU replacement strategies of Oracle
