What is Disk Cache in Operating Systems

Cache memory is an intermediate memory between the processor and the main memory, and it is much faster than main memory. When the CPU needs a word, it first searches the cache for that word. A disk cache, by analogy, is a buffer in main memory that holds copies of some recently used disk sectors. When an I/O request is made for a particular sector, the CPU/DMA/I/O processor first checks the disk cache. If the sector is present (a cache hit), the data is read directly from the cache. If it is not (a cache miss), the sector is first copied from the disk into the disk cache and then read from there.
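The hit/miss flow described above can be sketched as follows. This is a minimal illustration, not a real driver: the dictionary, the `read_sector` function, and the stand-in `read_sector_from_disk` are all hypothetical names for this example.

```python
def read_sector_from_disk(sector_no):
    """Stand-in for a real (slow) disk read; returns dummy sector data."""
    return f"data-for-sector-{sector_no}"

disk_cache = {}  # sector number -> sector data held in main memory

def read_sector(sector_no):
    if sector_no in disk_cache:                  # cache hit: serve from memory
        return disk_cache[sector_no]
    data = read_sector_from_disk(sector_no)      # cache miss: go to the disk
    disk_cache[sector_no] = data                 # copy the sector into the cache
    return data                                  # then read it from the cache
```

The first call for a given sector pays the disk-read cost; subsequent calls for the same sector are served from the in-memory copy.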

Let’s look at the main design considerations for a disk cache.

1. Data Delivery

A process in main memory issues an I/O request for data that may reside in the disk cache. Data found in the disk cache must be delivered to the requesting process. This can be done in two ways:

i. Copying the block of data from the disk cache into the user process's address space.


ii. Passing the user process a pointer to the block's address within the disk cache, so that the process shares the data in place instead of receiving a copy.
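The difference between the two delivery options can be shown with a small sketch. Here a cached block is modeled as a `bytearray`; the function names are illustrative, not part of any real kernel API.

```python
cache_block = bytearray(b"sector contents")   # block held in the disk cache

def deliver_by_copy(block):
    # Option i: transfer the data into the process's own buffer.
    return bytearray(block)        # independent copy

def deliver_by_pointer(block):
    # Option ii: hand back a reference; the process shares the cached block.
    return block                   # same underlying object

copy = deliver_by_copy(cache_block)
shared = deliver_by_pointer(cache_block)
copy[0] = ord("X")                 # modifying the copy leaves the cache intact
shared[1] = ord("Y")               # modifying the shared view changes the cache
```

Copying is safer but costs an extra memory-to-memory transfer; sharing avoids the transfer but means changes through the pointer are visible in the cache itself.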


2. Block Replacement

This is the second design issue of the disk cache. If the requested sector is not available in the disk cache, it must be transferred from the disk into the disk cache. If the disk cache has empty space, the block is simply loaded. If no empty space is available, then an existing block in the disk cache must be replaced, much like page replacement in virtual memory. Several algorithms exist for choosing which block to replace. The most popular are:

i. LRU: In the LRU (Least Recently Used) algorithm, the block that has not been used for the longest period of time is replaced by the new block. The disk cache maintains a stack of blocks:

  • The top of the stack holds the most recently used block.
  • The bottom of the stack holds the least recently used block.

In this method, each referenced block is moved to the top of the stack, and for replacement a block is deleted from the bottom. The main drawback of this method is the cost of locating and removing a block from the bottom of the stack.
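The LRU stack described above can be sketched with an `OrderedDict`, where the last entry plays the role of the top of the stack and the first entry the bottom. The capacity and function name are chosen for this example only.

```python
from collections import OrderedDict

CAPACITY = 3
lru_cache = OrderedDict()   # first key = bottom (LRU), last key = top (MRU)

def lru_access(sector_no, data):
    if sector_no in lru_cache:
        lru_cache.move_to_end(sector_no)   # hit: move block to the "top"
    else:
        if len(lru_cache) >= CAPACITY:
            lru_cache.popitem(last=False)  # evict from the "bottom" (LRU block)
        lru_cache[sector_no] = data
    return lru_cache[sector_no]

for s in (1, 2, 3):
    lru_access(s, f"block-{s}")
lru_access(1, "block-1")   # touch block 1: it becomes most recently used
lru_access(4, "block-4")   # cache full: evicts block 2, the least recently used
```

After this sequence the cache holds blocks 3, 1, and 4; block 2, untouched the longest, was evicted.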

ii. LFU: In the LFU (Least Frequently Used) algorithm, each block maintains a reference counter, and the counter is incremented on each reference to that block. For example, if a block is used 10 times, its counter will be 10. When replacement is required, the block with the smallest count is selected and replaced with the new block. This algorithm can outperform LRU, but it has a weakness: a block may be referenced many times within a short interval and rarely afterwards, yet its counter stays high. For example, suppose a for loop runs 100 times and each iteration reads the same block from the disk cache. The loop references the block 100 times, so its counter reaches 100, and the block will reside in the disk cache far longer than its later usage justifies.
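The counter-based behavior, including the loop problem just described, can be sketched as follows. The capacity and names are again illustrative.

```python
CAPACITY = 3
lfu_cache = {}   # sector number -> [data, reference count]

def lfu_access(sector_no, data):
    if sector_no in lfu_cache:
        lfu_cache[sector_no][1] += 1        # hit: increment the counter
    else:
        if len(lfu_cache) >= CAPACITY:
            victim = min(lfu_cache, key=lambda s: lfu_cache[s][1])
            del lfu_cache[victim]           # evict the least frequently used
        lfu_cache[sector_no] = [data, 1]
    return lfu_cache[sector_no][0]

for _ in range(100):        # a loop that hammers sector 1
    lfu_access(1, "block-1")
lfu_access(2, "block-2")
lfu_access(3, "block-3")
lfu_access(4, "block-4")    # evicts a count-1 block, never "popular" block 1
```

Block 1's counter of 100 keeps it in the cache indefinitely, even if the loop never runs again; this is exactly the pathology that frequency-based replacement addresses.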

To overcome this problem, a new version of the algorithm was introduced: the frequency-based replacement algorithm. Consider the figure below.


The stack is divided into two sections: the new section and the old section. The collection of blocks at the top of the stack forms the new section, and the blocks at the bottom form the old section. On a cache hit, the referenced block is moved to the top of the stack. If the block was already in the new section, its count is not incremented; otherwise, it is incremented by 1. As a result, a block referenced repeatedly inside a loop stays in the new section and its count remains unchanged.

For replacement, the block with the smallest reference count that is not in the new section is selected. An enhanced version of frequency-based replacement divides the stack into three sections (new, middle, and old) and makes only blocks in the old section eligible for replacement. The middle section allows frequently referenced blocks to build up their reference counts before they become eligible for replacement.
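The two-section scheme described above can be sketched as follows, with a plain list standing in for the stack (index 0 is the top). The section size, capacity, and function name are assumptions made for this illustration.

```python
NEW_SIZE = 2    # the top NEW_SIZE slots form the new section
CAPACITY = 5
stack = []      # index 0 = top (most recent); holds sector numbers
counts = {}     # sector number -> reference count

def fbr_access(sector_no):
    if sector_no in stack:
        in_new = stack.index(sector_no) < NEW_SIZE
        stack.remove(sector_no)
        if not in_new:
            counts[sector_no] += 1   # count is frozen while in the new section
    else:
        if len(stack) >= CAPACITY:
            # evict the smallest-count block outside the new section
            old = stack[NEW_SIZE:]
            victim = min(old, key=lambda s: counts[s])
            stack.remove(victim)
            del counts[victim]
        counts[sector_no] = 1
    stack.insert(0, sector_no)       # referenced block moves to the top

for s in (1, 2, 3, 4, 5):
    fbr_access(s)
fbr_access(1)   # hit in the old section: count of block 1 becomes 2
fbr_access(6)   # cache full: evicts block 4 (smallest count outside new section)
fbr_access(1)   # hit in the new section: block 1's count stays at 2
```

Note how the last access leaves block 1's count untouched: repeated references within the new section no longer inflate the counter, which is the fix for the LFU loop problem.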
