Linux Kernel Architecture


Chapter 18: Page Reclaim and Swapping


18.1.3 Page-Swapping Algorithms


Over the last few decades, a whole host of algorithms has been developed for purposes of page swapping,
each of which has its own specific advantages and disadvantages. The general literature on operating sys-
tems includes detailed descriptions and analyses. Below, two techniques on which the Linux swapping
implementation is based are described.

Second Chance


Second chance is an algorithm that is extremely simple to implement and that features a minor modification to a classical FIFO algorithm. A system’s pages are managed in a linked list. When a page fault
occurs, the newly referenced page is placed at the beginning of the list; this automatically moves the
existing pages back by one position. Since only a finite number of positions are available in the FIFO
queue, the system must reach its capacity limit at some point or other. When it does, the pages at the end
of the queue ‘‘drop off’’ the list and are swapped out. When they are needed again, the processor triggers
a page fault that causes the kernel to read the page data in again and to place the page at the start of
the list.

For obvious reasons, this procedure is not particularly smart. When pages are swapped out, no account is taken of whether they are used frequently or rarely. After a given number of page faults (determined by the number of places in the queue), a page is written into the swap area. If it is required frequently, it is immediately read in again, which does not benefit system performance.

This situation can be improved by offering a page a second chance before it is swapped out. Each page is assigned a special field containing a bit that is controlled by the hardware. When the page is accessed, the bit is automatically set to 1. The software (kernel) is responsible for clearing the bit again.

When a page reaches the end of the list, the kernel does not immediately swap it out but first checks
whether the aforementioned bit is set. If it is, it is unset and the page is moved to the start of the FIFO
queue; in other words, it is treated like a new page that has been added to the system. If the bit is not set,
the page is swapped out.

Thanks to this extension, the algorithm takes at least minimal account of whether pages are used frequently or not, but it still does not deliver the performance expected of state-of-the-art memory management. Nevertheless, the second chance algorithm is a good starting point when combined with other techniques.

LRU Algorithm


LRU is short for least recently used and refers to various algorithms that attempt to find the least used pages according to a similar scheme. This reverse approach avoids the more complex search for the most used pages.

Clearly, pages frequently used over a short period in the recent past are likely to be used in the (near)
future. The LRU algorithm is based on the converse assumption that pages not used recently will not be
needed frequently in the immediate future. Such pages are therefore likely candidates for swap-out when
memory is scarce.

The fundamental LRU principle may be simple, but it is difficult to implement appropriately. How
can the kernel mark or sort pages as simply as possible in order to estimate access frequency without