Figure 3-24: Fragmentation of physical memory.
Assume that the memory consists of 60 pages — clearly, this is not going to be the key component
of the next supercomputer, but it suffices for the sake of example. The free pages are scattered
across the address space on the left-hand side. Although roughly 25 percent of the physical mem-
ory is still unallocated, the largest contiguous free area is only a single page. This is no problem for
userspace applications: Since their memory is mapped via page tables, it will always appear contigu-
ous to them irrespective of how the free pages are distributed in physical memory. The right-hand side
shows the situation with the same number of used and free pages, but with all free pages located in one
contiguous area.
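The page-table indirection can be made tangible from userspace: Linux exposes the physical frame backing each virtual page through /proc/self/pagemap. The following sketch (the buffer size and output format are my own choices for illustration) prints the page frame numbers behind a virtually contiguous buffer; on a fragmented system, the scattered PFNs it reveals correspond exactly to the left-hand situation of Figure 3-24. Note that since kernel 4.0, the PFN field is visible only to privileged processes — unprivileged reads return 0.

```c
/* Sketch: inspect the physical frames backing a virtually contiguous
 * buffer via Linux's /proc/self/pagemap (PFNs require root on
 * kernels >= 4.0; unprivileged reads show a PFN of 0). */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    long page_size = sysconf(_SC_PAGESIZE);
    size_t npages = 8;
    /* A virtually contiguous allocation, touched so pages are present. */
    unsigned char *buf = malloc(npages * page_size);
    if (!buf)
        return 1;
    memset(buf, 0xaa, npages * page_size);

    int fd = open("/proc/self/pagemap", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    for (size_t i = 0; i < npages; i++) {
        uint64_t entry;
        /* One 64-bit pagemap entry per virtual page. */
        off_t off = ((uintptr_t)(buf + i * page_size) / page_size) * 8;
        if (pread(fd, &entry, sizeof(entry), off) != sizeof(entry))
            break;
        /* Bits 0-54 hold the page frame number if the page is present. */
        printf("virtual page %zu -> PFN 0x%llx\n",
               i, (unsigned long long)(entry & ((1ULL << 55) - 1)));
    }
    close(fd);
    free(buf);
    return 0;
}
```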
Fragmentation is, however, a problem for the kernel: Since (most) RAM is identity-mapped into the
kernel’s portion of the address space, the kernel cannot allocate a contiguous area larger than a single
page in this scenario. While many kernel allocations are small, there is sometimes the need to allocate
more than a single page. Clearly, the situation on the right-hand side, where all reserved and free pages
lie in contiguous regions, would be preferable.
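To illustrate the distinction, consider the following minimal kernel-side sketch (not taken from the kernel sources) that contrasts the two ways a 4-page area can be obtained: alloc_pages must find a physically contiguous power-of-2 block and therefore fails under the fragmentation shown above, whereas vmalloc stitches scattered physical pages into a contiguous virtual range.

```c
/* Sketch (kernel context): two ways to obtain a 4-page area. */
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>

static void contig_demo(void)
{
	/* Order-2 request: 2^2 = 4 physically contiguous pages.
	 * Fails if no contiguous 4-page block exists. */
	struct page *pages = alloc_pages(GFP_KERNEL, 2);
	if (pages)
		__free_pages(pages, 2);

	/* Same size, but only virtually contiguous: succeeds even when
	 * physical memory is fragmented down to single pages. */
	void *v = vmalloc(4 * PAGE_SIZE);
	if (v)
		vfree(v);
}
```

The price of vmalloc is the extra page-table manipulation required to build the mapping, which is one reason the kernel prefers physically contiguous allocations whenever it can get them.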
Interestingly, problems with fragmentation can already occur when most of the memory is still unallo-
cated. Consider the situation in Figure 3-25. Only 4 pages are reserved, but the largest contiguous area
that can be allocated is 8 pages because the buddy system can only work with allocation ranges whose
sizes are powers of 2.
Figure 3-25: Memory fragmentation where few reserved pages
prevent the allocation of larger contiguous blocks.
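The effect is easy to reproduce in a few lines of userspace C. The following simulation (the reserved page positions are made up for illustration and need not match the figure) scans all aligned power-of-2 blocks in a 32-page area, just as the buddy system would, and reports the largest block it could hand out:

```c
/* Sketch: how a buddy allocator views a 32-page area with a few
 * reserved pages. Only aligned power-of-2 blocks can be handed out,
 * so a handful of reserved pages caps the largest allocation well
 * below the total amount of free memory. */
#include <stdbool.h>
#include <stdio.h>

#define NPAGES 32

static bool block_free(const bool *reserved, int start, int len)
{
	for (int i = start; i < start + len; i++)
		if (reserved[i])
			return false;
	return true;
}

int main(void)
{
	bool reserved[NPAGES] = { 0 };
	/* Illustrative positions: one reserved page in each 16-page half. */
	reserved[3] = reserved[5] = reserved[20] = reserved[21] = true;

	/* Scan orders from largest to smallest; blocks of order n are
	 * 2^n pages long and aligned on a 2^n page boundary. */
	for (int order = 5; order >= 0; order--) {
		int len = 1 << order;
		for (int start = 0; start < NPAGES; start += len) {
			if (block_free(reserved, start, len)) {
				printf("largest allocatable block: %d pages "
				       "(order %d, pages %d-%d)\n",
				       len, order, start, start + len - 1);
				return 0;
			}
		}
	}
	return 0;
}
```

With four reserved pages placed so that each 16-page half contains at least one of them, the program reports an 8-page block as the maximum, although 28 of the 32 pages are free.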
I have mentioned that memory fragmentation only concerns the kernel, but this is only partially true:
Most modern CPUs provide the possibility to work with huge pages whose size is much larger than
that of regular pages. This has benefits for memory-intensive applications. When bigger pages are
used, the translation lookaside buffer has to handle fewer entries, and the chance of a TLB miss is
reduced. Allocating huge pages, however, requires free contiguous areas of physical RAM!
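As a concrete userspace illustration (though anachronistic for kernel 2.6.24), more recent kernels allow a program to request a huge page directly via mmap’s MAP_HUGETLB flag. The call can only succeed if a physically contiguous huge page is available, typically from a pool reserved via /proc/sys/vm/nr_hugepages:

```c
/* Sketch: requesting a huge page on a recent Linux kernel via
 * MAP_HUGETLB. The mapping succeeds only if the kernel can supply a
 * physically contiguous huge page (pool configured through
 * /proc/sys/vm/nr_hugepages); otherwise mmap fails with ENOMEM. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#define HUGE_LEN (2UL * 1024 * 1024) /* common huge page size on x86-64 */

int main(void)
{
	void *p = mmap(NULL, HUGE_LEN, PROT_READ | PROT_WRITE,
	               MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap(MAP_HUGETLB)");
		return 1;
	}
	printf("huge page mapped at %p\n", p);
	munmap(p, HUGE_LEN);
	return 0;
}
```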
Fragmentation of physical memory has, indeed, been one of the weaker points of Linux for an unusually
long time span. Although many approaches have been suggested, none could satisfy the demanding
needs of the numerous workloads that Linux has to face without having too great an impact on others.
During the development of kernel 2.6.24, means to prevent fragmentation finally found their way into
the kernel. Before I discuss their strategy, one point calls for clarification: Fragmentation is also known
from filesystems, and in this area the problem is typically solved by defragmentation tools: They analyze
the filesystem and rearrange the allocated blocks such that larger contiguous areas arise. This approach
would also be possible for RAM, in principle, but is complicated by the fact that many physical pages
cannot be moved to an arbitrary location because the kernel accesses them directly through the identity
mapping.