Chapter 3: Memory Management
Free pages count per migrate type at order      0     1     2     3     4     5     6     7     8     9    10
Node 0, zone      DMA, type    Unmovable       0     0     1     1     1     1     1     1     1     1     0
Node 0, zone      DMA, type  Reclaimable       0     0     0     0     0     0     0     0     0     0     0
Node 0, zone      DMA, type      Movable       3     5     6     3     5     2     2     2     0     0     0
Node 0, zone      DMA, type      Reserve       0     0     0     0     0     0     0     0     0     0     1
Node 0, zone      DMA, type       <NULL>       0     0     0     0     0     0     0     0     0     0     0
Node 0, zone    DMA32, type    Unmovable      44    37    29     1     2     0     1     1     0     1     0
Node 0, zone    DMA32, type  Reclaimable      18    29     3     4     1     0     0     0     1     1     0
Node 0, zone    DMA32, type      Movable       0     0   191   111    68    26    21    13     7     1   500
Node 0, zone    DMA32, type      Reserve       0     0     0     0     0     0     0     0     0     1     2
Node 0, zone    DMA32, type       <NULL>       0     0     0     0     0     0     0     0     0     0     0
Node 0, zone   Normal, type    Unmovable       1     5     1     0     0     0     0     0     0     0     0
Node 0, zone   Normal, type  Reclaimable       0     0     0     0     0     0     0     0     0     0     0
Node 0, zone   Normal, type      Movable       1     4     0     0     0     0     0     0     0     0     0
Node 0, zone   Normal, type      Reserve      11    13     7     8     3     4     2     0     0     0     0

Number of blocks type     Unmovable  Reclaimable      Movable      Reserve       <NULL>
Node 0, zone      DMA             1            0            6            1            0
Node 0, zone    DMA32            13           18         2005            4            0
Node 0, zone   Normal            22           10          351            1            0
Initializing Mobility-Based Grouping
During the initialization of the memory subsystem, memmap_init_zone is responsible for handling the page
instances of a memory zone. The function performs some standard initializations that are not too interesting,
but one thing is essential: All pages are initially marked to be movable!
mm/page_alloc.c
void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
		unsigned long start_pfn, enum memmap_context context)
{
	struct page *page;
	unsigned long end_pfn = start_pfn + size;
	unsigned long pfn;

	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
		...
		if (!(pfn & (pageblock_nr_pages-1)))
			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
		...
	}
}
As discussed in Section 3.5.4, the kernel favors large page groups when pages must be ‘‘stolen’’ from
migrate lists of a type different from the one the allocation was intended for. Because all pages initially
belong to the movable type, stealing is required whenever regular, unmovable kernel allocations are performed.