Chapter 3: Memory Management
The kernel restarts counting at 0 when the maximum number of colors is reached; this automatically
results in a zero offset.
The required memory space is allocated page-by-page by the buddy system using the kmem_getpages
helper function. The sole purpose of this function is to invoke the alloc_pages_node function discussed
in Section 3.5.4 with the appropriate parameters. The PG_slab bit is also set on each page to indicate
that the page belongs to the slab allocator. When a slab is used to satisfy short-lived or reclaimable
allocations, the flag __GFP_RECLAIMABLE is passed down to the buddy system. Recall from Section 3.5.2
that this is important to allocate the pages from the appropriate migrate list.
The allocation of the management head for the slab is not very exciting. The relevant alloc_slabmgmt
function reserves the required space if the head is stored off-slab; if not, the space is already reserved on
the slab. In both situations, the colouroff, s_mem, and inuse elements of the slab data structure must be
initialized with the appropriate values.
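The initialization just described can be modeled in a few lines. The following user-space sketch uses a stripped-down version of struct slab (the real structure in mm/slab.c also carries list linkage and a free index); the helper name init_slab_head is illustrative, not a kernel function:

```c
#include <assert.h>
#include <stddef.h>

/* Stripped-down user-space model of the kernel's struct slab. */
struct slab {
	unsigned long colouroff;  /* colour offset of the first object */
	void *s_mem;              /* address of the first object */
	int inuse;                /* number of objects currently allocated */
};

/* Illustrative helper mirroring what alloc_slabmgmt must establish:
 * the first object starts colouroff bytes into the slab's memory,
 * and a fresh slab holds no allocated objects. */
static void init_slab_head(struct slab *slabp, void *base,
			   unsigned long colouroff)
{
	slabp->colouroff = colouroff;
	slabp->s_mem = (char *)base + colouroff;
	slabp->inuse = 0;
}
```

Note how the colour offset shifts s_mem away from the start of the underlying pages; this is the cache-coloring mechanism discussed earlier in the chapter.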
The kernel then establishes the associations between the pages of the slab and the slab or cache structure
by invoking slab_map_pages. This function iterates over all page instances of the pages newly allocated
for the slab and invokes page_set_cache and page_set_slab for each page. These two functions
manipulate (or misuse) the lru element of a page instance as follows:
mm/slab.c
static inline void page_set_cache(struct page *page, struct kmem_cache *cache)
{
	page->lru.next = (struct list_head *)cache;
}

static inline void page_set_slab(struct page *page, struct slab *slab)
{
	page->lru.prev = (struct list_head *)slab;
}
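To see why this pointer trick works, it helps to look at the matching read accessors, which simply cast the stashed pointers back. The sketch below reproduces the round trip in user space; the struct definitions are minimal stand-ins for the kernel types, kept only as far as this example needs them:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal user-space stand-ins for the kernel types involved. */
struct list_head { struct list_head *next, *prev; };
struct page { struct list_head lru; };
struct kmem_cache { int dummy; };
struct slab { int dummy; };

static inline void page_set_cache(struct page *page, struct kmem_cache *cache)
{
	page->lru.next = (struct list_head *)cache;
}

static inline void page_set_slab(struct page *page, struct slab *slab)
{
	page->lru.prev = (struct list_head *)slab;
}

/* The read side casts the pointers back to their real types. */
static inline struct kmem_cache *page_get_cache(struct page *page)
{
	return (struct kmem_cache *)page->lru.next;
}

static inline struct slab *page_get_slab(struct page *page)
{
	return (struct slab *)page->lru.prev;
}
```

The trick is safe because a page owned by the slab allocator never sits on an LRU list, so the two list pointers are free for reuse; given only a page, the kernel can recover both its cache and its slab in constant time.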
cache_init_objs initializes the objects of the new slab by invoking the constructor for each object, assuming
one is present. (As only a very few parts of the kernel make use of this option, there is normally little
to do in this respect.) The kmem_bufctl list of the slab is also initialized by storing the value i + 1 at array
position i: because the slab is as yet totally unused, the next free element is always the next consecutive
element. As per convention, the last array element holds the constant BUFCTL_END.
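The resulting free-list chain is easy to reproduce. The following runnable sketch re-creates the initialization described above; the helper name init_bufctl is illustrative, and BUFCTL_END is defined here with the same all-ones pattern the kernel uses:

```c
#include <assert.h>

typedef unsigned int kmem_bufctl_t;
#define BUFCTL_END ((kmem_bufctl_t)(~0U))  /* end-of-chain sentinel */

/* Re-creation of the kmem_bufctl setup: entry i names the next free
 * object, i + 1, because a fresh slab is allocated front to back; the
 * last entry terminates the chain with BUFCTL_END. */
static void init_bufctl(kmem_bufctl_t *bufctl, int num)
{
	int i;

	for (i = 0; i < num - 1; i++)
		bufctl[i] = i + 1;
	bufctl[num - 1] = BUFCTL_END;
}
```

With this layout, allocating an object is just a matter of taking the object at the current free index and advancing that index to bufctl[free], so the array acts as a singly linked free list embedded in an integer array.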
The slab is now fully initialized and can be added to the slabs_free list of the cache. The number of new
objects generated is also added to the number of free objects in the cache (cachep->free_objects).
Freeing Objects
When an allocated object is no longer required, it must be returned to the slab allocator using
kmem_cache_free. Figure 3-52 shows the code flow diagram of this function.
kmem_cache_free immediately invokes __cache_free and forwards its arguments unchanged.
(Again, the reason is to prevent code duplication in the implementation of kfree, as discussed in
Section 3.6.5.)
As with allocation, there are two alternative courses of action depending on the state of the per-CPU
cache. If the number of objects held is below the permitted limit, a pointer to the object is stored
directly in the cache.
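This fast path can be sketched with a simplified model of the per-CPU array cache (struct array_cache in mm/slab.c; the trailing pointer array is inlined here, and the helper name cache_free_fastpath is illustrative):

```c
#include <assert.h>

#define AC_SIZE 8

/* Simplified model of the kernel's per-CPU struct array_cache. */
struct array_cache {
	unsigned int avail;   /* objects currently held in the cache */
	unsigned int limit;   /* maximum number of cached objects */
	void *entry[AC_SIZE]; /* pointers to the cached objects */
};

/* Sketch of the fast path when freeing: if there is room, the freed
 * object is merely recorded in the per-CPU cache; returning objects
 * to the slab lists is deferred. Returns 1 when the fast path was
 * taken, 0 when the cache is full and a flush would be required. */
static int cache_free_fastpath(struct array_cache *ac, void *objp)
{
	if (ac->avail < ac->limit) {
		ac->entry[ac->avail++] = objp;
		return 1;
	}
	return 0; /* cache full: the kernel would first move entries back */
}
```

Keeping recently freed objects in the per-CPU cache means a subsequent allocation on the same CPU can be satisfied without touching the slab lists, which avoids locking and tends to return cache-hot memory.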