slab: add opt-in caching layer of percpu sheaves
Specifying a non-zero value for a new struct kmem_cache_args field sheaf_capacity will set up a caching layer of percpu arrays called sheaves of the given capacity for the created cache.

Allocations from the cache will allocate via the percpu sheaves (main or spare) as long as they have no NUMA node preference. Frees will also put the object back into one of the sheaves.

When both percpu sheaves are found empty during an allocation, an empty sheaf may be replaced with a full one from the per-node barn. If none are available and the allocation is allowed to block, an empty sheaf is refilled from slab(s) by an internal bulk alloc operation. When both percpu sheaves are full during freeing, the barn can replace a full one with an empty one, unless it is over its limit of full sheaves. In that case a sheaf is flushed to slab(s) by an internal bulk free operation. Flushing sheaves and barns is also wired to the existing cpu flushing and cache shrinking operations.

The sheaves do not distinguish NUMA locality of the cached objects. If an allocation is requested with kmem_cache_alloc_node() (or a mempolicy with strict_numa mode enabled) with a specific node (not NUMA_NO_NODE), the sheaves are bypassed.

The bulk operations exposed to slab users also try to utilize the sheaves as long as the necessary (full or empty) sheaves are available on the cpu or in the barn. Once depleted, they will fall back to bulk alloc/free to slabs directly to avoid double copying.

The sheaf_capacity value is exported in sysfs for observability.

Sysfs CONFIG_SLUB_STATS counters alloc_cpu_sheaf and free_cpu_sheaf count objects allocated or freed using the sheaves (and thus not counting towards the other alloc/free path counters). Counters sheaf_refill and sheaf_flush count objects filled or flushed from or to slab pages, and can be used to assess how effective the caching is. The refill and flush operations will also count towards the usual alloc_fastpath/slowpath, free_fastpath/slowpath and other counters for the backing slabs.

For barn operations, barn_get and barn_put count how many full sheaves were taken from or put to the barn; the _fail variants count how many such requests could not be satisfied, mainly because the barn was either empty or full. While the barn also holds empty sheaves to make some operations easier, these are not critical enough to mandate their own counters. Finally, there are sheaf_alloc/sheaf_free counters.

Access to the percpu sheaves is protected by local_trylock() when potential callers include irq context, and local_lock() otherwise (such as when we already know the gfp flags allow blocking). The trylock failures should be rare and we can easily fall back. Each per-NUMA-node barn has a spin_lock.

When slub_debug is enabled for a cache with sheaf_capacity also specified, the latter is ignored so that allocations and frees reach the slow path where debugging hooks are processed. Similarly, we ignore it with CONFIG_SLUB_TINY, which prefers low memory usage to performance.

[boot failure: https://lore.kernel.org/all/583eacf5-c971-451a-9f76-fed0e341b815@linux.ibm.com/ ]

Reported-and-tested-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com>
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
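As a usage illustration, here is a minimal sketch of opting a cache into sheaves through the new kmem_cache_args field. The cache name, object type and the capacity of 32 are made up for the example; the args-based kmem_cache_create() form is the one backed by __kmem_cache_create_args() shown in the diff below.

	#include <linux/init.h>
	#include <linux/slab.h>

	struct my_obj {
		unsigned long id;
	};

	static struct kmem_cache *my_cache;

	static int __init my_cache_init(void)
	{
		struct kmem_cache_args args = {
			/* opt in: each percpu sheaf caches up to 32 objects */
			.sheaf_capacity = 32,
		};
		struct my_obj *obj;

		my_cache = kmem_cache_create("my_obj_cache", sizeof(struct my_obj),
					     &args, 0);
		if (!my_cache)
			return -ENOMEM;

		/* no node preference, so this can be served from a percpu sheaf */
		obj = kmem_cache_alloc(my_cache, GFP_KERNEL);
		if (obj)
			/* the freed object goes back into a sheaf when there is room */
			kmem_cache_free(my_cache, obj);

		return 0;
	}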
commit 2d517aa09b
parent 9d4e6ab865
@@ -335,6 +335,37 @@ struct kmem_cache_args {
	 * %NULL means no constructor.
	 */
	void (*ctor)(void *);
+	/**
+	 * @sheaf_capacity: Enable sheaves of given capacity for the cache.
+	 *
+	 * With a non-zero value, allocations from the cache go through caching
+	 * arrays called sheaves. Each cpu has a main sheaf that's always
+	 * present, and a spare sheaf that may be not present. When both become
+	 * empty, there's an attempt to replace an empty sheaf with a full sheaf
+	 * from the per-node barn.
+	 *
+	 * When no full sheaf is available, and gfp flags allow blocking, a
+	 * sheaf is allocated and filled from slab(s) using bulk allocation.
+	 * Otherwise the allocation falls back to the normal operation
+	 * allocating a single object from a slab.
+	 *
+	 * Analogically when freeing and both percpu sheaves are full, the barn
+	 * may replace it with an empty sheaf, unless it's over capacity. In
+	 * that case a sheaf is bulk freed to slab pages.
+	 *
+	 * The sheaves do not enforce NUMA placement of objects, so allocations
+	 * via kmem_cache_alloc_node() with a node specified other than
+	 * NUMA_NO_NODE will bypass them.
+	 *
+	 * Bulk allocation and free operations also try to use the cpu sheaves
+	 * and barn, but fallback to using slab pages directly.
+	 *
+	 * When slub_debug is enabled for the cache, the sheaf_capacity argument
+	 * is ignored.
+	 *
+	 * %0 means no sheaves will be created.
+	 */
+	unsigned int sheaf_capacity;
 };
 
 struct kmem_cache *__kmem_cache_create_args(const char *name,
@@ -235,6 +235,7 @@ struct kmem_cache {
 #ifndef CONFIG_SLUB_TINY
	struct kmem_cache_cpu __percpu *cpu_slab;
 #endif
+	struct slub_percpu_sheaves __percpu *cpu_sheaves;
	/* Used for retrieving partial slabs, etc. */
	slab_flags_t flags;
	unsigned long min_partial;

@@ -248,6 +249,7 @@ struct kmem_cache {
	/* Number of per cpu partial slabs to keep around */
	unsigned int cpu_partial_slabs;
 #endif
+	unsigned int sheaf_capacity;
	struct kmem_cache_order_objects oo;
 
	/* Allocation and freeing of slabs */
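To make the new fields above concrete, here is a rough sketch of the state they point to, reconstructed from the commit message. All layouts and names below other than cpu_sheaves and sheaf_capacity themselves are assumptions, not the patch's actual definitions.

	/*
	 * Sketch inferred from the commit message; the patch's real
	 * definitions may differ.
	 */
	struct slab_sheaf {
		unsigned int size;	/* number of objects currently cached */
		void *objects[];	/* up to sheaf_capacity object pointers */
	};

	struct slub_percpu_sheaves {
		local_trylock_t lock;	/* local_trylock() when irq callers are possible */
		struct slab_sheaf *main;	/* always present */
		struct slab_sheaf *spare;	/* may be absent (NULL) */
	};

	/* per-NUMA-node store of spare sheaves, protected by a spin_lock */
	struct node_barn {
		spinlock_t lock;
		struct list_head sheaves_full;
		struct list_head sheaves_empty;
		unsigned int nr_full;	/* checked against the full sheaves limit */
		unsigned int nr_empty;
	};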
@@ -163,6 +163,9 @@ int slab_unmergeable(struct kmem_cache *s)
		return 1;
 #endif
 
+	if (s->cpu_sheaves)
+		return 1;
+
	/*
	 * We may have set a slab to be unmergeable during bootstrap.
	 */
@@ -321,7 +324,7 @@ struct kmem_cache *__kmem_cache_create_args(const char *name,
		    object_size - args->usersize < args->useroffset))
		args->usersize = args->useroffset = 0;
 
-	if (!args->usersize)
+	if (!args->usersize && !args->sheaf_capacity)
		s = __kmem_cache_alias(name, object_size, args->align, flags,
				       args->ctor);
	if (s)
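Finally, the allocation-side decision flow the commit message describes can be summarized in pseudo-C. This is a simplified sketch of the described behavior built on the assumed structs sketched earlier, not the patch's actual code; the helpers barn_replace_empty_sheaf(), sheaf_refill() and slab_alloc_node() stand in for whatever the patch really calls, and locking and error handling are omitted.

	static void *sheaf_alloc_sketch(struct kmem_cache *s, gfp_t gfp, int node)
	{
		struct slub_percpu_sheaves *pcs;

		/* an explicit node (or strict_numa mempolicy) bypasses the sheaves */
		if (node != NUMA_NO_NODE)
			return slab_alloc_node(s, gfp, node);

		pcs = this_cpu_ptr(s->cpu_sheaves);

		/* fast path: main sheaf still holds objects */
		if (pcs->main->size)
			return pcs->main->objects[--pcs->main->size];

		/* main empty: swap in a non-empty spare sheaf if there is one */
		if (pcs->spare && pcs->spare->size) {
			swap(pcs->main, pcs->spare);
			return pcs->main->objects[--pcs->main->size];
		}

		/* both empty: trade an empty sheaf for a full one from the barn */
		if (barn_replace_empty_sheaf(s, pcs))
			return pcs->main->objects[--pcs->main->size];

		/* blocking allowed: refill the empty sheaf from slabs via bulk alloc */
		if (gfpflags_allow_blocking(gfp) && sheaf_refill(s, gfp, pcs->main))
			return pcs->main->objects[--pcs->main->size];

		/* otherwise fall back to the regular single-object allocation path */
		return slab_alloc_node(s, gfp, NUMA_NO_NODE);
	}

The free side mirrors this: put the object into a non-full sheaf, otherwise trade a full sheaf for an empty one from the barn, otherwise bulk free a sheaf to slab pages.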