| author | Vlastimil Babka <vbabka@suse.cz> | 2025-11-05 10:05:30 +0100 |
|---|---|---|
| committer | Vlastimil Babka <vbabka@suse.cz> | 2025-11-07 09:59:15 +0100 |
| commit | ea6b5e5778b1dc58b1909e4badd3e180ddae7418 (patch) | |
| tree | 0722396bb10208b755d57e058b28ef863af8f73d | |
| parent | f6087b926aea65768975fd4cbc3775965cbd8621 (diff) | |
slab: move kfence_alloc() out of internal bulk alloc
SLUB's internal bulk allocation __kmem_cache_alloc_bulk() can currently
allocate some objects from KFENCE, i.e. when refilling a sheaf. It works
but it's conceptually the wrong layer, as KFENCE allocations should only
happen when objects are actually handed out from slab to its users.
Currently, for sheaf-enabled caches, slab_alloc_node() can return a KFENCE
object via kfence_alloc(), but also via alloc_from_pcs() when a sheaf
was refilled with KFENCE objects. Continuing like this would also
complicate the upcoming sheaf refill changes.
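For illustration, a condensed, hypothetical view of the two paths described
above. kfence_alloc() is the real KFENCE hook, but the surrounding function
and the alloc_from_pcs()/slowpath calls are simplified stand-ins for the
mm/slub.c internals, not their exact signatures:

```c
/* Hypothetical sketch of a sheaf-enabled allocation before this patch. */
static void *slab_alloc_node_sketch(struct kmem_cache *s, gfp_t gfp, size_t size)
{
	/* Path 1: KFENCE object handed out directly to the user. */
	void *object = kfence_alloc(s, size, gfp);

	if (unlikely(object))
		return object;

	/*
	 * Path 2: the percpu sheaf may already contain KFENCE objects
	 * placed there by the internal bulk refill, so this can return
	 * one as well; that duplication is what the patch removes.
	 */
	object = alloc_from_pcs(s, gfp);
	if (object)
		return object;

	/* Otherwise take the regular slab slowpath. */
	return __slab_alloc_node(s, gfp, size);
}
```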
Thus remove KFENCE allocation from __kmem_cache_alloc_bulk() and move it
to the places that return slab objects to users. slab_alloc_node() is
already covered (see above). Add kfence_alloc() to
kmem_cache_alloc_from_sheaf() to handle KFENCE allocations from
prefilled sheaves, with a comment that the caller should not expect the
sheaf size to decrease after every allocation because of this
possibility.
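A minimal sketch of the kfence_alloc() hook added to
kmem_cache_alloc_from_sheaf(), assuming a struct slab_sheaf with an objects
array and a size counter; statistics, tracing and error handling are omitted,
so treat it as an illustration of the idea rather than the actual mm/slub.c
code:

```c
/* Illustrative only; assumes sheaf->objects[] and sheaf->size exist. */
void *kmem_cache_alloc_from_sheaf_sketch(struct kmem_cache *s, gfp_t gfp,
					 struct slab_sheaf *sheaf)
{
	void *ret = kfence_alloc(s, s->object_size, gfp);

	/*
	 * A KFENCE object may be returned instead of one from the sheaf,
	 * so the caller must not expect sheaf->size to decrease after
	 * every allocation.
	 */
	if (unlikely(ret))
		return ret;

	if (!sheaf->size)
		return NULL;

	return sheaf->objects[--sheaf->size];
}
```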
For kmem_cache_alloc_bulk(), implement a different strategy to handle
KFENCE upfront and rely on internal batched operations afterwards.
Assume there will be at most one KFENCE allocation per bulk allocation
and then assign its index in the array of objects randomly.
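The bulk strategy can be sketched as below: at most one kfence_alloc()
attempt per call, a KFENCE-free batched fill for the rest, and a random index
for the KFENCE object afterwards. Failure handling and accounting are
simplified compared with mm/slub.c; get_random_u32_below() is used here only
to show the random placement:

```c
/* Simplified sketch of the new kmem_cache_alloc_bulk() KFENCE handling. */
int kmem_cache_alloc_bulk_sketch(struct kmem_cache *s, gfp_t gfp,
				 size_t size, void **p)
{
	/* At most one KFENCE allocation per bulk call, handled upfront. */
	void *kfence_obj = kfence_alloc(s, s->object_size, gfp);
	size_t filled;

	if (kfence_obj && size == 1) {
		p[0] = kfence_obj;
		return 1;
	}

	/* The internal batched path no longer needs to know about KFENCE. */
	filled = __kmem_cache_alloc_bulk(s, gfp, kfence_obj ? size - 1 : size, p);
	if (!filled) {
		if (kfence_obj)
			__kfence_free(kfence_obj);
		return 0;
	}

	if (kfence_obj) {
		/* Assign the KFENCE object a random index in the array. */
		size_t idx = get_random_u32_below(filled + 1);

		p[filled] = p[idx];
		p[idx] = kfence_obj;
		filled++;
	}

	return filled;
}
```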
Cc: Alexander Potapenko <glider@google.com>
Cc: Marco Elver <elver@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Link: https://patch.msgid.link/20251105-sheaves-cleanups-v1-2-b8218e1ac7ef@suse.cz
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
