If we get to modify the fast path, this becomes solvable. We may need to pass an additional argument to `__do_cache_alloc()`, but if it's unused the compiler will optimize it out.
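A minimal sketch of that idea, assuming a SLAB-style `__do_cache_alloc()` (the exact signature and call chain vary across kernel versions, and the `orig_size` parameter name is an assumption):

```c
/*
 * Sketch only: thread the originally requested size through the
 * allocation fast path. Because __do_cache_alloc() is static and
 * typically inlined, an orig_size argument that a configuration
 * never consults is dead after inlining and costs nothing.
 */
static __always_inline void *
__do_cache_alloc(struct kmem_cache *cachep, gfp_t flags, size_t orig_size)
{
	/*
	 * orig_size would only be consulted by consumers that need the
	 * exact requested size (e.g. KFENCE, init_on_alloc); everyone
	 * else ignores it and the compiler drops it.
	 */
	return ____cache_alloc(cachep, flags);
}
```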
Another thing that relies on `kmalloc()` returning fixed-sized objects is `init_on_alloc`/`init_on_free`. Right now they just wipe `cache->size` bytes, immediately causing an OOB on KFENCE objects. For `init_on_alloc` this could be solved by plumbing the size down to the places where `slab_want_init_on_alloc()` is called. For `init_on_free` we'll need to disable SLAB initialization for the KFENCE pool and perform it in KFENCE itself.
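One way that plumbing could look, as a hedged sketch: `slab_want_init_on_alloc()` and `is_kfence_address()` are existing helpers, but `maybe_init_object()` and its `orig_size` parameter are hypothetical names for illustration.

```c
/*
 * Sketch: once the requested size reaches the post-allocation hook,
 * zero only that many bytes for KFENCE objects. Wiping cache->size
 * bytes would run past the end of a KFENCE allocation placed at a
 * page boundary.
 */
static inline void maybe_init_object(struct kmem_cache *s, void *obj,
				     gfp_t flags, size_t orig_size)
{
	if (slab_want_init_on_alloc(flags, s))
		memset(obj, 0,
		       is_kfence_address(obj) ? orig_size : s->size);
}
```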
This also applies to simple `__GFP_ZERO` allocations.
We're passing the `orig_size` argument to `slab_alloc_node()` now, which solves this problem.
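For context, a sketch of what that change enables (the parameter list of `slab_alloc_node()` differs across kernel versions, so treat this as illustrative rather than the exact upstream code):

```c
/*
 * Sketch: with the caller's requested size available in the fast
 * path, KFENCE can be asked for an allocation of the exact size
 * instead of the rounded-up size class of the kmalloc cache.
 */
static __always_inline void *slab_alloc_node(struct kmem_cache *s,
		gfp_t gfpflags, int node, unsigned long addr,
		size_t orig_size)
{
	void *object = kfence_alloc(s, orig_size, gfpflags);

	if (unlikely(object))
		return object;

	/* ... regular slab fast/slow path would continue here ... */
	return NULL;
}
```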
When serving `kmalloc(size, ...)` allocations, SLUB picks the smallest kmalloc cache that fits `size` in `kmalloc_slab()` and doesn't preserve the information about the requested size. E.g. when allocating 272 bytes, the request is rounded up to the 512-byte cache, so KFENCE can place the object either at the beginning of the page or in the middle of the page (512 - 272 = 240 bytes to the left of the end of the page), making it impossible to detect OOBs to the right of that object.
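To make the arithmetic concrete, here's a small user-space sketch of the power-of-two rounding the kmalloc size classes apply in this range (the real `kmalloc_slab()` also has 96- and 192-byte classes, omitted here for brevity):

```c
#include <stdio.h>

/* Mimic kmalloc's power-of-two size classes (ignoring 96/192). */
static size_t kmalloc_class(size_t size)
{
	size_t c = 8;

	while (c < size)
		c <<= 1;
	return c;
}

int main(void)
{
	size_t req = 272;
	size_t cls = kmalloc_class(req);

	/*
	 * 512 - 272 = 240 bytes of slack: writes into the slack are
	 * in-bounds for the 512-byte object, so an OOB to the right
	 * of the 272-byte allocation goes undetected unless the
	 * originally requested size is preserved.
	 */
	printf("request=%zu class=%zu slack=%zu\n", req, cls, cls - req);
	return 0;
}
```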