google / kernel-sanitizers

Linux Kernel Sanitizers, fast bug-detectors for the Linux kernel
https://google.github.io/kernel-sanitizers/

[kfence] kmalloc allocations are rounded up to kmalloc cache size #73

Closed ramosian-glider closed 4 years ago

ramosian-glider commented 4 years ago

When serving kmalloc(size, ...) allocations, SLUB picks the smallest kmalloc cache whose object size is at least size in kmalloc_slab() and doesn't preserve the information about the requested size. E.g. when allocating 272 bytes, KFENCE can place the object either at the beginning of the page, or in the middle of the page (ending 240 bytes to the left of the end of the page), making it impossible to detect OOB accesses to the right of that object.

melver commented 4 years ago

If we get to modify the fast path this becomes solvable. We may need to pass an additional argument to __do_cache_alloc, but if it's unused the compiler will optimize it out.

ramosian-glider commented 4 years ago

Another thing that relies on kmalloc() returning fixed-sized objects is init_on_alloc/init_on_free. Right now they just wipe cache->size bytes, immediately causing an OOB on KFENCE objects.

For init_on_alloc this could be solved by plumbing the size down to the places where slab_want_init_on_alloc() is called. For init_on_free we'll need to disable SLAB initialization for the KFENCE pool and perform it in KFENCE itself.

ramosian-glider commented 4 years ago

> Another thing that relies on kmalloc() returning fixed-sized objects is init_on_alloc/init_on_free. Right now they just wipe cache->size bytes, immediately causing an OOB on KFENCE objects.

This also applies to simple __GFP_ZERO allocations.

ramosian-glider commented 4 years ago

We're passing the orig_size argument to slab_alloc_node() now, which solves this problem.