Closed cgzones closed 1 year ago
Thanks. I initially misunderstood the issue you were fixing. This happens in a case where hardened_malloc is already globally initialized, the process obtains a small (slab) allocation from it and then calls realloc
on that in a thread which hasn't made an allocation with malloc yet.
This was overlooked when adding support for arenas and GrapheneOS currently doesn't use arenas with hardened_malloc since it uses too much memory for our usage when slab allocation quarantines are enabled. It uses a per-arena-per-size-class quarantine and sets the quarantine size based on the largest slab allocation size class so having extended size classes enabled greatly increases the memory dedicated to quarantines. Providing more configuration for slab allocation size classes and potentially optimizing their substantial impact on performance and memory usage is becoming a priority.
We'll tag a new release of the standalone hardened_malloc soon to get the fix to people using the stable releases. Would have done it already but things are not going particularly well in terms of the ongoing harassment, fabricated stories, swatting attacks, etc. and it's very difficult to get basic things done.
If N_ARENA is greater than 1, `thread_arena` is initially set to N_ARENA, which is an invalid index into `ro.size_class_metadata[]`. The arena actually used is computed in `init()`.
Ensure `init()` is called if a new thread is only using `realloc()`, to avoid undefined behavior; e.g. `pthread_mutex_lock()` might crash due to the memory not holding an initialized mutex.
Affects mesa 23.2.0~rc4.
Example backtrace using glmark2 (note `arena=4` with the default N_ARENA being 4):