wolfpld / tracy

Frame profiler
https://tracy.nereid.pl/

Feature Request: Supporting bulk-deallocation in memory profiling #917

Open simonvanbernem opened 1 month ago

simonvanbernem commented 1 month ago

I often use pool allocators in my programs, which allow for many small allocations but ignore calls to free these allocations, because they will all be freed at once after it is known that the allocated objects are no longer needed. This can simplify and speed up memory management tremendously in certain cases, as you simply don't need to call free until the end.

(The allocator basically behaves like a std::vector in that you can push_back many times, but when destroying the vector, you don't need to shrink it back to 0; you just release the entire memory range.)
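
A minimal sketch of the kind of pool (arena) allocator described here; the names `pool_t`, `pool_init`, `pool_alloc`, `pool_reset`, and `pool_release` are made up for illustration and are not part of Tracy or any particular codebase:

```c
// Minimal pool/arena allocator sketch: a bump pointer over one backing block.
// Individual frees are ignored; the whole range is reset or released at once.
#include <stddef.h>
#include <stdlib.h>

typedef struct {
    char*  memory;   // one contiguous backing block
    size_t capacity; // total size of the backing block
    size_t used;     // bump offset; individual frees are never tracked
} pool_t;

static void pool_init(pool_t* p, size_t capacity) {
    p->memory = malloc(capacity);
    p->capacity = capacity;
    p->used = 0;
}

// Each call hands out the next slice of the backing block.
static void* pool_alloc(pool_t* p, size_t size) {
    if (p->used + size > p->capacity) return NULL;
    void* result = p->memory + p->used;
    p->used += size;
    return result;
}

// Reset: all allocations become invalid, but the memory is kept for reuse.
static void pool_reset(pool_t* p) {
    p->used = 0;
}

// Release: the entire backing block is returned to the system in one call.
static void pool_release(pool_t* p) {
    free(p->memory);
    p->memory = NULL;
    p->capacity = 0;
    p->used = 0;
}
```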

Trying to instrument these allocators for Tracy's memory profiling is a problem, however, because there is only a single free event for many allocations. To satisfy Tracy's 1-to-1 alloc/free requirement, the allocator would need to keep a list of its allocations, which negates the advantages of these types of allocators. But if I don't do this, all allocations of this allocator will appear as leaks (which is bad), AND if the allocator is reset but reuses the memory, Tracy will see two allocations for the same address without a free in between and terminate the profiling session (which is worse).
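
For context, this is roughly what the naive instrumentation looks like with the existing 1-to-1 API, assuming the `TracyCAlloc`/`TracyCFree` macros from `tracy/TracyC.h` and the hypothetical pool helpers sketched above:

```c
// Per-allocation instrumentation is easy, but there is no place to emit the
// matching TracyCFree events when the pool is reset or released in one call.
#include "tracy/TracyC.h"

void* pool_alloc_instrumented(pool_t* p, size_t size) {
    void* ptr = pool_alloc(p, size);
    TracyCAlloc(ptr, size);   // one alloc event per allocation
    return ptr;
}

void pool_reset_instrumented(pool_t* p) {
    // No per-allocation pointer list is kept, so no matching TracyCFree can
    // be emitted here. Every prior allocation now looks like a leak, and the
    // next pool_alloc_instrumented call that reuses an address is reported as
    // a second allocation at the same address without an intervening free.
    pool_reset(p);
}
```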

To support memory profiling for this kind of allocation scheme easily, it would be great if Tracy had a TracyCFreeRange function in addition to the existing TracyCFree function: it would take an address and a size and automatically treat all live allocations in the provided memory range as deallocated.
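
A sketch of how the proposed call could be used; note that `TracyCFreeRange` does not exist in Tracy today, and its name and signature are exactly what this issue is asking for:

```c
// Hypothetical usage of the proposed TracyCFreeRange(address, size) macro.
void pool_reset_with_range_free(pool_t* p) {
    // One call would mark every live allocation inside
    // [memory, memory + used) as freed, without the pool having to
    // remember individual pointers.
    TracyCFreeRange(p->memory, p->used);
    pool_reset(p);
}
```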

wolfpld commented 1 month ago

Duplicate of #915.

simonvanbernem commented 1 month ago

I don't think it's a duplicate. #915 is about freeing the entirety of a pool, whereas an address + size variant would also allow partially freeing a pool (which I failed to mention in my initial description, though).

That use case is more niche, but I have done this with stack allocators.
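
As a sketch of that stack-allocator case (the names and the `TracyCFreeRange` call are again hypothetical, reusing the pool sketch from above): freeing back to a saved marker releases only the tail of the allocator, which maps naturally to an address + size range rather than to the whole pool:

```c
// Stack-allocator style partial free: everything allocated after the marker
// is released in one step, while earlier allocations stay live.
typedef size_t stack_marker_t;

static stack_marker_t stack_get_marker(pool_t* p) {
    return p->used;
}

static void stack_free_to_marker(pool_t* p, stack_marker_t marker) {
    // Only the tail range [memory + marker, memory + used) is freed.
    TracyCFreeRange(p->memory + marker, p->used - marker);  // proposed API
    p->used = marker;
}
```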