ysbaddaden closed this issue 6 years ago.
I started working on this. I'll most probably use stdatomic.h from C11 for the counters, but not until we need concurrency (the counters will need atomics, the block lists will need either atomics or locks, and large allocations will need locks).
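For reference, a minimal sketch of what a C11 atomic counter could look like; the names `allocated_since_collect`, `count_allocation` and `reset_allocation_counter` are hypothetical, not taken from the actual implementation:

```c
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical global counter: bytes allocated since the last collection.
   _Atomic makes increments safe once allocations happen from many threads. */
static _Atomic size_t allocated_since_collect = 0;

static inline void count_allocation(size_t size) {
    /* Relaxed ordering is enough for a statistics counter. */
    atomic_fetch_add_explicit(&allocated_since_collect, size, memory_order_relaxed);
}

static inline void reset_allocation_counter(void) {
    atomic_store_explicit(&allocated_since_collect, 0, memory_order_relaxed);
}
```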
Fixed in 62b94bfd4eaf605cfa35406d477b1cbf57f90589
Whenever we can't allocate an object, the GC will collect memory, then try again, and eventually grow the memory. This works wonderfully in programs with mostly short-lived allocations: the collector will usually reclaim enough memory, and only sometimes fail and fall back to growing the HEAP.
This works poorly when memory usage is growing rapidly: the HEAP quickly fills with long-lived allocations that stay reachable, so each collection reclaims little. In the best case a collection won't reclaim enough and the HEAP grows immediately; in the worst case it reclaims just enough for the current allocation but not for the following one, which triggers yet another collection. Repeat that, and we get horrible performance. In such cases, initializing the HEAP to a large value usually fixes the issue.
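To illustrate the slow path described above, here is a rough sketch; the names `gc_malloc_slow`, `try_allocate`, `collect` and `grow_heap` are placeholders, not the actual API:

```c
#include <stddef.h>

/* Placeholder declarations for the sketch. */
void *try_allocate(size_t size);   /* returns NULL when the HEAP is full */
void  collect(void);               /* runs a full collection             */
void  grow_heap(size_t size);      /* maps more memory into the HEAP     */

/* Rough shape of the current slow path: collect, retry, then grow.
   When the HEAP is dominated by long-lived objects, collect() frees almost
   nothing, so nearly every allocation ends up paying for a full collection. */
void *gc_malloc_slow(size_t size) {
    void *ptr = try_allocate(size);
    if (ptr) return ptr;

    collect();                      /* may reclaim just enough for this object...      */
    ptr = try_allocate(size);
    if (ptr) return ptr;            /* ...leaving the next allocation to collect again */

    grow_heap(size);                /* last resort: grow the HEAP */
    return try_allocate(size);
}
```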
The issue is currently mitigated thanks to Overflow Allocation, which always allocates into free blocks and prefers growing memory over running a collection. But performance is still degraded, and Overflow Allocation should also try a collection before growing memory. A sketch of that suggested ordering follows.
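This is only a sketch of the suggested ordering, with placeholder names (`overflow_allocate`, `allocate_from_free_block`), not the actual overflow allocator:

```c
#include <stddef.h>

void *allocate_from_free_block(size_t size);  /* NULL when no free block is left */
void  collect(void);
void  grow_heap(size_t size);

void *overflow_allocate(size_t size) {
    void *ptr = allocate_from_free_block(size);
    if (ptr) return ptr;

    /* Suggested change: attempt a collection before growing.
       Today the allocator grows the HEAP as soon as no free block is available. */
    collect();
    ptr = allocate_from_free_block(size);
    if (ptr) return ptr;

    grow_heap(size);
    return allocate_from_free_block(size);
}
```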
We need a mechanism to skip collections when memory is growing rapidly. For example, BDW counts how much memory was allocated since the last collection; if that count is below a certain threshold (relative to the total HEAP currently allocated), it won't try to collect and will grow the HEAP instead. That sounds like a good idea.
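A rough sketch of such a heuristic, modeled after BDW's free-space-divisor idea; all names and the divisor value are illustrative, taken from neither implementation:

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative globals; in practice these would live in the allocator state
   (allocated_since_collect being the counter from the earlier sketch). */
extern size_t heap_size;                /* total HEAP currently allocated    */
extern size_t allocated_since_collect;  /* bytes allocated since the last GC */

/* Collect only once at least 1/FREE_SPACE_DIVISOR of the HEAP has been
   allocated since the last collection; below that threshold a collection
   isn't worth its cost, so grow the HEAP instead. The value 4 is purely
   illustrative. */
#define FREE_SPACE_DIVISOR 4

static bool should_collect(void) {
    return allocated_since_collect >= heap_size / FREE_SPACE_DIVISOR;
}
```

The slow path sketched earlier would then check `should_collect()` first, and jump straight to `grow_heap()` when it returns false instead of paying for a collection that can't reclaim much.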