r-lyeh-archived / ltalloc

LightweighT Almost Lock-Less Oriented for C++ programs memory allocator
BSD 3-Clause "New" or "Revised" License

chunk leaks #30

Open · gmit3 opened this issue 3 years ago

Hello,

I have a single allocation size and a single thread.

If I do this in a loop:
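Roughly like this (a minimal sketch, not the exact test code; the allocation size and counts are arbitrary placeholders, and it assumes the ltmalloc/ltfree/ltsqueeze names from ltalloc.h):

```cpp
// Single thread, single allocation size: allocate a batch of blocks,
// free them all, then ask ltalloc to return freed chunks to the system.
#include <vector>
#include "ltalloc.h"

int main()
{
    std::vector<void*> blocks;
    for (int iter = 0; iter < 100; ++iter)
    {
        for (int i = 0; i < 10000; ++i)   // many blocks of a single size class
            blocks.push_back(ltmalloc(128));
        for (void* p : blocks)            // free every block that was allocated
            ltfree(p);
        blocks.clear();
        ltsqueeze(0);                     // release as much memory as possible
    }
}
```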

After ltsqueeze, the last allocated chunk will leak. This doesn't always happen; it depends on the number of allocations performed and the allocation size, but once it starts happening it's consistent and reproducible, and a chunk leaks every iteration.

From what I see in the debugger, the first ltsqueeze fails to free the last, half-used chunk, and subsequent ltsqueezes then fail to free both the previous last chunks and the new last chunk.

If you don't have time to take a look at this yourself, could you perhaps give me some pointers on what to look at? Thanks!

r-lyeh commented 3 years ago

Hey @gmit3. Hmmm... I mirrored this repo before @alextretyak (ltalloc's author) moved his repository to GitHub. There are a few new commits in the official repo that we're lagging behind. You might try the latest ltalloc from https://github.com/alextretyak/ltalloc and see if that helps? Let me know if that proves to be sufficient. If it is, I'll happily merge the missing commits :D

r-lyeh commented 3 years ago

Also pinging @jlaumon for awareness, as he made some improvements and fixes some time ago.

gmit3 commented 3 years ago

There is only one new commit there (from 2020-01-22), and it doesn't really solve the problem I've found.

In the meantime, I've tried to fix the bug myself, but I simply cannot find what's wrong. If anyone is interested in taking a look, I could prepare a test that demonstrates it...

jlaumon commented 3 years ago

I've never used ltsqueeze, so it's not something I would have noticed. I don't really have time to investigate it these days, but I can recommend https://github.com/mjansson/rpmalloc, which is also simple to integrate (single file as well) and has similar performance (and the code is much nicer/maintained).

alextretyak commented 3 years ago

First, let me say that ltalloc_squeeze(0) does not mean "return all memory to the system", but rather "return as much memory to the system as possible". By design of ltalloc, ltalloc_squeeze() cannot touch blocks in other threads' thread-local free lists; it frees only memory from the central free lists. Theoretically, ltalloc_squeeze() could free blocks located in the current thread's thread-local free list, but this is rarely useful in practice.

after ltsqueeze, the last allocated chunk will leak

I wouldn't exactly call that a leak, because new allocations will take blocks from this "leaked" chunk, and a new chunk will be obtained from the system only after the "leaked" chunks are depleted. That is, you can run the loop indefinitely (each iteration allocating and then freeing all allocated blocks), and after each such iteration there should be just one "leaked" chunk in total.
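One rough way to check this (a sketch assuming a Linux/POSIX system, not part of ltalloc): run the allocate/free/squeeze loop and watch peak RSS. If a chunk were truly lost on every iteration, ru_maxrss would keep climbing; if retained chunks are reused, it should plateau after the first few iterations.

```cpp
// Allocate/free/squeeze repeatedly and print peak RSS per iteration.
// Sizes and counts are arbitrary; API names assumed from ltalloc.h.
#include <cstdio>
#include <vector>
#include <sys/resource.h>
#include "ltalloc.h"

int main()
{
    std::vector<void*> blocks;
    for (int iter = 0; iter < 50; ++iter)
    {
        for (int i = 0; i < 10000; ++i)
            blocks.push_back(ltmalloc(128));
        for (void* p : blocks)
            ltfree(p);
        blocks.clear();
        ltsqueeze(0);

        rusage ru{};
        getrusage(RUSAGE_SELF, &ru);      // ru_maxrss is peak RSS in kB on Linux
        std::printf("iter %d: peak RSS %ld kB\n", iter, ru.ru_maxrss);
    }
}
```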

alextretyak commented 3 years ago

Hm... It seems that the last sentence [in my previous comment] is wrong:

  1. Blocks are moved between a thread cache and the central cache in batches, and a single batch can contain slightly more blocks than fit into one chunk. So more than one chunk [for any given size class] may be represented in the thread cache; see the illustration after this list.
  2. Free blocks in the thread-local free lists can belong to different chunks, and each such block prevents its chunk from being released/"returned to the system" by an ltalloc_squeeze(0) call.
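To illustrate with made-up numbers (these are not ltalloc's actual constants): if a chunk holds 100 blocks of a given size class and one transfer batch moves 112 blocks, the thread cache ends up holding blocks that span at least two chunks; a single cached free block from each of those chunks is then enough to keep both pinned, so ltalloc_squeeze(0) cannot return either of them to the system.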

gmit3 commented 3 years ago

Hm, I understand what you're saying, but I have prepared a simple test project that shows the problem:

After every ltsqueeze, one more chunk stays alive. It seems they are being reused (so maybe it's not a leak after all), but the number keeps growing and the behaviour is strange.

In my sample, ltalloc.cc is modified to count live chunks, and LTALLOC_CHUNK_SIZE is set to 1 MB to show the problem more clearly (it's not dependent on the chunk size).

testltallocleak.zip