When it comes to memory allocation, there are two problems: memory itself and address space. Dynamic memory allocation is easy using the commit/release approach, but that requires reserving a fixed block of the maximum possible size in the address space. This is especially a problem if it's shared memory: if we reserve too much, other applications may be starved of address space to load more DLLs and so on; if we reserve too little, our app may run out of memory at some stage.
The solution is to use both approaches together: start with a small reserved block and an even smaller committed area. Commit more as needed and, when the block is exhausted, reserve another block and start committing there as well. However, if it's shared memory there is another problem: we need to give access to newly reserved blocks to the other processes using LIBCX. It's not a big deal, but for this we will have to track the PIDs of all processes that use it.
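For illustration, here is a minimal sketch of the reserve-first, commit-later part on OS/2. The names and sizes are hypothetical and this is not the actual LIBCX code; it just shows reserving a giveable shared object in high memory without committing it, then committing only the first chunk:

```c
#define INCL_DOSMEMMGR
#include <os2.h>

#define RESERVE_SIZE   (2 * 1024 * 1024)  /* address space to reserve */
#define INITIAL_COMMIT (64 * 1024)        /* memory committed up front */

static void *shared_base;

/* Reserve a giveable shared object in high memory without committing it,
   then commit only the first chunk. Returns the OS/2 error code. */
APIRET reserve_shared_heap(void)
{
    APIRET rc = DosAllocSharedMem(&shared_base, NULL, RESERVE_SIZE,
                                  PAG_READ | PAG_WRITE | OBJ_GIVEABLE | OBJ_ANY);
    if (rc != NO_ERROR)
        return rc;

    /* Commit the initial chunk; the rest stays reserved only. */
    return DosSetMem(shared_base, INITIAL_COMMIT,
                     PAG_READ | PAG_WRITE | PAG_COMMIT);
}
```

Handing the object to another process would go through something like DosGiveSharedMem, which takes the target PID; that is where the PID tracking mentioned above comes in.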
I think we will start with the commit/release approach first, reserving a 1MB block per LIBCX, and then we will see. Given the fixed fcntl region joining (#2), this may be much more than LIBCX ever needs. Of course, if we add more functionality, we may need more memory.
Note that the reserved buffer is now 2MB, but we allocate (commit) only 64K initially; the rest is committed as needed. The task of making the reserved buffer dynamic is moved to #9 and is not too relevant now. Reserving 2MB of address space in high memory doesn't look like too much, and 2MB should be enough for very heavy usage of Samba once #2 is done.
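The commit-as-needed part could look roughly like this, continuing the previous sketch (again hypothetical names, not the actual LIBCX code): when the heap allocator runs out of committed space, commit one more chunk inside the already reserved region.

```c
#define COMMIT_CHUNK (64 * 1024)

static ULONG committed = INITIAL_COMMIT;

/* Commit one more chunk within the reserved region.
   Returns 1 on success, 0 when the reservation is exhausted
   or the system has no memory left to commit. */
int grow_shared_heap(void)
{
    APIRET rc;

    if (committed + COMMIT_CHUNK > RESERVE_SIZE)
        return 0;  /* 2MB reservation exhausted */

    rc = DosSetMem((char *)shared_base + committed, COMMIT_CHUNK,
                   PAG_READ | PAG_WRITE | PAG_COMMIT);
    if (rc != NO_ERROR)
        return 0;

    committed += COMMIT_CHUNK;
    return 1;
}
```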
The current size of the shared heap used by fcntl locking is 64K. One lock/free record is roughly 20 bytes, which gives us about 3280 locks across all files and all processes (when we run out of memory, we return ENOLCK). This looks like a lot at first sight, but practice shows it can be exceeded. One example is
tdbtorture -n 10
(this is a stress test from Samba). This is partly related to fragmentation (see #2 for details), and once fragmentation is reduced, memory requirements will go down as well (it is unlikely that an application needs more than 4k locks at the same time). However, the possibility of running out of memory is still there, so we should implement dynamic heap growth.
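To show how dynamic growth and ENOLCK could fit together, here is a sketch of a lock record allocator. The record layout and alloc_from_heap are assumptions for illustration only, not LIBCX internals; grow_shared_heap is from the sketch above.

```c
#include <errno.h>
#include <stddef.h>

int grow_shared_heap(void);              /* from the previous sketch */
void *alloc_from_heap(size_t size);      /* hypothetical shared-heap allocator */

/* Hypothetical lock record, roughly 20 bytes per entry. */
struct lock_rec {
    long  start, len;
    short type;
    int   pid;
    struct lock_rec *next;
};

/* Allocate a lock record from the shared heap, growing the committed
   area on demand and reporting ENOLCK only when growth fails too. */
struct lock_rec *alloc_lock_rec(void)
{
    struct lock_rec *rec = alloc_from_heap(sizeof(*rec));
    if (rec == NULL && grow_shared_heap())
        rec = alloc_from_heap(sizeof(*rec));
    if (rec == NULL)
        errno = ENOLCK;
    return rec;
}
```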