Closed — bdrung closed this issue 4 years ago
The superblock initialization function did a greater-or-equal check instead of testing for greater than 1 MiB.
This should be fixed by commit 3c79472a773210127bec1768aaf85e540daa3f9f
Tested and it works, but it consumed 6.3 GiB of memory in the process. Even with 16 GiB of RAM, my first try was OOM-killed because not enough memory was available.
That is definitely odd. The block processor has a threshold for the internal queue and does a blocking wait for completion if the threshold is crossed. By default the tools set it to 10 × the number of threads, so for 12 threads and a 1 MiB block size, that *should* be 120 MiB plus management overhead.
Can you reproduce it? The generated SquashFS image is 238 MB.
Yes, I managed to replicate it with a ~374 MiB SquashFS image I had sitting around. Memory consumption peaked at ~5 GiB, which is very strange. The unpacked files total ~500 MiB, so even copying all input into memory shouldn't use that much RAM.
I also observed the same behaviour you mentioned in #30: towards the end, when all blocks were submitted and gensquashfs waited for them to be processed, only one worker thread was running at a time.
I managed to get some perf profiling data with a smaller image, which confirms that a weird pattern emerges among the worker threads once the main thread stops submitting blocks.
For both issues, I already have some suspicions as to what might be going on, but I will do some more memory and CPU profiling first.
Should be fixed with commit 6d4faedcb53f54253160f1717fac609f922ae0c7
Tested and confirmed. The memory consumption is now negligible.
tar2sqfs fails if the block size is set to 1 MiB:
Reducing the block size to 512 KiB works. Using 1 MiB block size works with squashfs-tools:
I am using squashfs-tools-ng 0.7 on Ubuntu 19.10.