iloop2020 opened this issue 4 months ago
Hi,
Here is the procedure to reproduce the issue:
Settings: BLOCK_SIZE:1048576 BLOCK_COUNT:4095 CACHE_SIZE:1048576
Create 1024 files:
for i in $(seq 1 1024);
do
dd if=/dev/random of=disk/test$i.img bs=16 count=1 oflag=dsync
done
The run gets stuck and the filesystem ends up corrupted.
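For reference, our littlefs configuration looks roughly like this (a sketch: the sd_* callbacks, read/prog sizes, lookahead size, and block_cycles are placeholders for our driver; only the block size, block count, and cache size are the actual settings listed above):

```c
#include "lfs.h"

// Block device callbacks provided by our SD card driver (placeholders).
extern int sd_read(const struct lfs_config *c, lfs_block_t block,
                   lfs_off_t off, void *buffer, lfs_size_t size);
extern int sd_prog(const struct lfs_config *c, lfs_block_t block,
                   lfs_off_t off, const void *buffer, lfs_size_t size);
extern int sd_erase(const struct lfs_config *c, lfs_block_t block);
extern int sd_sync(const struct lfs_config *c);

const struct lfs_config cfg = {
    .read  = sd_read,
    .prog  = sd_prog,
    .erase = sd_erase,
    .sync  = sd_sync,

    .read_size      = 512,       // placeholder
    .prog_size      = 512,       // placeholder
    .block_size     = 1048576,   // BLOCK_SIZE
    .block_count    = 4095,      // BLOCK_COUNT
    .cache_size     = 1048576,   // CACHE_SIZE
    .lookahead_size = 512,       // placeholder
    .block_cycles   = 500,       // placeholder
};
```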
Thank you for your help.
Hi @iloop2020, thanks for creating an issue.
This is a known issue with littlefs; it scales pretty terribly with large block sizes ($O(n^2)$).
What is your block device? Some devices support a range of erase sizes, with varying names which can get a bit confusing.
metadata_max may also be useful for artificially limiting the size of metadata blocks to prevent a performance cliff:
https://github.com/littlefs-project/littlefs/blob/d01280e64934a09ba16cac60cf9d3a37e228bb66/lfs.h#L271-L275
These issues may have a bit more info: https://github.com/littlefs-project/littlefs/issues/214, https://github.com/littlefs-project/littlefs/pull/502
Hi @geky ,
Thank you very much for your reply and support.
Block device info: We are using an SD card as the block device; the sector size is 512 bytes.
Question: For our case with a 1MB block size, what is the suggested metadata_max?
Thank you.
Hi @iloop2020,
I don't have very useful numbers, so it's hard to say without a bit of trial and error.
A good starting value might be 4KiB, decreasing it if you still notice performance issues.
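Roughly, that means adding one field to your lfs_config, something like the snippet below (only a sketch; the other values just echo the settings you reported):

```c
const struct lfs_config cfg = {
    // ... your existing callbacks and geometry ...
    .block_size   = 1048576,
    .block_count  = 4095,
    .cache_size   = 1048576,

    // Cap metadata logs well below the block size so compaction
    // stays cheap; start around 4KiB and tune from there.
    .metadata_max = 4096,
};
```

metadata_max just bounds how much of each metadata block littlefs will actually use, so you trade a bit of metadata storage for shorter compactions.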
Hi,
I found a read performance drop with a 1MB block size after a number of new files are created.
When measuring the drop, I see extra sector_read() calls, especially on Block 0 and Block 1. Deleting those newly created files does not recover the performance.
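For reference, I'm counting per-block reads with a thin wrapper around our block device read callback, roughly like this (sd_read, the counter array, and the block count are our own instrumentation and assumptions, not littlefs API):

```c
#include "lfs.h"

#define NUM_BLOCKS 4095
static unsigned long read_counts[NUM_BLOCKS];

// The real read callback from our SD card driver.
extern int sd_read(const struct lfs_config *c, lfs_block_t block,
                   lfs_off_t off, void *buffer, lfs_size_t size);

// Wrapper installed as .read in lfs_config: counts reads per block
// before forwarding to the real driver.
static int counting_read(const struct lfs_config *c, lfs_block_t block,
                         lfs_off_t off, void *buffer, lfs_size_t size) {
    if (block < NUM_BLOCKS) {
        read_counts[block]++;
    }
    return sd_read(c, block, off, buffer, size);
}
```

Dumping read_counts after the test is how I see that most of the extra reads land on Block 0 and Block 1.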
Thank you for your advice.