Open Antatip opened 2 years ago
Yes, this is one limitation of littlefs currently, though I do have plans to improve the performance by changing the directory structure from a linked list into a B-tree (the directory entries are already stored in sorted order to make this change possible).
That said, this has been low priority, since I was surprised not to hear it being a big issue for users. Perhaps that is just due to more pressing performance issues.
Then I put every folder in alphabetical folders:
Sounds like a good workaround to me. I believe git uses a similar directory structure in its object store for similar reasons (see `ls .git/objects`).
I tried using the block read/write/erase callbacks, but I got an LFS_ERR_NOSPC error after creating ~100 files, so I use the sector functions instead (w25qxx driver).
This is correct. Unfortunately, the naming of sectors/blocks/pages/clusters is inconsistent between different types of storage, so littlefs uses the term "block" for everything.
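To make the terminology concrete, here is a sketch of an `lfs_config` that maps a W25Qxx part's 4 KB erase sectors to littlefs "blocks", assuming the 16M flash mentioned above means 16 MB. The `w25qxx_lfs_*` callback names are placeholders for the user's driver glue, not real API names, and the sizes are illustrative.

```c
#include "lfs.h"

/* Config fragment: one 4 KB erase sector per littlefs block.
 * Callback names and sizes are assumptions for this sketch. */
const struct lfs_config cfg = {
    .read  = w25qxx_lfs_read,
    .prog  = w25qxx_lfs_prog,
    .erase = w25qxx_lfs_erase,
    .sync  = w25qxx_lfs_sync,

    .read_size      = 256,   /* flash page granularity */
    .prog_size      = 256,
    .block_size     = 4096,  /* one erase sector = one "block" */
    .block_count    = 4096,  /* 16 MB / 4 KB */
    .cache_size     = 256,
    .lookahead_size = 16,
    .block_cycles   = 500,
};
```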
PS: Why are folders a lot quicker to find than files?
That is curious. I think it's caused by the extra space inline files take up. Are some of your files smaller than the cache size and ~1/8th of the erase block? Those get stored inline in the directory to avoid using a whole block. They probably take up more space than a plain directory entry, meaning more directory blocks in the parent directory need to be scanned during lookup.
Hi,
Thank you very much.
My files range from a few bytes up to 8 KB. I measured this (with a buffer-size comparison, before any optimizations):
More than 150 ms is still a bit too much for low energy consumption, so I think I will switch to SPIFFS for the moment.
I made that choice by measuring each file system I know, and I got this:
Maybe this will help: I also ran performance tests on read and write (each point is measured on a freshly formatted file system; poor granularity, but useful for comparison):
Apart from that, thank you for littlefs, which is easy to set up and try out.
Thanks for sharing your results. These are quite interesting. The comparison of open times across filesystems is quite surprising; I would have expected both FAT and SPIFFS to also grow linearly.
I suspect this has more to do with caching than the on-disk data structure. Better caching strategies are also one area where littlefs could improve.
Out of curiosity what is the FAT cluster size? Do you know how much cache is being provided to FAT and SPIFFS?
It would be interesting to see a comparison with cache_size = block_size = 4096, though this would come with a RAM cost.
The ST implementation of FAT requires reading or writing entire clusters, so it is not linear. For writes, you must also erase the cluster, which takes even more time. I guess the ST implementation is pretty standard.
FAT clusters can range from 512 to 4096 bytes.
About caching:
#define LOG_PAGE_SIZE 256
static u8_t spiffs_work_buf[LOG_PAGE_SIZE*2];
static u8_t spiffs_fds[32*4];
static u8_t spiffs_cache_buf[(LOG_PAGE_SIZE+32)*4];
It would be interesting to see a comparison with cache_size = block_size = 4096, though this would come with a RAM cost.
Do you mean in littlefs? I measured this a few weeks ago; I did not go up to the sector size because the difference was not significant enough:
Hello,
I am new to littlefs and file systems in general. I am benchmarking it on an STM32WB55 with a 16M SPI flash.
I noticed that open time depends on the number of files in a folder: a single file in a folder can be opened in 18 ms (with the best settings I found).
When my folder contains 50 files:
I can open a file in ~450 ms.
So I tried putting each file in its own folder:
In that case, one file opens in 240 ms.
Then I put every folder in alphabetical folders:
Here, a file opens in around 150 ms.
Settings
LFS v2.5.0
Questions
Are those the standard durations? If so, any ideas for performance improvements? If not, what am I doing wrong?
PS: Why are folders a lot quicker to find than files?