littlefs-project / littlefs

A little fail-safe filesystem designed for microcontrollers

lfs_file_close takes too long #97

Open jcalvarez2 opened 6 years ago

jcalvarez2 commented 6 years ago

Hi,

I have the following scenario: a 39 KB file is created and filled with zeroes. Then write operations are performed to add/remove fixed-size (44-byte) records to it. The logic looks like this:

1) The first time, a 39 KB file is created and filled with zeroes. This takes ~7 s.

2) A record is added. The code performs the following:
   a) Open the file.
   b) Read 44 bytes at a time, to find an "empty" slot (i.e. all zeroes).
   c) Add the 44-byte record if an empty slot was found. The first time, the record is written to the beginning of the file.
   d) Close the file: this operation takes 11 s (debug traces disabled).
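
In code, the logic is roughly the following (a simplified sketch, not my actual source; the add_record name and the "records" path are just illustrative):

```c
#include <stdbool.h>
#include <stdint.h>
#include "lfs.h"

#define RECORD_SIZE 44

// Simplified sketch of the add-record logic described above; the function
// name and the "records" path are illustrative.
int add_record(lfs_t *lfs, const uint8_t record[RECORD_SIZE]) {
    lfs_file_t file;
    uint8_t slot[RECORD_SIZE];

    // a) open the pre-allocated 39 KB file
    int err = lfs_file_open(lfs, &file, "records", LFS_O_RDWR);
    if (err) {
        return err;
    }

    // b) scan 44 bytes at a time looking for an all-zero ("empty") slot
    lfs_soff_t offset = 0;
    while (lfs_file_read(lfs, &file, slot, RECORD_SIZE) == RECORD_SIZE) {
        bool empty = true;
        for (int i = 0; i < RECORD_SIZE; i++) {
            if (slot[i] != 0) {
                empty = false;
                break;
            }
        }

        if (empty) {
            // c) overwrite the empty slot with the new record
            lfs_file_seek(lfs, &file, offset, LFS_SEEK_SET);
            lfs_file_write(lfs, &file, record, RECORD_SIZE);
            break;
        }

        offset += RECORD_SIZE;
    }

    // d) close the file -- this is the call that takes ~11 s
    return lfs_file_close(lfs, &file);
}
```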

The same happens when I add another record. It is quite stable: lfs_file_close always takes around 11 s. Is there something wrong with my use case? Or is my flash too slow (8 MHz SPI clock)?

Thank you,

Jose

apmorton commented 6 years ago

Filling your file with 0xFF will be better for performance.

Because you fill with 0s, every 44-byte write has to relocate part of the file to a new block.

It's just inherent in the way flash works: you can flip a 1 -> 0 with a program operation, but flipping a 0 -> 1 requires erasing the entire block (usually 4 KB or larger).

Additionally, rewriting a small portion at the beginning of a large file in a way that requires relocating the block (flipping a 0 -> 1) forces LFS to relocate the rest of the file as well. Basically, your step 2b will end up writing approximately 39 KB - lfs_file_tell() bytes over SPI.

You would be much better off not pre-allocating the file size. Just create an empty file, and every time you want to add an entry, open the file, seek to the end, and write 44 bytes. If you want to enforce an upper limit on the file size so it doesn't grow infinitely, you could check the size before appending.
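
Roughly something like this (untested sketch; the cap and the "records" path are placeholders):

```c
#include <stdint.h>
#include "lfs.h"

#define RECORD_SIZE   44
#define MAX_FILE_SIZE (39 * 1024)   // whatever cap you want to enforce

// Untested sketch of the append approach; the "records" path is a placeholder.
int append_record(lfs_t *lfs, const uint8_t record[RECORD_SIZE]) {
    lfs_file_t file;
    int err = lfs_file_open(lfs, &file, "records", LFS_O_WRONLY | LFS_O_CREAT);
    if (err) {
        return err;
    }

    // optional upper bound so the file doesn't grow forever
    if (lfs_file_size(lfs, &file) + RECORD_SIZE > MAX_FILE_SIZE) {
        lfs_file_close(lfs, &file);
        return LFS_ERR_NOSPC;
    }

    // seek to the end and append -- appends only touch the tail of the file
    lfs_file_seek(lfs, &file, 0, LFS_SEEK_END);
    lfs_ssize_t written = lfs_file_write(lfs, &file, record, RECORD_SIZE);

    err = lfs_file_close(lfs, &file);
    return (written == RECORD_SIZE) ? err : (int)written;
}
```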

An 8 MHz SPI clock probably doesn't help, but ultimately the biggest factor is how many erase operations you are causing on the flash chip.

jcalvarez2 commented 6 years ago

Hi @apmorton

Thank you for your comments. Yes, you are right, initialising the file to 0x00 was dumb; I need to switch my brain to a "flash" mentality :-)

But this is not going to solve my problem, because in the generic case, where I have plenty of records in the file and need to update one near the beginning, there will always be many 0 -> 1 transitions that may force the file to be relocated.

I'm now thinking of using multiple files instead, each up to 4 KB, which would make the worst case relocating 1 block.
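
Something along these lines (just a sketch of the idea, not final code; names and sizes are illustrative):

```c
#include <stdint.h>
#include <stdio.h>
#include "lfs.h"

#define RECORD_SIZE      44
#define RECORDS_PER_FILE (4096 / RECORD_SIZE)   // 93 records, so < 4 KB per file

// Sketch of the multiple-file idea; names and sizes are illustrative.
int update_record(lfs_t *lfs, int index, const uint8_t record[RECORD_SIZE]) {
    // pick the small file this record lives in
    char path[16];
    snprintf(path, sizeof(path), "rec%d", index / RECORDS_PER_FILE);

    lfs_file_t file;
    int err = lfs_file_open(lfs, &file, path, LFS_O_RDWR | LFS_O_CREAT);
    if (err) {
        return err;
    }

    // seek to the record's slot inside its (at most one-block) file
    lfs_soff_t off = (lfs_soff_t)(index % RECORDS_PER_FILE) * RECORD_SIZE;
    lfs_file_seek(lfs, &file, off, LFS_SEEK_SET);
    lfs_file_write(lfs, &file, record, RECORD_SIZE);

    // worst case on close: only this one 4 KB block gets relocated
    return lfs_file_close(lfs, &file);
}
```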

jcalvarez2 commented 6 years ago

Using files of 4 KB makes the operations much faster, so that's what I'm using now. Closing the query, thank you for your help!

geky commented 6 years ago

Glad you were able to figure out a solution.

The slowdown isn't really because of the 0 -> 1 transitions. littlefs is very conservative about what it writes to and doesn't trust a block even if it looks erased (some types of flash don't support rewriting 1s).

It's more likely because littlefs's COW structure is limited. It supports efficient appends (O(1)), but as @apmorton mentions, random writes require rewriting the rest of the file (O(n)).
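
To illustrate the difference (rough sketch, assuming a mounted filesystem; error handling omitted):

```c
#include <stdint.h>
#include "lfs.h"

// Rough sketch contrasting the two access patterns; assumes a mounted
// filesystem, error handling omitted for brevity.
void compare_patterns(lfs_t *lfs) {
    uint8_t record[44] = {0};
    lfs_file_t file;

    // Append: O(1) in the file size -- only the tail of the file is touched.
    lfs_file_open(lfs, &file, "records", LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND);
    lfs_file_write(lfs, &file, record, sizeof(record));
    lfs_file_close(lfs, &file);

    // Random write near the front: O(n) -- copy-on-write means everything
    // from the write offset to the end of the file is rewritten, mostly
    // during close/sync. This is where the ~11 s was going.
    lfs_file_open(lfs, &file, "records", LFS_O_RDWR);
    lfs_file_seek(lfs, &file, 0, LFS_SEEK_SET);
    lfs_file_write(lfs, &file, record, sizeof(record));
    lfs_file_close(lfs, &file);
}
```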

Using files == block size should be a good solution.