littlefs-project / littlefs

A little fail-safe filesystem designed for microcontrollers
BSD 3-Clause "New" or "Revised" License

Can you consider optimizing the relocation process for superblocks? #376

Open Johnxjj opened 4 years ago

Johnxjj commented 4 years ago

The current problem is that once the superblock is used up, a superblock relocation is performed, and this process takes a long time, which makes the filesystem appear to freeze. I have read the issues raised by many people before: they all report occasionally large processing times in individual API calls. I traced my case specifically to the superblock relocation process, which re-reads each entry, and does so several times; the smaller LFS_READ_SIZE is, the more reads it takes. One solution is to allocate a piece of memory, read the superblock data in a single pass, and parse it from there, instead of reading the entries from flash multiple times. For users, stable timing is more important than memory.
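For illustration only, here is a minimal sketch of the suggested approach: fetch the whole superblock metadata block into RAM with one bulk read and parse entries from the buffer. bd_read, BLOCK_SIZE, and read_superblock_once are hypothetical names, not littlefs internals.

#include <stdint.h>

/* Hypothetical block-device read callback (standing in for the reporter's
 * lfs_block_read): reads `size` bytes at offset `off` within `block`. */
extern int bd_read(uint32_t block, uint32_t off, void *buffer, uint32_t size);

#define BLOCK_SIZE 4096 /* assumed; matches the 4096-byte blocks reported below */

/* Fetch the whole metadata block with one bulk read, then parse entries
 * out of RAM instead of issuing one small flash read per entry. */
int read_superblock_once(uint32_t block, uint8_t buf[BLOCK_SIZE]) {
    int err = bd_read(block, 0, buf, BLOCK_SIZE); /* single bulk read */
    if (err) {
        return err;
    }
    /* ...entries would then be scanned directly from buf... */
    return 0;
}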

e107steved commented 4 years ago

Stable time is not always more important than memory; the tradeoff will depend on a lot of application-specific things. If this feature were to be added, it would need to be optional. A single chip micro on its own is often RAM-poor.

Maybe in the future there could be some flags to select among tradeoffs: code size, speed, RAM usage, etc.

Johnxjj commented 4 years ago

@e107steved You are right! Different application scenarios have different needs.

geky commented 4 years ago

Ah yeah, this issue is caused by a naive check for whether or not we have enough space to expand the superblock.

Explanation:

So we check the size of the filesystem before we expand the superblock. The problem is that finding the size of the filesystem right now involves scanning the entire filesystem: https://github.com/ARMmbed/littlefs/blob/master/lfs.c#L1491

That being said, this process should only happen once, maybe twice.

It may be possible to fix this in the short term by storing the number of free blocks somewhere. Though that would mean we'd end up doing the same filesystem scan, just at mount time? I'm not sure there's an easy fix.

My current plan is to fix this as a part of allocator improvements necessary to fix https://github.com/ARMmbed/littlefs/issues/75, which has related issues.
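For context, the cost described above is also visible through the public API: lfs_fs_size() performs this same whole-filesystem traversal. Below is a minimal sketch that counts in-use blocks with lfs_fs_traverse(), assuming an already-mounted lfs_t; the count is best-effort and may include blocks referenced more than once.

#include "lfs.h"

/* Callback invoked once for every block pointer the traversal visits. */
static int count_block(void *p, lfs_block_t block) {
    (void)block;
    *(lfs_size_t *)p += 1;
    return 0;
}

/* Walk the whole filesystem and count referenced blocks; this is the same
 * full scan whose cost grows with the size of the filesystem. */
lfs_ssize_t blocks_in_use(lfs_t *lfs) {
    lfs_size_t count = 0;
    int err = lfs_fs_traverse(lfs, count_block, &count);
    return err ? err : (lfs_ssize_t)count;
}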

Johnxjj commented 4 years ago

@geky I have observed that this time is spent on lines 1441-1483 and lines 1528-1658 of lfs.c. During this time, LFS has to go out to flash to read the superblock data more than 1,200 times, at 1.5-2 ms per read, so the total time adds up. I think there is something wrong with this process: since all of the reads target the same superblock, why not read it in advance and then fetch the data from RAM each time, which would greatly increase the speed? This is just like reading a file, where the entire block of data is first read into RAM and each subsequent access is served directly from RAM.
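To put the reported numbers in perspective: with read_size = 128 and block_size = 4096 (the configuration below), one full pass over a metadata block costs 4096 / 128 = 32 flash reads, and the more than 1,200 reads observed at 1.5-2 ms each add up to roughly 1.8-2.4 seconds, which is consistent with the freeze described above.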

Johnxjj commented 4 years ago

That is, my code did not reach block_cycles == 100. Here is my configuration:

g_cfg.read = lfs_block_read;
g_cfg.prog = lfs_block_prog;
g_cfg.erase = lfs_block_erase;
g_cfg.sync = lfs_block_sync;

/* block device configuration */
g_cfg.read_size        = LFS_READ_SIZE;             // 128
g_cfg.prog_size        = LFS_PROG_SIZE;             // 128
g_cfg.block_size       = LFS_FLASH_SECTOR_SIZE;     // 4096
g_cfg.block_count      = ullfs_flash_sector_count;  // 256
g_cfg.block_cycles     = LFS_BLOCK_CYCLES;          // 100
g_cfg.cache_size       = LFS_CACHE_SIZE;            // 4096
g_cfg.lookahead_size   = LFS_LOOKAHEAD_SIZE;        // 256/8

g_cfg.read_buffer      = uclfs_read_buf;            // uclfs_read_buf[4096]
g_cfg.prog_buffer      = uclfs_prog_buf;            // uclfs_prog_buf[4096]
g_cfg.lookahead_buffer = ullfs_lookahead_buf;       // ullfs_lookahead_buf[32]
g_file_cfg.buffer      = uclfs_file_buf;            // uclfs_file_buf[4096]
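For reference, a minimal mount sketch using this configuration; g_lfs and the format-on-first-boot fallback are assumptions, and g_cfg is the structure filled in above.

#include "lfs.h"

extern struct lfs_config g_cfg; /* the configuration filled in above */
static lfs_t g_lfs;             /* assumed filesystem state */

/* Mount with the configuration above; if mounting fails (e.g. on first
 * boot), format the block device and retry. */
int fs_mount_or_format(void) {
    int err = lfs_mount(&g_lfs, &g_cfg);
    if (err) {
        err = lfs_format(&g_lfs, &g_cfg);
        if (err) {
            return err;
        }
        err = lfs_mount(&g_lfs, &g_cfg);
    }
    return err;
}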

Johnxjj commented 4 years ago

hi @geky , I found that my question may be related to #203