littlefs-project / littlefs

A little fail-safe filesystem designed for microcontrollers
BSD 3-Clause "New" or "Revised" License

Issues with writes larger than the cache size #591

Open space-individual opened 3 years ago

space-individual commented 3 years ago

Hi, I am trying to integrate littlefs on a flash controller and am having an issue: if I write a file larger than the cache size, I get a bad block error when I close the file, and the program seems to get stuck finding bad blocks.
My guess is that when the file is closed, the remaining contents of the cache are written to disk. However, in my instance of littlefs, I get stuck on the following line in lfs_file_flush in lfs.c: LFS_DEBUG("Bad block at 0x%"PRIx32, file->block);

C:/lfs_project/app/src/lfs.c:2792:debug: Bad block at 0x11be
C:/lfs_project/app/src/lfs.c:2713:debug: Bad block at 0x11bf
C:/lfs_project/app/src/lfs.c:2713:debug: Bad block at 0x11c0
C:/lfs_project/app/src/lfs.c:2792:debug: Bad block at 0x11c1
C:/lfs_project/app/src/lfs.c:2713:debug: Bad block at 0x11c2
C:/lfs_project/app/src/lfs.c:2713:debug: Bad block at 0x11c3
C:/lfs_project/app/src/lfs.c:2713:debug: Bad block at 0x11c4
C:/lfs_project/app/src/lfs.c:2713:debug: Bad block at 0x11c5
C:/lfs_project/app/src/lfs.c:2713:debug: Bad block at 0x11c6
C:/lfs_project/app/src/lfs.c:2713:debug: Bad block at 0x11c7
C:/lfs_project/app/src/lfs.c:2713:debug: Bad block at 0x11c8
C:/lfs_project/app/src/lfs.c:2713:debug: Bad block at 0x11c9

This error is spawned from a bad error code in the function lfs_bd_flush on line 2783. Note that this is just a small snippet of the error printout; it seems to increment one block at a time and continues forever.
I have checked my read, write, and erase functions, and all of them are returning a successful status message of 0.
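
For context, lfs_bd_flush does more than check the prog return code: after programming the cache it reads the data back and compares it against what it wrote, and a mismatch is reported as LFS_ERR_CORRUPT, which is what drives the bad-block/relocation loop. So a driver can return 0 from every callback and still look like a sea of bad blocks if the data does not read back identically. A stand-alone self-test along these lines (my own sketch, reusing the callback names and 8192-byte page size from the configuration below; note it erases the block it tests) can help separate driver problems from littlefs problems:

#include <stdint.h>
#include <string.h>
#include "lfs.h"

extern const struct lfs_config cfg;   /* the configuration shown below */

/* Destructive self-test: erase a scratch block, program every page with a
 * known pattern, read it back through the same callbacks littlefs uses, and
 * compare. A mismatch here is what littlefs would interpret as a bad block. */
static int flash_selftest(lfs_block_t block) {
    enum { PAGE = 8192 };                       /* matches read_size/prog_size */
    static uint8_t wbuf[PAGE], rbuf[PAGE];

    for (int i = 0; i < PAGE; i++) {
        wbuf[i] = (uint8_t)(i ^ block);         /* deterministic test pattern */
    }

    int err = cfg.erase(&cfg, block);
    if (err) { return err; }

    for (lfs_off_t off = 0; off < cfg.block_size; off += PAGE) {
        err = cfg.prog(&cfg, block, off, wbuf, PAGE);
        if (err) { return err; }

        err = cfg.read(&cfg, block, off, rbuf, PAGE);
        if (err) { return err; }

        if (memcmp(wbuf, rbuf, PAGE) != 0) {
            return LFS_ERR_CORRUPT;             /* what littlefs would conclude */
        }
    }
    return 0;
}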

I have attached my configuration below.

const struct lfs_config cfg = {
    // block device operations
    .read  = lfs_flash_read,
    .prog  = lfs_flash_write,
    .erase = lfs_flash_erase,
    .sync  = lfs_flash_sync,

    .read_size      = 8192,
    .prog_size      = 8192,
    .block_size     = 1048576,
    .block_count    = 8192,
    .block_cycles   = 500,
    .lookahead_size = 32768,
    .cache_size     = 32768,

    .read_buffer      = (void *)READ_BUF_ADDR,
    .prog_buffer      = (void *)WRITE_BUF_ADDR,
    .lookahead_buffer = (void *)LOOK_BUF_ADDR,
};

Through my testing, I do not seem to have any issues with files smaller than the cache size. Right now I have the cache size set to 4 pages (4 * 8192 bytes per page = 32,768 bytes).
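
For what it is worth, a quick geometry sanity check (my own sketch, based on the size relationships littlefs asserts at mount time; the exact asserts depend on the littlefs version): cache_size must be a multiple of read_size and prog_size, block_size must be a multiple of cache_size, and lookahead_size must be a multiple of 8. With the values above that works out to 32768 / 8192 = 4 pages per cache and 1048576 / 32768 = 32 caches per block, so the geometry itself looks consistent.

/* Hypothetical compile-time restatement of the constraints above;
 * the macro values mirror the configuration in the first post. */
#define READ_SIZE      8192u
#define PROG_SIZE      8192u
#define CACHE_SIZE     32768u
#define BLOCK_SIZE     1048576u
#define LOOKAHEAD_SIZE 32768u

_Static_assert(CACHE_SIZE % READ_SIZE == 0,  "cache_size must be a multiple of read_size");
_Static_assert(CACHE_SIZE % PROG_SIZE == 0,  "cache_size must be a multiple of prog_size");
_Static_assert(BLOCK_SIZE % CACHE_SIZE == 0, "block_size must be a multiple of cache_size");
_Static_assert(LOOKAHEAD_SIZE % 8 == 0,      "lookahead_size must be a multiple of 8");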

On another note, I noticed that littlefs seems to read pages in unerased blocks, which causes errors on the flash controller I am using when it tries to correct the unerased page. Is there an easy way to force littlefs to erase a block before it reads from it when allocating new space? I submitted a question about this a couple of months ago and was able to work around it by disabling error checking in certain scenarios, but the problem seems to have resurfaced now that I am writing large files.
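
(As an aside, a workaround I have seen suggested for that second problem, sketched under my own assumptions rather than taken from this thread: many NAND/ECC controllers report an error when reading a page that has never been programmed, and one option is to catch that specific status in the read callback and hand littlefs back 0xFF-filled data instead of failing. flash_read_page() and the FLASH_STATUS_* values below are hypothetical placeholders for whatever the controller's driver actually provides.)

#include <stdint.h>
#include <string.h>
#include "lfs.h"

/* Hypothetical HAL interface; substitute the real controller driver. */
#define FLASH_STATUS_OK           0
#define FLASH_STATUS_ERASED_PAGE  1   /* ECC reports the page was never programmed */
extern int flash_read_page(uint32_t block, uint32_t off, void *buf, uint32_t len);

int lfs_flash_read(const struct lfs_config *c, lfs_block_t block,
                   lfs_off_t off, void *buffer, lfs_size_t size) {
    int status = flash_read_page(block, off, buffer, size);

    if (status == FLASH_STATUS_ERASED_PAGE) {
        // the page is still erased: report it to littlefs as 0xFF data
        memset(buffer, 0xFF, size);
        return 0;
    }

    return (status == FLASH_STATUS_OK) ? 0 : LFS_ERR_IO;
}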

Let me know if any more info is necessary; I can provide it.
Any help would be greatly appreciated.

tassociates commented 3 years ago

I am noticing the same behavior: if a file is written beyond the buffer, it goes into a spin of bad blocks. This happens even when appending to a file.

space-individual commented 3 years ago

Hey tassociates, I left a comment on your post regarding a CRC issue. Were you able to solve this particular issue? I am still running into it.

tassociates commented 3 years ago

We have not, and it is pretty consistent. There is some logic in lfs_file_flush that is incorrect when a file's size exceeds the block_size. We can consistently generate the error by creating a file that would logically fill an entire block (64 pages) of space. As soon as we add one more page worth of data, we get the endless loop of bad blocks. We even tried creating multiple files up to 64 pages in logical size and then adding one extra page worth of data, and it breaks.

What does not make sense is that LFS is not writing one continuous block of data; as would be expected, it is spreading the file out across different pages and blocks.

I can verify that none of the blocks are actually bad, and that this only occurs when we hit the block_size we specified, which is 4096 bytes x 64 pages worth of data.

We also perform a full device erase when the device starts, so all our write counters on the device are zeroed out.
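
A minimal repro sketch of the scenario described above, under my own assumptions (4096-byte pages, 64 pages per block, an already-mounted lfs instance, and a hypothetical file name); this is not taken from their code, but writing one full block plus one extra page is the shape of the failure being reported:

#include <stdint.h>
#include <string.h>
#include "lfs.h"

/* Write 64 pages (one full block) plus one extra page to a single file.
 * According to the reports above, the extra page is where the endless
 * bad-block loop starts, typically at close/flush time. */
int write_block_plus_one_page(lfs_t *lfs) {
    static uint8_t page[4096];
    memset(page, 0xA5, sizeof(page));

    lfs_file_t file;
    int err = lfs_file_open(lfs, &file, "big.bin",
                            LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC);
    if (err) { return err; }

    for (int i = 0; i < 64 + 1; i++) {
        lfs_ssize_t res = lfs_file_write(lfs, &file, page, sizeof(page));
        if (res < 0) {
            lfs_file_close(lfs, &file);
            return (int)res;
        }
    }

    return lfs_file_close(lfs, &file);   // the flush on close is where it hangs
}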

nerozero commented 2 years ago

Confirmation from my side too: lfs tries to program chunks larger than the specified prog_size.
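
As far as I understand (a hedged observation, not an official statement), that part is expected at the driver boundary: littlefs flushes its program cache in one call, so prog can legitimately receive a size that is any multiple of prog_size up to cache_size. A driver that assumes size == prog_size will fail or truncate larger writes, and the read-back verification then makes them look like bad blocks. A sketch of a prog callback that loops page by page (flash_program_page() is a hypothetical HAL call that programs exactly one page):

#include <stdint.h>
#include "lfs.h"

#define PROG_SIZE 8192u   /* must match cfg.prog_size */

/* Hypothetical HAL call that programs exactly one PROG_SIZE page. */
extern int flash_program_page(uint32_t block, uint32_t off,
                              const uint8_t *data, uint32_t len);

int lfs_flash_write(const struct lfs_config *c, lfs_block_t block,
                    lfs_off_t off, const void *buffer, lfs_size_t size) {
    const uint8_t *data = buffer;

    // off and size arrive aligned to prog_size, but size may span several
    // pages (up to cache_size), so program one page at a time
    for (lfs_size_t i = 0; i < size; i += PROG_SIZE) {
        if (flash_program_page(block, off + i, data + i, PROG_SIZE) != 0) {
            return LFS_ERR_IO;
        }
    }
    return 0;
}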

muratdemirtas commented 2 years ago

Any update on this? I have faced the same issue.

GiulioDallaVecchia commented 1 year ago

Any update? I have faced the same issue too.

GiulioDallaVecchia commented 1 year ago

Hi @space-individual,

If you set cache_size equal to block_size (cache_size = block_size = 1048576), do you still have the issue?
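
(Spelling out what that suggestion amounts to against the configuration from the first post, purely as an illustration; only cache_size changes, and whether the statically placed buffers at READ_BUF_ADDR/WRITE_BUF_ADDR can actually hold a full 1 MiB block on that board is an open question.)

const struct lfs_config cfg_full_block_cache = {
    .read  = lfs_flash_read,
    .prog  = lfs_flash_write,
    .erase = lfs_flash_erase,
    .sync  = lfs_flash_sync,

    .read_size      = 8192,
    .prog_size      = 8192,
    .block_size     = 1048576,
    .block_count    = 8192,
    .block_cycles   = 500,
    .lookahead_size = 32768,
    .cache_size     = 1048576,   // was 32768: cache one full block

    .read_buffer      = (void *)READ_BUF_ADDR,    // must now hold 1 MiB
    .prog_buffer      = (void *)WRITE_BUF_ADDR,   // must now hold 1 MiB
    .lookahead_buffer = (void *)LOOK_BUF_ADDR,
};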

BenjamimKrug commented 1 year ago

> Hi @space-individual,
>
> If you set cache_size equal to block_size (cache_size = block_size = 1048576), do you still have the issue?

Hi, I'm having the same problem as you guys; I tried doing this but had no success. Did this solve it for you, @GiulioDallaVecchia?

Arturex0 commented 1 month ago

@space-individual Were you able to resolve the issue?