littlefs-project / littlefs

A little fail-safe filesystem designed for microcontrollers
BSD 3-Clause "New" or "Revised" License

Unable to write to flash due to "LFS_ERR_NOSPC" which is wrong #896

Open NLLK opened 7 months ago

NLLK commented 7 months ago

I'm using LittleFS v2.8.1. The device is a W25Q64JV. Built with the STM32 GNU toolchain, C++17. The configuration is:

//global context

#define CACHE_SIZE          256
#define FLASH_READ_SIZE     256
#define FLASH_PROG_SIZE     256
#define FLASH_SECTOR_SIZE   4096
#define FLASH_SECTOR_COUNT  2048
#define FLASH_SIZE (FLASH_SECTOR_SIZE * FLASH_SECTOR_COUNT)

#define LFS_CACHE_SIZE    CACHE_SIZE
uint8_t readBuffer[LFS_CACHE_SIZE];
uint8_t writeBuffer[LFS_CACHE_SIZE];
uint8_t lookBuffer[LFS_CACHE_SIZE] __attribute__((aligned(32)));

//init function
Database::Database()
{
    // block device operations
    cfg.read  = user_provided_block_device_read;
    cfg.prog  = user_provided_block_device_prog;
    cfg.erase = user_provided_block_device_erase;
    cfg.sync  = user_provided_block_device_sync;

    // block device configuration
    cfg.read_size = FLASH_READ_SIZE;
    cfg.prog_size = FLASH_PROG_SIZE;
    cfg.block_size = FLASH_SECTOR_SIZE;
    cfg.block_count = FLASH_SECTOR_COUNT;
    cfg.block_cycles = 500;
    cfg.metadata_max = 256;
    cfg.name_max = 255;
    cfg.attr_max = 512;
    cfg.file_max = LFS_FILE_MAX;

    cfg.cache_size = LFS_CACHE_SIZE;
    cfg.lookahead_size = LFS_CACHE_SIZE;

    cfg.read_buffer = readBuffer;
    cfg.prog_buffer = writeBuffer;
    cfg.lookahead_buffer = lookBuffer;

//....
}

My write function may also be useful for diagnosing the issue:

int Database::writeData(std::string fileName, void *buf, int size, int flags)
{
    lfs_file_t file;
    int status = lfs_file_open(&fs, &file, fileName.c_str(), flags);
    if (status != LFS_ERR_OK)
        return status;
    int sizeWrite = lfs_file_write(&fs, &file, buf, size);
    status = lfs_file_close(&fs, &file);
    if (status != LFS_ERR_OK)
        return status;
    if (sizeWrite < 0)      // lfs_file_write failed; propagate its error code
        return sizeWrite;
    if (sizeWrite != size)  // short write
        return -1;
    return sizeWrite;
}

It mounts just fine. I can read and write to flash. I can write 5 files with an average size of about 130 bytes and read them back. I can rewrite these files as many times as I want. The file contents are JSON strings.

But if I write another file (for example, size: 125 bytes, filename length: 26), it returns -28. I call lfs_file_open and then lfs_file_write (neither returns an error code). Then I call lfs_file_close, which returns -28. The call stack is:

1 lfs_dir_commitattr
2 lfs_dir_commit_commit
3 lfs_dir_traverse
4 lfs_dir_relocatingcommit
5 lfs_dir_orphaningcommit
6 lfs_dir_commit
7 lfs_file_rawsync
8 lfs_file_rawclose
9 lfs_file_close

To be exact, lfs.c:1575 returns LFS_ERR_NOSPC while checking condition "commit->off + dsize > commit->end".

commit->off = 256; commit->end = 248; dsize = 12;

And for additional context, if I repeat the operation (try the same write again with the same file), it hits the assert at lfs.c:262, LFS_ASSERT(pcache->block == LFS_BLOCK_NULL), with pcache->block = 955.

When I read the flash back into a hex file, I see much more data than I expected to have written, and the data repeats itself regardless of whether I rewrite a file or write a new one. The viewer shows something like this:

[image: hex dump with repeated data]

So the main question is: how do I solve this issue?

PS: A write operation can also return LFS_ERR_NAMETOOLONG for a filename like "/db/c_l/c_10_683642954602", which should not happen since the length is only 27 characters.

NLLK commented 7 months ago

I managed to make it work by downgrading littlefs to v2.0.5. I will probably lose some functionality, but I don't really mind.

The issue is still open, but it is not that urgent.

geky commented 7 months ago

Hi @NLLK, I don't have a good answer, but I wanted to add some info that may be useful.

LittleFS is not well tested when an individual file's metadata exceeds what can fit in a metadata log. I noticed you set metadata_max = 256, and unfortunately this might be too small to fit even a single file, which can break LittleFS in surprising ways.

To make matters worse, this only matters when metadata compaction occurs, meaning a metadata limit error can be unrelated to the file currently being written.

The good news is that this should improve as part of some larger work, with LittleFS tracking the amount of metadata associated with files and erroring earlier rather than later when limits are exceeded. But this is work-in-progress.
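As a hedged illustration of that diagnosis, one thing worth trying is lifting the metadata_max cap. In current littlefs, a metadata_max of 0 means "no extra cap" (the metadata log can then use the full block_size). This is a config sketch against the user's configuration above, not a confirmed fix; the trade-off is longer worst-case compaction times.

```cpp
// Sketch only: stop capping each metadata log at 256 bytes.
// metadata_max = 0 defaults to block_size (4096 here); alternatively
// pick a larger explicit cap such as FLASH_SECTOR_SIZE / 2.
cfg.metadata_max = 0;   // was 256
```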

To be exact, lfs.c:1575 returns LFS_ERR_NOSPC ...

LittleFS uses LFS_ERR_NOSPC to indicate both "out of blocks" and "out of space in a metadata log". From user feedback it's become quite clear that this is very confusing, so LFS_ERR_RANGE is being added in the future to indicate "out of space in a metadata log" with a different error code.

PS: Also write operation could throw "LFS_ERR_NAMETOOLONG" ...

Since long names are a common cause for running out of metadata space, LittleFS reports "out of space in a metadata log" as LFS_ERR_NAMETOOLONG if file creation was related. But this was a mistake and will be changed to LFS_ERR_RANGE.

And for additional context, if i repeat this operation (try to do write operation again with the same file) it will ASSERT on lfs.c:262 ...

This assert catches pcaches that have not been flushed, which makes some sense given that LittleFS errored abruptly. Unfortunately, LittleFS is not able to recover from errors very well without an unmount+mount. This is another area of ongoing work...
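For anyone hitting the same assert, the unmount+mount recovery can be sketched roughly as below. This is a hedged fragment, not a tested fix: it assumes fs and cfg are the globals from the configuration above, and uses only the standard lfs_unmount/lfs_mount API.

```cpp
// Sketch: after a failed close/sync, drop the in-RAM state before any
// retry. Without this, the stale unflushed pcache later trips
// LFS_ASSERT(pcache->block == LFS_BLOCK_NULL).
int err = lfs_file_close(&fs, &file);
if (err < 0) {
    lfs_unmount(&fs);                 // discard cached state
    int merr = lfs_mount(&fs, &cfg);  // remount before any further I/O
    if (merr < 0) {
        // unrecoverable: surface the mount failure to the caller
    }
    return err;                       // still report the original error
}
```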

I managed to make it work downgrading version of littlefs to v2.0.5. I will probably lose some functionality but i actually dont really care.

Is this the most recent version that works? I'd be very curious if v2.5.1 is what broke things, since it's a small changeset.

I'm guessing more likely the root cause is the introduction of FCRCs (erase-state checksums) in v2.6.0. This added an additional checksum to every commit, which isn't that significant, but may be enough to bump metadata over some limit or trigger some new corner case.

NLLK commented 7 months ago

Thank you for the reply; it explains things well, which I appreciate. I will try what you suggested and will comment on this issue again if anything helps.