littlefs-project / littlefs

A little fail-safe filesystem designed for microcontrollers
BSD 3-Clause "New" or "Revised" License
4.9k stars 771 forks

Is it possible to store multiple files in one erase block? #927

Open ArcheyChen opened 5 months ago

ArcheyChen commented 5 months ago

Hello, I'm trying to find an embedded filesystem for my GBA cart. The cart erases in 256KB blocks and writes in 1KB units. But my files are normally around 64KB, so 3/4 of the space would be lost if we can only store one file per erase block.

Is it possible to improve that?

geky commented 5 months ago

Yes, with heavy caveats. So probably no.

There's work underway to improve this, but I realize that's not super useful right now.

Right now, littlefs has a concept of inline files, which can share an erase block. But inline files have a requirement of being RAM-backed. This means you need 64KiB of RAM to hold the file when open. To make matters worse, littlefs has one shared cache_size option for all buffers, so you really need 3*64KiB = 192KiB of RAM to mount a filesystem that will inline a 64KiB file.
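As a rough sketch of how that RAM cost shows up in configuration (the field names are from littlefs's `struct lfs_config`; the sizes here are illustrative assumptions, not recommendations):

```c
// Illustrative sketch: geometry for a part that erases in 256KiB
// blocks and programs in 1KiB units, sized so a 64KiB file could be
// inlined. cache_size is shared by the read buffer, the prog buffer,
// and each open file's buffer, which is where the ~3*64KiB = 192KiB
// RAM figure comes from.
const struct lfs_config cfg = {
    // ... read/prog/erase/sync callbacks elided ...
    .read_size      = 1024,
    .prog_size      = 1024,
    .block_size     = 256*1024, // the 256KiB erase block
    .block_count    = 128,      // e.g. a 32MiB part (assumed)
    .block_cycles   = 500,
    .cache_size     = 64*1024,  // must cover the largest inline file
    .lookahead_size = 16,
};
```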

I guess the good news is that there is work underway to remove the RAM requirement from inline files. This is more complicated than it probably appears, since things like naively writing to a non-RAM-backed inline file can quickly end up $O(n^2)$. But things are currently working; it just needs some more time before the results become stable.

ArcheyChen commented 5 months ago

> Yes, with heavy caveats. So probably no.
>
> There's work underway to improve this, but I realize that's not super useful right now.
>
> Right now, littlefs has a concept of inline files, which can share an erase block. But inline files have a requirement of being RAM-backed. This means you need 64KiB of RAM to hold the file when open. To make matters worse, littlefs has one shared cache_size option for all buffers, so you really need 3*64KiB = 192KiB of RAM to mount a filesystem that will inline a 64KiB file.
>
> I guess the good news is that there is work underway to remove the RAM requirement from inline files. This is more complicated than it probably appears, since things like naively writing to a non-RAM-backed inline file can quickly end up $O(n^2)$. But things are currently working; it just needs some more time before the results become stable.

I'm not an expert on filesystems, but could we make a small change to the logic to do that? Right now we 'alloc' one block at a time; what if we changed it to alloc multiple blocks at once?

Like, in my case, where I erase at 256KB and write at 1KB, we could regard it as allocating 256 blocks and using one block at a time?

===================

Since it won't be supported soon, I'll have to use one file to store multiple pieces of data for now. A simple question: is there an easy way to create a 256KB empty file, so I can read/write at a given offset within it? My data is 64KB plus a few bytes of metadata, so I could store it at offset 0 or offset 128KB.

geky commented 5 months ago

> I'm not an expert on filesystems, but could we make a small change to the logic to do that? Right now we 'alloc' one block at a time; what if we changed it to alloc multiple blocks at once?

It's an interesting idea. The main concern would be fragmentation. littlefs wouldn't be able to reclaim blocks until all files in a block are gone. This means if you had, say, 1 64KiB file you write once, and 1 64KiB file you rewrite continuously, the first file would quickly take up effectively 256KiB worth of space.

This isn't an issue for static-wear-leveling filesystems, such as SPIFFS, since they periodically evict stale files. In theory littlefs could be made to have limited static-wear-leveling, but it would be quite a bit more work than improving inline files.

> A simple question: is there an easy way to create a 256KB empty file, so I can read/write at a given offset within it? My data is 64KB plus a few bytes of metadata, so I could store it at offset 0 or offset 128KB.

littlefs is a copy-on-write filesystem, so reserving memory doesn't really make sense like it does in other filesystems. Any writes to the file result in a new allocation. The upside of this is power-loss resilience.

But you can tell littlefs to pretend a file is that big:

lfs_file_t file;
int err = lfs_file_open(&lfs, &file, "my-file", LFS_O_WRONLY | LFS_O_CREAT);
if (err) {
    return err;
}

// resize file to 256KiB
err = lfs_file_truncate(&lfs, &file, 256*1024);
if (err) {
    return err;
}

// write to file at offset (n selects which 64KiB slot)
lfs_soff_t pos = lfs_file_seek(&lfs, &file, n*64*1024, LFS_SEEK_SET);
if (pos < 0) {
    return pos;
}

lfs_ssize_t d = lfs_file_write(&lfs, &file,
        "blablablabla", strlen("blablablabla"));
if (d < 0) {
    return d;
}

// data does not persist on disk until sync or close
err = lfs_file_close(&lfs, &file);
if (err) {
    return err;
}

Any unwritten data in the file will be read as zero. littlefs tries to match POSIX in this regard.
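For completeness, here's a hedged sketch of the read side (the function name `read_slot` is hypothetical; it assumes an already-mounted `lfs` instance and the same "my-file" name and `n`-indexed 64KiB slots as the write example above):

```c
#include "lfs.h"

// Read one 64KiB slot back out of the 256KiB file.
// buf is caller-provided; len should be at most 64KiB.
int read_slot(lfs_t *lfs, int n, uint8_t *buf, lfs_size_t len) {
    lfs_file_t file;
    int err = lfs_file_open(lfs, &file, "my-file", LFS_O_RDONLY);
    if (err) {
        return err;
    }

    // seek to the slot's offset
    lfs_soff_t pos = lfs_file_seek(lfs, &file,
            (lfs_off_t)n*64*1024, LFS_SEEK_SET);
    if (pos < 0) {
        lfs_file_close(lfs, &file);
        return (int)pos;
    }

    // never-written ranges read back as zeros, per the note above
    lfs_ssize_t d = lfs_file_read(lfs, &file, buf, len);
    if (d < 0) {
        lfs_file_close(lfs, &file);
        return (int)d;
    }

    return lfs_file_close(lfs, &file);
}
```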


A word of warning though, littlefs has some real scalability issues with these large block sizes. This is also being worked on...