Impressive! Thank you for such an example.
I am having the same issue with the log use case: one task writes to the log and another should be able to read and present it. This means the log file is opened for append in one task and opened for read in the other task. I cannot close the writer, and I need the reader to be able to read.
There's a bit of necromancy going on here. To sum up what's written above:
In the "write to log and read it" case, that seems to bother here, the easiest solution is to protect the log access with a mutex so that either write or read can happen at the same time. In the read thread, take the mutex, open the file, read it, close it, release the mutex. In the write thread, take the mutex upon writing and syncing, then release it.
Another solution that works a bit better is to delegate the write and read operations to a single thread and post read/write events to its main event loop. On a read event, this thread rewinds the read position by the size of the capture buffer, reads it, and posts the buffer back to the task that requested the read. On a write event, it fetches the current log buffer and writes it to the file. Only a single file descriptor is required in that case.
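A sketch of that single-owner approach, using a FreeRTOS queue as the event loop (the event struct, queue names, and buffer sizes are all invented for illustration):

```c
#include <stdio.h>
#include <unistd.h>
#include "freertos/FreeRTOS.h"
#include "freertos/queue.h"

typedef enum { LOG_EVT_WRITE, LOG_EVT_READ } log_evt_type_t;

typedef struct {
    log_evt_type_t type;
    char           data[128];   // payload for write events
    QueueHandle_t  reply;       // read events: queue of char[128] items
} log_evt_t;

static QueueHandle_t s_log_events;  // xQueueCreate(8, sizeof(log_evt_t))

// The only task that ever owns the file descriptor.
static void log_owner_task(void *arg)
{
    FILE *f = fopen("/littlefs/log.txt", "a+");  // single handle for both ops
    log_evt_t evt;
    while (xQueueReceive(s_log_events, &evt, portMAX_DELAY) == pdTRUE) {
        if (evt.type == LOG_EVT_WRITE) {
            fputs(evt.data, f);
            fflush(f);
            fsync(fileno(f));            // persist before handling more events
        } else {                         // LOG_EVT_READ: return the log tail
            char buf[128] = {0};
            long end = ftell(f);
            long from = end > (long)sizeof(buf) - 1
                            ? end - ((long)sizeof(buf) - 1) : 0;
            fseek(f, from, SEEK_SET);    // rewind by the capture buffer size
            fread(buf, 1, sizeof(buf) - 1, f);
            xQueueSend(evt.reply, buf, portMAX_DELAY);
        }
    }
}
```

Requesting tasks then post a `log_evt_t` to `s_log_events` and, for reads, block on their own reply queue until the buffer arrives.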
Thank you for explaining this. Yes, special custom workarounds can be applied.
The "atomic" warranty is the unexpected behavior in file system based set, these limitation (or differences from standard filesystem) need to be communicated so that people understand they need to create these workarounds, for example now I understand that a log file must be periodically closed in order for it to survive power loss, as the entire log will be lost.
@alonbl A possible (simple) solution is to call `flush(fd)` from the task that handles writing; this should trigger the per-fd caching to be invalidated and changes persisted.
Hey guys, thanks for posting workarounds here. However, if you want this to be properly solved, maybe voice your thoughts to geky at https://github.com/littlefs-project/littlefs/pull/513
> @alonbl A possible (simple) solution is to call `flush(fd)` from the task that handles writing; this should trigger the per-fd caching to be invalidated and changes persisted.
Hi @atanisoft,
I do not see `flush(fd)` available:

```
esp_littlefs_example.c:65:5: error: implicit declaration of function 'flush'; did you mean 'fflush'? [-Werror=implicit-function-declaration]
   65 |     flush(fd);
```
I also expect that if this works, the file content is committed and should also be visible to other `open()` attempts, which conflicts with what @X-Ryl669 wrote in his summary.
Have you checked your solution and found it working? Can you please share a complete example?
Thanks,
You need to close the reader's fd (if it's in its own thread) while you hold the mutex, because the metadata it refers to will be wrong as soon as the write fd modifies the file. Once you've closed it, the only way to read is to reopen, which forces fetching the updated metadata.
On the write thread, you don't have to close it; you can sync it (with fflush), but this must be done with the mutex taken (so the sequence is: hold the mutex, write data, sync, release the mutex).
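In code, the write side of that sequence could look like the following, reusing the hypothetical `s_log_mutex` from the sketch above (the extra `fsync()` is borrowed from the VFS suggestion later in the thread, not from this comment):

```c
#include <stdio.h>
#include <unistd.h>
#include "freertos/FreeRTOS.h"
#include "freertos/semphr.h"

// Write side: the fd stays open across calls; write and sync are
// done atomically with respect to the reader via the shared mutex.
static void append_log(FILE *f, const char *line)
{
    xSemaphoreTake(s_log_mutex, portMAX_DELAY);
    fputs(line, f);
    fflush(f);            // drain the libc stream buffer
    fsync(fileno(f));     // ask the filesystem to commit to flash
    xSemaphoreGive(s_log_mutex);
}
```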
> Have you checked your solution and found it working? Can you please share a complete example?
Yes, I've used it and it does work. I use the VFS approach to `open(path, mode)` and later call `fsync(fd)` to force flush the writes to storage.
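Put together, a minimal sketch of that VFS approach (the path, payload, and function name are placeholders):

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

// Append a line through the VFS and fsync() so the data survives
// a power loss even though the descriptor stays open.
static void append_line(int fd, const char *line)
{
    write(fd, line, strlen(line));
    fsync(fd);   // commits data and updates metadata for this fd
}

// Usage: the fd is opened once and kept for the lifetime of the writer.
// int fd = open("/littlefs/log.txt", O_WRONLY | O_APPEND | O_CREAT, 0644);
```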
In my original use case there were two file handles being created for the same underlying file; this was a bug in the library I was using, as it should have been using a single file handle. There were different behaviors observed between the two file handles, much like @X-Ryl669 indicates above. After fixing the library to uniformly use a single file handle and implementing flushing to storage, the issues were largely gone. The critical piece that needs to be handled is ensuring you have locking to prevent concurrent read/write operations if the latest written data is interesting to the reader, and either reopening the read file handle or using a single file handle across both read and write (which increases the need for locking).
I've encountered an odd case where the caching in LittleFS seems to be causing an issue that I'm not seeing on SD or SPIFFS.
Using a code pattern similar to the one below, the second file handle will not see the data from the first file handle's update:
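The original snippet was not preserved here; a minimal reconstruction of the pattern being described might look like this (path and payload are invented):

```c
#include <stdio.h>

// Two independent handles on the same file: on littlefs, data written
// through the first may not be visible through the second because each
// handle caches its own view of the file's metadata.
static void two_handle_pattern(void)
{
    FILE *w = fopen("/littlefs/data.txt", "a");
    FILE *r = fopen("/littlefs/data.txt", "r");
    if (w == NULL || r == NULL) {
        if (w) fclose(w);
        if (r) fclose(r);
        return;
    }

    fputs("hello\n", w);
    fflush(w);                           // data flushed from libc buffers

    char buf[32] = {0};
    fread(buf, 1, sizeof(buf) - 1, r);   // works on SD/SPIFFS; may read
                                         // nothing on littlefs
    fclose(r);
    fclose(w);
}
```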
Using two handles for the same file doesn't make a ton of sense in most cases, especially within the same function, but this pattern works on other FS types.
Any ideas or suggestions on config tweaks to reduce the chances of the two file handles being out of sync?