hmeine opened this issue 9 years ago
Yes, fs.wrapfs would be perfect. The implementation really depends on what type of filesystem you're trying to wrap and your use case, though.
There are classes for caching single files and filesystem metadata, but not a complete filesystem: https://github.com/PyFilesystem/pyfilesystem/blob/master/fs/remote.py
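A wrapper only needs to override the calls it wants to intercept and delegate the rest to the wrapped filesystem. As a rough illustration of the pattern (not a cache, and the LoggingFS name is just made up for the example):

```python
# Not a cache, just an illustration of the wrapping pattern: a WrapFS
# subclass intercepts open() and delegates everything else to the wrapped FS.
from fs.wrapfs import WrapFS


class LoggingFS(WrapFS):
    """Hypothetical wrapper that reports which files get opened."""

    def open(self, path, mode="r", **kwargs):
        print("opening %s (mode=%r)" % (path, mode))
        return super(LoggingFS, self).open(path, mode, **kwargs)
```

A caching wrapper would do the same kind of interception, just with a local copy step in open().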
Yes, I think CacheFS comes close, but I would like to have an additional cache of the files that were read. So one would configure a local cache directory and a maximum cache size (after which the oldest files are purged), and all files would first be copied there and reused instead of re-downloaded. The metadata side would obviously be handled by CacheFS[Mixin] already.
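To make it concrete, something along these lines is what I have in mind. This is only a rough sketch, not a proper WrapFS subclass; the class name CachedReadFS and the max_size default are made up:

```python
# Rough sketch of the read cache: files are copied into a local OSFS cache
# directory on first read, and the least recently used copies are purged
# once the cache grows beyond max_size.
import hashlib
import shutil
from collections import OrderedDict

from fs.osfs import OSFS


class CachedReadFS(object):  # hypothetical name
    def __init__(self, remote_fs, cache_dir, max_size=512 * 1024 * 1024):
        self.remote = remote_fs
        self.cache = OSFS(cache_dir, create=True)
        self.max_size = max_size
        self._lru = OrderedDict()  # cache file name -> size, oldest first

    def _cache_name(self, path):
        # Flatten the remote path into a single flat cache file name.
        return hashlib.md5(path.encode("utf-8")).hexdigest()

    def open(self, path, mode="rb"):
        name = self._cache_name(path)
        if not self.cache.exists(name):
            # First read: download the remote file into the local cache.
            src = self.remote.open(path, "rb")
            try:
                with self.cache.open(name, "wb") as dst:
                    shutil.copyfileobj(src, dst)
            finally:
                src.close()
        # Mark as most recently used by re-inserting at the end.
        self._lru.pop(name, None)
        self._lru[name] = self.cache.getsize(name)
        self._purge()
        return self.cache.open(name, mode)

    def _purge(self):
        # Drop the oldest cached copies until we are back under max_size.
        while sum(self._lru.values()) > self.max_size and len(self._lru) > 1:
            oldest, _ = self._lru.popitem(last=False)
            self.cache.remove(oldest)
```

Turning that into a real fs.wrapfs.WrapFS subclass would then mostly be a matter of hooking the caching into the read paths, with the metadata left to CacheFS[Mixin] as you say.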
Hi,
I'm working on a WrapFS subclass implementing an LRU caching scheme (Wikipedia entry). On top of the wrapped FS, one provides another FS used for caching and the maximum size of the cache. Consistency is checked based on md5, so it's compatible with S3's ETag. Would you consider a pull request when I'm done?
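The consistency check itself would be along these lines (sketch only; how the remote checksum is exposed depends on the backend, so the 'etag' info key below is just a placeholder):

```python
# Sketch of the md5-based consistency check only, not the full subclass.
# The way the remote checksum is obtained is backend-specific; the 'etag'
# getinfo() key used here is a placeholder.
import hashlib


def _local_md5(cache_fs, name, chunk_size=1024 * 1024):
    """md5 of the locally cached copy, computed in chunks."""
    digest = hashlib.md5()
    f = cache_fs.open(name, "rb")
    try:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    finally:
        f.close()
    return digest.hexdigest()


def _cache_is_valid(remote_fs, path, cache_fs, name):
    """Reuse the cached copy only if its md5 matches the remote checksum."""
    remote_md5 = remote_fs.getinfo(path).get("etag", "").strip('"')
    return bool(remote_md5) and remote_md5 == _local_md5(cache_fs, name)
```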
M.
I think you could close this issue unless you have some code to submit; the point has been made.
IMHO this is a perfectly valid suggestion / request, so there's no point closing it.
I agree. That said, some maintainers do close more speculative issues like this when there is no active work to discuss.
For networking-based filesystems, one is often interested in some kind of local cache. As far as I can see, that would be another good use case for fs.wrapfs, right?