parviste-fortum opened 10 months ago
Thanks for the bug report @parviste. Another workaround is to use a different caching mode that more aggressively clears the cache.
I think this is a good bug, and we don't currently update the cache on any of the cloud file move/delete scenarios. It seems worth a fix here to remove files from the cache for the following methods (a sketch of that pattern follows the list):

- `unlink`
- `rename`
- `replace`
- `rmdir`
- `rmtree`
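A sketch of that pattern, with a dict standing in for the remote store and a local directory standing in for the cache (the class and method names here are illustrative, not cloudpathlib's actual internals):

```python
from pathlib import Path
import tempfile


class CachedStore:
    """Illustrative stand-in: a dict plays the remote store and a local
    directory plays the cache; none of these names come from cloudpathlib."""

    def __init__(self, cache_dir: Path):
        self.remote = {}  # key -> bytes held on the "remote"
        self.cache_dir = cache_dir

    def _cache_path(self, key: str) -> Path:
        return self.cache_dir / key

    def _evict(self, key: str) -> None:
        # drop the stale cached copy, if any
        self._cache_path(key).unlink(missing_ok=True)

    def write(self, key: str, data: bytes) -> None:
        self.remote[key] = data
        self._cache_path(key).write_bytes(data)

    def unlink(self, key: str) -> None:
        del self.remote[key]
        self._evict(key)  # keep the cache consistent with the remote

    def rename(self, src: str, dst: str) -> None:
        self.remote[dst] = self.remote.pop(src)
        self._evict(src)  # the old key no longer exists on the remote


# every remove/move also evicts the cached copy, so a later write
# never sees stale data
store = CachedStore(Path(tempfile.mkdtemp()))
store.write("a.txt", b"hello")
store.unlink("a.txt")
store.write("a.txt", b"again")
```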
It's probably true that the cache should be cleared after moving/removing files, but at the same time it doesn't feel fully correct to necessarily write the cache files using the same mode as the files written to S3. In particular, it would make sense to me to either clear the cache entry when the remote file is removed, or always open the cache file with "w". I can't really see a situation where it makes sense to write the cache files using the exact mode specified for the remote file, sometimes "x" and sometimes "w".
It keeps things simpler to not have special cases for different types of writes, so I'd rather have the cache state well managed than make assumptions about the writing mode. For example, if your original example had been done with `a` instead of `x` and actually added some text to the file, the output would be wrong unless we both mirrored the state of the remote and opened the cache file in the same mode.
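To make that concrete, here is a small illustration (plain local files standing in for the cache; not cloudpathlib code) of how a stale cache entry plus `a` mode would produce the wrong contents:

```python
from pathlib import Path
import tempfile

cache_file = Path(tempfile.mkdtemp()) / "cached.txt"

# the first write populates both the remote and the local cache
cache_file.write_text("old contents\n")

# ... the remote copy is then deleted, but the cache entry is never evicted ...

# a later append-mode write: the remote file no longer exists, so the result
# should contain only the new text, yet the stale cache still holds the old text
with cache_file.open("a") as f:
    f.write("new contents\n")

print(cache_file.read_text())
# old contents
# new contents   <- wrong: the old contents should be gone
```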
We currently do the following on writing to the cloud: (1) check if the file is on the remote before writing with `x`, and (2) refresh the cache before doing any writing.
I think the other change we want here is in `_refresh_cache`: delete the cache file if the remote file does not exist.
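Putting those two pieces together, a self-contained sketch of the proposed behaviour might look like the following, with a dict standing in for the remote and a temporary directory standing in for the cache (`refresh_cache`, `open_for_write`, `remote`, and `cache_dir` are illustrative names, not cloudpathlib internals):

```python
from pathlib import Path
import tempfile

# stand-ins for the real backends, purely illustrative
remote = {}  # key -> text of the remote object
cache_dir = Path(tempfile.mkdtemp())


def refresh_cache(key: str) -> None:
    """Mirror the remote state into the local cache before any read or write."""
    cache_file = cache_dir / key
    if key in remote:
        cache_file.write_text(remote[key])
    else:
        # proposed change: the remote file is gone, so drop the stale cached copy
        cache_file.unlink(missing_ok=True)


def open_for_write(key: str, mode: str):
    # (1) for exclusive-create mode, refuse only if the file exists on the remote
    if "x" in mode and key in remote:
        raise FileExistsError(key)
    # (2) refresh the cache before doing any writing
    refresh_cache(key)
    # (the real code would upload the cache file back to the remote on close)
    return (cache_dir / key).open(mode)
```

With that change, the create / delete / create-again sequence from the original report succeeds, because the stale cache file is removed as soon as the refresh notices the remote file is gone.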
The following code does not work as expected. I would expect it to create a file, remove it, and then create it again. Instead, the second attempt to create the file fails. This seems to happen because the file is kept in the cache even when it was removed from S3. A simple solution would probably be to use the "w" mode, even when "x" was supplied.
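A minimal sketch of the scenario being described, assuming cloudpathlib's `S3Path` (the bucket and key are placeholders):

```python
from cloudpathlib import S3Path

path = S3Path("s3://my-bucket/test.txt")  # placeholder bucket/key

# create the file on S3; this also populates the local cache
with path.open("x") as f:
    f.write("first version")

# remove the remote file; the cached copy is left behind
path.unlink()

# expected: the file is created again
# observed: the stale cache file still exists, so the "x" open fails
with path.open("x") as f:
    f.write("second version")
```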