rontrompert opened this issue 6 years ago
A files:scan doesn't make sense when using a primary object store: the object store only contains a flat list of file IDs as keys and the file contents as objects, so there is nothing useful to scan.
Please note that you cannot use primary object store mode with ownCloud if you uploaded files there using the usual S3/Swift uploaders that would use absolute paths as keys.
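To make the contrast concrete, here is a rough sketch of what a listing of each kind of bucket looks like. The bucket names are made up, and the urn:oid: key prefix is only assumed from the default objectstore configuration, not taken from this issue:

```python
import boto3

s3 = boto3.client("s3")

# Primary object store bucket: keys are opaque per-file ids
# (assuming the default 'urn:oid:' object prefix), so a listing
# carries no path information at all.
for obj in s3.list_objects_v2(Bucket="oc-primary").get("Contents", []):
    print(obj["Key"])   # e.g. urn:oid:1042, urn:oid:1043, ...

# Bucket written by a regular S3/Swift uploader: the keys are the
# absolute paths themselves, which primary object store mode does
# not understand.
for obj in s3.list_objects_v2(Bucket="external-data").get("Contents", []):
    print(obj["Key"])   # e.g. alice/photos/2019/img_0001.jpg
```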
cc @DeepDiver1975
With a file system we have seen in practice that inconsistencies can develop between the database and the storage system at some point, for whatever reason. This is understandable, because writing to the database and writing to storage is not an atomic action, and at each stage things can fail.
On a normal file system a file scan can help bring things back into sync, but there is no equivalent for an object store.
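As a rough illustration of why the file system case is recoverable (a simplified schema and walk, not ownCloud's real scanner):

```python
import os
import sqlite3

def scan(data_dir: str, db_path: str) -> None:
    # The directory tree itself carries path, size and mtime, so missing
    # cache rows can be reconstructed from storage alone.
    db = sqlite3.connect(db_path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS filecache "
        "(path TEXT PRIMARY KEY, size INTEGER, mtime INTEGER)"
    )
    for root, _dirs, files in os.walk(data_dir):
        for name in files:
            full = os.path.join(root, name)
            rel = os.path.relpath(full, data_dir)
            st = os.stat(full)
            db.execute(
                "INSERT INTO filecache (path, size, mtime) VALUES (?, ?, ?) "
                "ON CONFLICT(path) DO UPDATE SET size = excluded.size, mtime = excluded.mtime",
                (rel, st.st_size, int(st.st_mtime)),
            )
    db.commit()

# On a primary object store the equivalent walk would only yield opaque
# file ids, so there is nothing to put into the 'path' column.
```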
Another thing is that in a disaster recovery scenario where there are separate backups of database and storage, the database is by definition out of sync with the storage.
I'll reopen this for discussion, but I don't think it is possible to rescan a primary object store: the scanner cannot find out what to put into oc_filecache, because the object store only maps file IDs to file contents. There is no other information there. So when scanning, all the scanner gets is a file ID plus contents, and it has no way to find out what path the object is supposed to live at.
Now if we do want to consider recovery scenarios where the backups aren't from the same moment, it would require storing more metadata in the object store, such as the file path.
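Purely as an illustration of what that could look like (ownCloud does not store this today; the bucket name, key and metadata fields here are made up):

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical: store the logical path (and mtime) as S3 user metadata
# next to the object, so a recovery scan could rebuild oc_filecache
# from the object store alone after a mismatched restore.
with open("img_0001.jpg", "rb") as fh:
    s3.put_object(
        Bucket="oc-primary",                 # assumed bucket name
        Key="urn:oid:1042",                  # assumed key layout
        Body=fh,
        Metadata={
            "oc-path": "alice/files/photos/img_0001.jpg",
            "oc-mtime": "1560000000",
        },
    )

# A recovery scan could then list the bucket and read the metadata back:
head = s3.head_object(Bucket="oc-primary", Key="urn:oid:1042")
print(head["Metadata"]["oc-path"])
```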
@DeepDiver1975 @butonic
I see an additional problem here: during a scan of external S3-like storage, an error occurs on the first subdirectory, and the verbose output shows that folder as "File".
UPD: S3 does not require creating "folders" (s3fs is not compatible with "folders" at all; it works only with files whose full path is the key). When "folders" are created as empty objects, files:scan reports no errors in this case.
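For reference, the workaround described above boils down to creating zero-byte "folder" marker objects, roughly like this sketch (bucket and key are placeholders):

```python
import boto3

s3 = boto3.client("s3")

def ensure_folder_markers(bucket: str, key: str) -> None:
    """Create empty 'folder' objects for every directory prefix of `key`."""
    parts = key.split("/")[:-1]          # drop the file name itself
    prefix = ""
    for part in parts:
        prefix += part + "/"
        # A zero-byte object whose key ends in "/" is what most S3 tools
        # treat as a folder placeholder.
        s3.put_object(Bucket=bucket, Key=prefix, Body=b"")

ensure_folder_markers("external-data", "alice/photos/2019/img_0001.jpg")
```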