Closed. roeslpa closed this issue 8 years ago.
Because we do not want to count the meta files into the quota for users. To differentiate between the file types on S3, we have to store them in different "folders".
Also, blocks never get updated, while metafiles do. Metafiles will always have a more or less fixed size, which is much smaller than the eventual block size for normal file blocks.
> Because we do not want to count the meta files into the quota for users. To differentiate between the file types on S3, we have to store them in different "folders".
Why? Folders count toward the quota on every file system.
> Also, blocks never get updated, while metafiles do. Metafiles will always have a more or less fixed size, which is much smaller than the eventual block size for normal file blocks.
Yes, but we could combine the meta files into a single file that is large enough to keep upload and download sizes constant, by allowing requests for parts of files (one difficult option). That is no protection against a curious server, but it does protect against any user with read access.
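A minimal sketch of the "request parts of files" idea, assuming the combined meta file holds fixed-size records (the 4096-byte record size and the function name are hypothetical, not from this thread). The helper computes the inclusive byte range of one record, in the format an HTTP `Range` header expects:

```python
def meta_range(index: int, record_size: int = 4096) -> str:
    """Byte range (inclusive) of the index-th fixed-size meta record.

    The returned string can be passed as an HTTP Range header value,
    e.g. to an S3 GetObject request, so only one record is downloaded.
    """
    start = index * record_size
    end = start + record_size - 1
    return f"bytes={start}-{end}"


# Example: the third record of a file with 100-byte records.
print(meta_range(2, record_size=100))  # → bytes=200-299
```

Since every record has the same size, every request transfers the same number of bytes, which is what keeps upload and download sizes constant.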
If we do not care about leaking the number of files and folders, this issue is obsolete and can be closed.
We cannot properly detect and count a file update on S3. We can only properly detect a deletion and an upload. Metadata files get updated very often; file blocks never get updated.
It is more of an S3 limitation.
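Because updates cannot be tracked reliably, quota has to be derived from what is currently stored, which is where the prefix separation pays off. A sketch of that accounting, assuming hypothetical `blocks/` and `meta/` prefixes (the thread does not name them); with a real bucket the listing would come from paginated `ListObjectsV2` calls, simulated here with an in-memory dict of key to size:

```python
def user_quota_bytes(objects: dict[str, int], meta_prefix: str = "meta/") -> int:
    """Sum the sizes of all objects except those under the meta 'folder'.

    objects maps S3 object keys to their sizes in bytes. Keys starting
    with meta_prefix are excluded, so meta files do not count toward
    the user's quota.
    """
    return sum(
        size for key, size in objects.items()
        if not key.startswith(meta_prefix)
    )


listing = {"blocks/0001": 4_194_304, "blocks/0002": 4_194_304, "meta/0001": 4096}
print(user_quota_bytes(listing))  # → 8388608
```

Keeping the two file types under distinct prefixes makes this a single filtered listing; without the separation, the server would need another way to tell meta files from data blocks.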
Okay, I see. Then let's leave it at that.
I cannot find a reason why meta files and files are separated into two different directories. If we plan (after the beta) to store files with a fixed block size, this would hide the number of files and the relation between files and meta files.