inveniosoftware / invenio-s3

S3 file storage support for Invenio.
https://invenio-s3.readthedocs.io
MIT License

Use checksum from storage server instead of calculating it always #20

Open egabancho opened 4 years ago

egabancho commented 4 years ago

Right now, when asked for the checksum of a file, we digest the file and calculate it application-side. It would be nice to return the value the storage server gives us directly, similar to https://github.com/inveniosoftware/invenio-xrootd/blob/master/invenio_xrootd/storage.py#L60
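For single-part uploads, S3 returns the object's MD5 hex digest as the (quoted) `ETag` header, so the server-side value could be mapped into Invenio's `md5:<hex>` checksum format. A minimal sketch of such a helper (the function name `checksum_from_etag` is hypothetical, not part of the invenio-s3 API):

```python
def checksum_from_etag(etag):
    """Convert an S3 ``ETag`` header into Invenio's ``md5:<hex>`` format.

    Hypothetical helper: for single-part uploads the ETag is the plain
    MD5 hex digest, wrapped in quotes. Multipart ETags carry a
    ``-<part count>`` suffix and are not a plain MD5, so we return
    ``None`` for those.
    """
    etag = etag.strip('"')
    if "-" in etag:  # multipart upload: not a usable MD5
        return None
    return f"md5:{etag}"
```

A caller could fetch the header with `boto3`'s `head_object` and pass `response["ETag"]` straight in.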

ppanero commented 4 years ago

Hello! One quick question: is this only after the file has been uploaded, i.e. just for serving it? Otherwise, what would happen in these two scenarios:

wgresshoff commented 4 years ago

I see just one problem with that approach, but perhaps I'm not aware of a possible solution or it's handled otherwise ;) When using multipart uploads, a checksum is calculated for every part that's uploaded, and those are then combined into the final checksum. Is that result really usable? The normal checksum type in S3 is MD5, but I can't imagine that's correct for multipart uploads.
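To make the concern concrete: for multipart uploads, S3 is commonly documented to compute the ETag as the MD5 of the concatenated binary MD5 digests of the parts, with a `-<part count>` suffix, so it is not the MD5 of the whole file. A small sketch reproducing that scheme (illustrative only, not S3's guaranteed contract):

```python
import hashlib


def multipart_etag(parts):
    """Reproduce the ETag S3 typically reports for a multipart upload.

    Each part's binary MD5 digest is concatenated, the concatenation is
    MD5-hashed again, and ``-<number of parts>`` is appended. This is
    why a large object's ETag cannot be compared to the whole-file MD5.
    """
    digests = b"".join(hashlib.md5(p).digest() for p in parts)
    return hashlib.md5(digests).hexdigest() + f"-{len(parts)}"


parts = [b"a" * 5, b"b" * 5]
whole_file_md5 = hashlib.md5(b"".join(parts)).hexdigest()
etag = multipart_etag(parts)  # differs from whole_file_md5
```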

egabancho commented 4 years ago

@wgresshoff you are definitely right, and I don't have an answer for that ☺️

egabancho commented 4 years ago

@ppanero The checksum that is stored in Invenio's database gets calculated at upload time, i.e. we digest the content and keep the hex hash. So yes, it's just used for integrity checks afterward.

Now, if we want to verify the file's integrity, we could do two things: (i) ask the storage server for the checksum and compare it with the one we have stored (from the upload), or (ii) calculate the checksum on our end and compare it with the one we have stored.

The first option is only doable right now for smaller files, i.e. files not uploaded via multipart upload; as soon as you upload a big file, you get an ETag that is a combination of the hashes of each of its parts (what @wgresshoff pointed out).

The problem with the second option is that it's time-consuming, since you have to read the entire file, but it works for both small and big files. Plus, if you use a hosted service, say AWS S3, you have to pay for the extra traffic.

Perhaps "the middle way" might be the solution here: if we can get the checksum from the server, use it; otherwise calculate it ourselves...
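The "middle way" above could be sketched like this (a hypothetical `verify` function, not the invenio-s3 API, assuming the stored checksum uses Invenio's `md5:<hex>` format):

```python
import hashlib


def verify(etag, stored_checksum, read_file):
    """Hypothetical 'middle way' integrity check.

    Trust the server's ETag when it is a plain MD5 (no multipart
    ``-<parts>`` suffix); otherwise fall back to digesting the file
    application-side. ``read_file`` is a callable returning an
    iterable of the file's bytes in chunks.
    """
    etag = etag.strip('"')
    if "-" not in etag:  # single-part upload: ETag is the MD5
        return f"md5:{etag}" == stored_checksum
    # Multipart upload: recompute on our end (slower, reads the file).
    m = hashlib.md5()
    for chunk in read_file():
        m.update(chunk)
    return f"md5:{m.hexdigest()}" == stored_checksum
```

For single-part objects this costs one `HEAD` request; only multipart objects pay the full read.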

ppanero commented 4 years ago

Middle way seems the best trade-off, thanks for the explanations :)