Closed ochaton closed 1 year ago
thank you @ochaton , very impressive!
I can test gdrive storage locally, but I'd need someone else to test S3 and Storj
maybe for s3 I can try with minio
cc @stefanbenten
My general setup is transfer.sh in front of S3 storage. But it would be better if you test the patch yourself :)
I can test both S3 and Storj early next week.
@stefanbenten @aspacca Any updates?
@ochaton do you mind if I push on your branch to test range requests in gdrive?
@ochaton please, check https://github.com/ochaton/transfer.sh/pull/1 :)
@stefanbenten, @ochaton were you able to test Storj and S3 with my commit added?
I will think about bumping google.golang.org/api to a version that would require dropping go 1.15 build support
see my reasons at https://github.com/dutchcoders/transfer.sh/issues/505#issuecomment-1233724433 for avoiding that if possible
This patch does not fix the problem with downloads limiting :( this is really hard. IMO, it is better to not allow range downloads for limited content.
do you mean in terms of respecting Max-Downloads?
not sure it will make a huge difference in the end: we could refuse to fulfill a range request when Max-Downloads is set; this way we prevent a partial download in the case where the number of "chunked" requests would be higher than the Max-Downloads value. but there's nothing we can do about hitting the Max-Downloads limit even if we ignore the range: I would not skip decrementing the counter, since that would be an open door to abusing the fact that a limit was set
tested with s3 on localhost
@stefanbenten did you have the chance to test with storj?
I didn't get the idea of Max-Downloads. Does anyone use it to limit "only-once" downloads, or can it be used in a different way? Maybe you have some production statistics?
Nevertheless, it seems that ignoring the Range header for objects with Max-Downloads set in the metadata file is the only option to keep compatibility.
I've made an attempt to support downloads with a Range header and to propagate the Range to the underlying storage (except GDrive: I didn't find any clue in the gdrive docs on how to pass a Range header to it).
Only a range with a single slice is supported, and I didn't implement suffix and prefix ranges. If the Range is not satisfiable, the header is ignored (which seems legitimate per the RFC, but maybe it would be better to respond with HTTP 416).
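The parsing rules above could look roughly like this. All names are hypothetical (this is not the patch itself): only `bytes=start-end` with both bounds present is accepted; suffix (`bytes=-N`), open-ended (`bytes=N-`), and multi-slice ranges are rejected, in which case the caller ignores the header rather than answering 416.

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
	"strings"
)

// parseSingleRange is a hypothetical sketch of the single-slice parsing
// described above. On any rejection the caller falls back to serving the
// full object instead of returning 416.
func parseSingleRange(header string, size int64) (start, end int64, err error) {
	const prefix = "bytes="
	if !strings.HasPrefix(header, prefix) {
		return 0, 0, errors.New("not a byte range")
	}
	spec := strings.TrimPrefix(header, prefix)
	if strings.Contains(spec, ",") {
		return 0, 0, errors.New("multiple slices not supported")
	}
	parts := strings.SplitN(spec, "-", 2)
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return 0, 0, errors.New("suffix and open-ended ranges not supported")
	}
	if start, err = strconv.ParseInt(parts[0], 10, 64); err != nil {
		return 0, 0, err
	}
	if end, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
		return 0, 0, err
	}
	if start < 0 || start > end || end >= size {
		return 0, 0, errors.New("range not satisfiable")
	}
	return start, end, nil
}

func main() {
	s, e, err := parseSingleRange("bytes=0-99", 1000)
	fmt.Println(s, e, err) // 0 99 <nil>

	_, _, err = parseSingleRange("bytes=-500", 1000)
	fmt.Println(err != nil) // true: suffix range rejected
}
```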
This patch does not fix the problem with downloads limiting :( this is really hard. IMO, it is better to not allow range downloads for limited content.
relates to https://github.com/dutchcoders/transfer.sh/issues/490