Resumable uploads require the size of the file up-front; that's why they're not used when uploading via stdin, as I have no way of knowing the size.
I see that pv takes the file size as an argument, so that could be a solution for gdrive as well. What do you think?
I already suspected that when I took a glance at the source code, but I wasn't sure. I think it would be a great idea to handle it similarly to pv - if no size is specified, do a non-resumable upload (and maybe print a warning/hint?); if a size is specified, use the resumable method.
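Something along these lines, as a rough Go sketch - uploadSimple and uploadResumable are made-up stand-ins for gdrive's real upload paths, not its actual API:

```go
package main

import (
	"fmt"
	"io"
	"os"
	"strings"
)

// uploadSimple and uploadResumable are placeholders for the real upload
// paths; their names and signatures are invented for this sketch.
func uploadSimple(r io.Reader) error {
	_, err := io.Copy(io.Discard, r)
	return err
}

func uploadResumable(r io.Reader, size int64) error {
	_, err := io.CopyN(io.Discard, r, size)
	return err
}

// chooseUpload mirrors pv's behaviour: resumable when a size is known,
// non-resumable (with a warning) when it is not.
func chooseUpload(r io.Reader, size int64) error {
	if size <= 0 {
		fmt.Fprintln(os.Stderr, "warning: no size given, falling back to non-resumable upload")
		return uploadSimple(r)
	}
	return uploadResumable(r, size)
}

func main() {
	if err := chooseUpload(strings.NewReader("demo"), 4); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```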
Agreed
I'll make a pull request for that feature soon, stay tuned.
EDIT: This might actually be a lot more difficult than I first thought. Your current implementation of UploadStdin takes an io.ReadCloser, but the ResumableMedia API requires the io.ReaderAt interface. I've tried changing it to input *os.File and it at least compiles, but of course the execution does not work properly:
```
read /dev/stdin: illegal seek
```
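The failure is easy to reproduce in isolation - seeking on a pipe returns ESPIPE, which is reported as "illegal seek":

```go
package main

import (
	"fmt"
	"io"
	"os"
)

func main() {
	// With stdin attached to a pipe (e.g. `cat file | ./seektest`),
	// Seek fails with ESPIPE, which shows up as "illegal seek".
	if _, err := os.Stdin.Seek(0, io.SeekStart); err != nil {
		fmt.Println(err) // e.g. "seek /dev/stdin: illegal seek"
	}
}
```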
As far as I can tell, your google-api-go-client repository does not do the actual file uploads; it only passes those arguments on to another library from Google, which indeed tries to seek. Pipes are non-seekable, so that call can only fail.
One possibility would be to wrap something around os.Stdin which discards input when seeking forward and throws an error when trying to seek backwards, as that's of course not possible. Sounds like a really ugly hack though - any other ideas?
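Roughly, the wrapper I have in mind would look like this - just a sketch of the idea, not final code:

```go
package main

import (
	"errors"
	"fmt"
	"io"
	"strings"
	"sync"
)

// forwardReaderAt adapts a sequential reader (such as os.Stdin) to
// io.ReaderAt. Forward seeks are emulated by discarding the skipped
// bytes; backward seeks fail, since a pipe cannot rewind.
type forwardReaderAt struct {
	mu  sync.Mutex
	r   io.Reader
	pos int64 // next offset we can serve
}

var errBackwardSeek = errors.New("cannot read backwards on a pipe")

func (f *forwardReaderAt) ReadAt(p []byte, off int64) (int, error) {
	f.mu.Lock()
	defer f.mu.Unlock()
	if off < f.pos {
		return 0, errBackwardSeek
	}
	// Emulate a forward seek by throwing the intervening bytes away.
	if skip := off - f.pos; skip > 0 {
		if _, err := io.CopyN(io.Discard, f.r, skip); err != nil {
			return 0, err
		}
		f.pos = off
	}
	n, err := io.ReadFull(f.r, p)
	f.pos += int64(n)
	if err == io.ErrUnexpectedEOF {
		// io.ReaderAt requires a non-nil error on short reads.
		err = io.EOF
	}
	return n, err
}

func main() {
	r := &forwardReaderAt{r: strings.NewReader("hello world")}
	buf := make([]byte, 5)
	n, err := r.ReadAt(buf, 6) // skips "hello ", reads "world"
	fmt.Printf("%q %v %d\n", buf[:n], err, n)
}
```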
Just a short ping @prasmussen, as I'm not sure whether you'll get a notification about my EDIT. I'll possibly try the quick'n'dirty way later on - it would work for me, but of course it isn't a proper solution.
Oh, that's too bad. I haven't followed the code to see what happens to the Reader, so I don't really have any other ideas at the moment. I'm guessing seeking is required when resuming an upload -- I'm not sure why you get a seek during a normal upload, though. If you get the wrapper to work and it seems reliable, I will accept the pull request and could mark it as an experimental feature or something.
Short update: I've now successfully uploaded multiple big files (several GB) over the last three days, some of which took over 10 hours. My wrapper works fine - I just have to add some additional code so that MIME type detection works; I've disabled it for now and always use "application/octet-stream". Other than that, the solution works far better than I expected - PR coming soon.
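For the MIME detection, my current plan (an assumption on my side, not yet in the code) is to sniff the first 512 bytes with Go's http.DetectContentType and glue them back onto the stream before uploading:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"os"
)

// sniffMime reads up to 512 bytes (all that DetectContentType inspects),
// detects the content type, and returns a reader that replays the sniffed
// bytes followed by the rest of the stream.
func sniffMime(r io.Reader) (string, io.Reader, error) {
	buf := make([]byte, 512)
	n, err := io.ReadFull(r, buf)
	if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
		return "", nil, err
	}
	mime := http.DetectContentType(buf[:n])
	return mime, io.MultiReader(bytes.NewReader(buf[:n]), r), nil
}

func main() {
	mime, body, err := sniffMime(os.Stdin)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("detected:", mime)
	io.Copy(io.Discard, body) // the uploader would consume body instead
}
```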
Sounds promising :)
ResumableMedia has been deprecated in google-api-go-client, and according to my tests with gdrive 2, the problem with long uploads failing through Media seems to be fixed.
```
$ cat video.mp4 | pv -q -L 50k | gdrive upload - video.mp4
Uploading video.mp4
Uploaded 0B3X9GlR6EmbnSFRwSTNMZkY3Nzg at 51.2 KB/s, total 1.8 GB
pv -q -L 50k 21.88s user 18.02s system 0% cpu 10:00:43.71 total
```
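For anyone wanting to do the equivalent from Go directly: the Media path accepts a plain io.Reader, so stdin works without any ReaderAt wrapper. A rough sketch against the drive/v3 client - my own illustration, not gdrive's code, so double-check the package docs for the exact signatures:

```go
package example

import (
	"context"
	"fmt"
	"net/http"
	"os"

	drive "google.golang.org/api/drive/v3"
	"google.golang.org/api/option"
)

// uploadStdin streams stdin to Drive through the simple Media path, which
// only needs an io.Reader. authClient must already be OAuth-authenticated;
// building it is out of scope for this sketch.
func uploadStdin(ctx context.Context, authClient *http.Client, name string) error {
	srv, err := drive.NewService(ctx, option.WithHTTPClient(authClient))
	if err != nil {
		return err
	}
	f, err := srv.Files.Create(&drive.File{Name: name}).Media(os.Stdin).Do()
	if err != nil {
		return err
	}
	fmt.Println("uploaded, id:", f.Id)
	return nil
}
```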
Basically, I'm facing the same issue as in #20. As far as I can tell from the last commit, the bugfix doesn't apply to stdin and only helps when directly specifying large files. Since I need both a progress bar and some rate limiting, I've created a bash script which recursively uploads whole directories including all their files. I've also implemented MD5 checksumming, so that unchanged files won't get re-uploaded.
I'm calling the application like this:

```
cat largefile | pv -pteraT -L <ratelimit> -s <totalfilesize> | ./gdrive upload -s -t <filename> -p <folderid>
```

After quite a few hours of uploading, it failed with "Error 401: Invalid credentials" as listed above. Could that fix also be implemented for uploading via stdin?
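I assume the fix boils down to refreshing the access token mid-upload, since access tokens typically expire after about an hour. In Go, the usual pattern with golang.org/x/oauth2 looks roughly like this - my illustration of the general technique, not necessarily what the referenced commit does:

```go
package example

import (
	"context"
	"net/http"

	"golang.org/x/oauth2"
)

// newRefreshingClient builds an *http.Client whose transport swaps in a
// fresh access token whenever the current one expires, so transfers that
// run for many hours don't die with a 401 halfway through.
func newRefreshingClient(ctx context.Context, conf *oauth2.Config, tok *oauth2.Token) *http.Client {
	// conf.TokenSource uses the refresh token to mint new access tokens;
	// oauth2.NewClient wraps it in a self-refreshing transport.
	return oauth2.NewClient(ctx, conf.TokenSource(ctx, tok))
}
```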