Closed: awaisas closed this issue 5 days ago
I don't think there's a reasonable way to manage knowledge of all of the previous uploads without potentially losing data. The same filename will be reused for a variety of reasons and the original path can't be retained.
In practice, I separate my upload and my long-term storage. I upload from a directory and then either delete the input or sync + backuplocal + delete the input. In either case, there's a separation of starting an upload and finishing an upload.
Thank you for coming back on this. I understand it can be challenging to keep track of what is on local storage and what is on the server. The upload folder can be a good idea, but the upload job always fails at some point. Is there a way I could do the following:
1) Create an upload folder and add all the media so it is ready for upload.
2) Automatically delete each input file once it has successfully uploaded.
3) When it fails: run cleanup and then restart the process.
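Steps 1 and 3 of this idea can be sketched as a small retry loop. This is only a sketch: `gopro` below is a stub function so the snippet runs standalone; drop the function to use the real CLI, and note that the folder path is a placeholder.

```shell
#!/bin/sh
# Sketch of steps 1 and 3: stage media in one folder, start the upload,
# and on failure rerun with no arguments until it completes.
# `gopro` here is a STUB so the sketch runs standalone; remove the
# function definition to call the real CLI instead.
attempts=0
gopro() {
  attempts=$((attempts + 1))
  echo "stub: gopro $* (run $attempts)"
  [ "$attempts" -ge 3 ]       # pretend the first two runs time out
}

gopro upload /path/to/upload-folder || true  # step 1: enqueue staged media
until gopro upload; do                       # step 3: rerun (no args) to resume
  echo "upload interrupted; retrying" >&2
done
echo "finished after $attempts runs"
```

With the real CLI you would likely add a `sleep` between retries so a flaky network isn't hammered.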
If the upload fails, you can run it again with no arguments to resume. You only need to supply arguments when you're adding new files.
e.g., I don't always clean up my upload directory as soon as it's done, but I create a directory for each SD card shooting session and enqueue that for uploading. So I might run `gopro upload /some/stuff/20230212`, and if it fails, `gopro upload` will resume.
It'd certainly be possible to delete when done. The nice thing about not doing that is that I run `backuplocal` when I've completed a sync. `backuplocal` with `--refdir` will hard link (or copy) the uploaded media. Either one of these could manage the deletion (i.e., `rename` instead of `link`).
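For readers unfamiliar with the distinction being drawn here: a hard link leaves the original file in place, while a rename removes it from the source directory. A standalone illustration using plain POSIX file operations in a temp directory (nothing gopro-specific):

```shell
#!/bin/sh
# Illustration of "link" vs "rename" semantics with throwaway files.
work=$(mktemp -d)
echo clip > "$work/GOPR0001.MP4"

# Hard link: the backup shares the data, and the source file remains.
ln "$work/GOPR0001.MP4" "$work/backup_link.MP4"
link_keeps_source=$([ -f "$work/GOPR0001.MP4" ] && echo yes || echo no)

# Rename: the file moves, so the source directory is emptied as a side effect.
mv "$work/GOPR0001.MP4" "$work/backup_moved.MP4"
rename_keeps_source=$([ -f "$work/GOPR0001.MP4" ] && echo yes || echo no)

echo "link keeps source:   $link_keeps_source"
echo "rename keeps source: $rename_keeps_source"
rm -rf "$work"
```

This is why a rename-based backup would double as the "delete the input when done" behavior discussed above.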
Thank you for your reply. I believe the trick here is to run `upload` without arguments until everything is finished. I will give it a try.
Yes, but it will still reupload anything you tell it to.
There's one small thing where it won't reupload anything that's currently known to be pending on that particular instance, but that's pretty narrow. I'm sure we could have better UX with some kind of "drop box" type mechanism that doesn't upload the same file more than once anywhere on an instance. It's just not as simple as "just the filename", and renaming a complete path might create a challenge. I assume one would also want the ability to reupload a file (I do this sometimes when GoPro cloud breaks).
I have let it run for a couple of hours now. It occasionally crashes, but overall it is going fine; mostly it is timeouts. I then resume the remaining uploads with `gopro upload` (without args). From time to time I have to go into the database and remove files it has problems reading (I suspect corrupt files). A couple of features that would be great if you were to update the library:
1) In case of an error on a particular media item, stop resuming it but let the others continue.
2) Some kind of retries before it gives up on the media.
3) State which media it is struggling with and allow removing it directly from the CLI. Right now, I am finding the file by its media_id and then running a "DELETE FROM" command directly on the DB.
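A sketch of that manual cleanup step. The table and column names below are guesses for illustration only (inspect the real schema with sqlite3's `.tables` and `.schema` first); the snippet builds a throwaway database so it runs standalone:

```shell
#!/bin/sh
# HYPOTHETICAL cleanup by media_id. The `uploads` table and its columns
# are assumptions for illustration; check the real schema before running
# anything like this against a live database.
db=$(mktemp)
sqlite3 "$db" "CREATE TABLE uploads (media_id TEXT PRIMARY KEY, file TEXT);
INSERT INTO uploads VALUES ('abc123', '/data/GOPR0001.MP4'),
                           ('def456', '/data/GOPR0002.MP4');"

bad_id=abc123     # the item that keeps failing
sqlite3 "$db" "DELETE FROM uploads WHERE media_id = '$bad_id';"

left=$(sqlite3 "$db" "SELECT count(*) FROM uploads;")
echo "rows remaining: $left"
rm -f "$db"
```

Backing up the database file before hand-editing it is cheap insurance.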
Anyway, as it is going now, it is manageable. Again, thank you for your help and for great software that makes my life easier here.
Interesting… I don't have those particular problems. There are some smaller retries, but I usually just get timeouts from the cloud service. When media fails for me, it fails processing on GoPro's side.
Can you expand a bit on how you're ending up with unreadable media?
I am using the software through Docker. As I have recently subscribed to GoPro Plus, I am trying to upload my existing library, which I have stored locally. Uploading works fine until there is an error (typically a timeout). If I then run the upload again, it restarts the whole process: it creates new upload IDs and starts over, including the files that were already successfully uploaded, instead of continuing the remaining unfinished uploads. I assume it could be due to me running through Docker.
I am using the following command: `docker run --interactive --volume $PWD:/usr/lib/gopro --volume "path/to/the/media:/data" --rm dustin/gopro:master gopro upload /data -v`
Could there be other files that should be kept persistent for it to retain the current upload status?
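One way to sanity-check persistence is to pin a single host directory to the same container mount on every run and watch whether state files survive between runs. A configuration sketch only, with paths copied from the command above; that `/usr/lib/gopro` is where the tool keeps its working state is an assumption taken from that command, not confirmed:

```shell
# Configuration sketch: reuse ONE host directory for the state mount on
# every run, so anything the tool writes there persists across runs.
# /usr/lib/gopro as the state location is an assumption from the
# docker command quoted above.
STATE_DIR="$PWD/gopro-state"
mkdir -p "$STATE_DIR"
docker run --interactive --rm \
  --volume "$STATE_DIR:/usr/lib/gopro" \
  --volume "path/to/the/media:/data" \
  dustin/gopro:master gopro upload /data -v
ls -l "$STATE_DIR"   # any database/state files written should show up here
```

If the directory stays empty after a run, the resume state is likely being written somewhere else inside the container.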