rfgamaral / docker-gphotos-uploader

🐳 Mass upload media folders to your Google Photos account with this Docker image.
MIT License

Stops after done each time? #15

Open avidrissman opened 4 years ago

avidrissman commented 4 years ago

I run this in Docker on my Synology. It used to run without stopping, which was nice, as I prefer that it stay running so that I can just dump photos onto my NAS and have them show up on Google Photos. I upgraded a while ago for the wildcard include/exclude and login fixes, but since then I’ve been getting this message:

Docker container gphotos-uploader stopped unexpectedly

And indeed, watching the log during an upload, this is what seems to happen after it completes a run:

Upload completed: file=/photos/xxxxx
File uploaded successfully: file=/photos/xxxxx
2020/02/10 10:48:52 Removing file's upload URL from DB: /photos/xxxxx
2020/02/10 10:48:52 all uploads done
2020/02/10 10:48:52 all deletions done
[cmd] run exited 0
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] waiting for services.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.

It seems to shut itself down after every successful run.

Can it not do that? Thanks.

rfgamaral commented 4 years ago

Can you please clarify something for me?

If you set up the container from scratch as explained here and then authenticate as explained here, does it run once (it should) and then the container stops, or does it keep running?

avidrissman commented 4 years ago

I do this in Synology’s GUI, so I don’t know how the setup maps.

What happened is that I set up the old version, and it used to run forever. Then I pulled the new version of your container, did a wipe of the container (while keeping the mapped config file) and that worked great but had the auth issue. So when you fixed that, I did a container wipe, and then the login worked, but it seems to be shutting down the container every time.

rfgamaral commented 4 years ago

> I do this in Synology’s GUI, so I don’t know how the setup maps.

Did you tick this option when creating the container?

[screenshot of the container creation option]
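
(Assuming that screenshot is Synology's auto-restart checkbox, a rough way to check the equivalent Docker setting over SSH; the container name gphotos-uploader is an assumption:)

```sh
# Show the restart policy Docker has recorded for the container.
docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' gphotos-uploader

# Roughly what the auto-restart checkbox enables, if it is not already set.
docker update --restart=always gphotos-uploader
```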

avidrissman commented 4 years ago

Yes, that’s checked.

In earlier versions of the container, it would never need to be restarted. Now I launch the Synology UI and there are always dozens of alerts that the container had to be restarted.


rfgamaral commented 4 years ago

But how are you doing the auth process if you're creating the container through Synology's UI?

Either way, can you please do the whole process from scratch in the CLI instead? You can always use a different container name to avoid clashing. And please report back if the same problem persists...

avidrissman commented 4 years ago

I went into the GUI and made the container. I sshed into the NAS to authenticate.

This was in a version before it was shutting itself down, and the auth has held since.

I can play with it further later.


avidrissman commented 4 years ago

I updated and still have this issue. The latest log tail:

[info]   350 files pending to be uploaded in folder '/photos'.
[info]   350 processed files: 350 successfully, 0 with errors
[cmd] run exited 0
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] waiting for services.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.

At that point the container shut itself down, and the Synology relaunched it a few times:

[screenshot of the restart notifications]

at which point I caught it and manually turned it off.

Given that gphotos-uploader-cli is a run-and-done tool, it seems to me that the issue is the container being set up to shut down after the tool completes, rather than staying up and letting the cron job run.

My workaround is to manually launch the container every time I have a batch of photos.
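
(For reference, over SSH the rough CLI equivalent of that manual relaunch, with the container name gphotos-uploader assumed, would be:)

```sh
# Relaunch the stopped container for a one-off upload run.
docker start gphotos-uploader
```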

I’m open to helping diagnose, but not open to re-creating my docker instance. This is my real data and I’m not open to doing anything that would risk my Google Photos collection.

avidrissman commented 4 years ago

I know nothing about Docker, but as per https://stackoverflow.com/questions/28212380/why-docker-container-exits-immediately:

> A docker container exits when its main process finishes.

Perhaps you need to have your main process just sleep, and rely on gphotos-uploader-cli being spawned by cron?
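
(To illustrate the pattern, a minimal sketch using busybox crond as the long-lived process; the schedule, log path, and exact uploader invocation below are guesses, not necessarily what this image actually ships:)

```sh
#!/bin/sh
# Sketch of a cron-driven entrypoint: crond runs in the foreground and never
# exits, so the container stays up; the uploader only ever runs from cron.
cat > /etc/crontabs/root <<'EOF'
# Hypothetical schedule: run the uploader hourly and append output to a log.
0 * * * * gphotos-uploader-cli push >> /var/log/gphotos-uploader.log 2>&1
EOF

exec crond -f -l 8
```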

rfgamaral commented 4 years ago

> Perhaps you need to have your main process just sleep, and rely on gphotos-uploader-cli being spawned by cron?

This is exactly how it has been implemented since I first released this container.

> I’m open to helping diagnose, but not open to re-creating my docker instance. This is my real data and I’m not open to doing anything that would risk my Google Photos collection.

I don't have this issue and I can't reproduce it, so I need your help understanding what's going on. You don't need to recreate your Docker instance; you can create a new one. And that's exactly what I need you to do, as I've asked you before:

> Either way, can you please do the whole process from scratch in the CLI instead? You can always use a different container name to avoid clashing.

The important bits here are to do this through the CLI (not the Synology UI), to use a different container name (so as not to affect your production container; gphotos-uploader-test, for instance), and to point both the config and photos volumes to different directories than your real data ones (for testing purposes); a rough sketch of such a command is at the end of this comment. You can use the same Google account and credentials because the photos directory for this new container will be empty, so nothing will be uploaded.

After doing this you should see two containers in the Synology UI (one created through the UI, the other through CLI) with different names. Let them (or just the test one) run for a couple of days and check if this test container also shuts itself down. Please follow the README instructions carefully.
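
Something along these lines, as a sketch only; the image name, the /config mount point, and the host paths below are assumptions, and the README remains the source of truth for the exact run command:

```sh
# Test container with its own name and its own (empty) config/photos
# directories, so the production container and real data stay untouched.
docker run -d \
  --name gphotos-uploader-test \
  --restart always \
  -v /volume1/docker/gphotos-test/config:/config \
  -v /volume1/docker/gphotos-test/photos:/photos \
  rfgamaral/docker-gphotos-uploader
```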