Update: I don't think the issue just resolved itself while I was distracted for half an hour. I think I manually executed fusermount -uz /mnt/gdrive (and forgot). I can reproduce:
root@vultr:~# systemctl status gdrive
● gdrive.service - GDrive Daemon
Loaded: loaded (/etc/systemd/system/gdrive.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2018-09-25 00:51:11 UTC; 1min 37s ago
Main PID: 1098 (rclone)
Tasks: 9 (limit: 1110)
CGroup: /system.slice/gdrive.service
└─1098 /usr/bin/rclone --allow-non-empty --allow-other mount gdrive: /mnt/gdrive --uid=1000 --gid=1000 --size-only --dir-cache-time=2m --vfs-read-chunk-size=64M --vfs-cache-max-age
Sep 25 00:51:11 vultr.guest systemd[1]: Started GDrive Daemon.
root@vultr:~# kill -9 1098
root@vultr:~# systemctl status gdrive
● gdrive.service - GDrive Daemon
Loaded: loaded (/etc/systemd/system/gdrive.service; enabled; vendor preset: enabled)
Active: failed (Result: signal) since Tue 2018-09-25 00:52:56 UTC; 3s ago
Process: 1098 ExecStart=/usr/bin/rclone --allow-non-empty --allow-other mount gdrive: /mnt/gdrive --uid=1000 --gid=1000 --size-only --dir-cache-time=2m --vfs-read-chunk-size=64M --vfs-cache
Main PID: 1098 (code=killed, signal=KILL)
Sep 25 00:51:11 vultr.guest systemd[1]: Started GDrive Daemon.
Sep 25 00:52:56 vultr.guest systemd[1]: gdrive.service: Main process exited, code=killed, status=9/KILL
Sep 25 00:52:56 vultr.guest systemd[1]: gdrive.service: Failed with result 'signal'.
root@vultr:~# ls -la /mnt/gdrive
ls: cannot access '/mnt/gdrive': Transport endpoint is not connected
root@vultr:~# fusermount -uz /mnt/gdrive
root@vultr:~# ls -la /mnt/gdrive
total 12
drwxr-xr-x 3 root root 4096 Sep 23 13:10 .
drwxr-xr-x 19 root root 4096 Sep 24 04:28 ..
drwxr-xr-x 2 root root 4096 Sep 23 13:10 plexguide
root@vultr:~# systemctl status gdrive
● gdrive.service - GDrive Daemon
Loaded: loaded (/etc/systemd/system/gdrive.service; enabled; vendor preset: enabled)
Active: failed (Result: signal) since Tue 2018-09-25 00:52:56 UTC; 29s ago
Process: 1098 ExecStart=/usr/bin/rclone --allow-non-empty --allow-other mount gdrive: /mnt/gdrive --uid=1000 --gid=1000 --size-only --dir-cache-time=2m --vfs-read-chunk-size=64M --vfs-cache
Main PID: 1098 (code=killed, signal=KILL)
Sep 25 00:51:11 vultr.guest systemd[1]: Started GDrive Daemon.
Sep 25 00:52:56 vultr.guest systemd[1]: gdrive.service: Main process exited, code=killed, status=9/KILL
Sep 25 00:52:56 vultr.guest systemd[1]: gdrive.service: Failed with result 'signal'.
root@vultr:~# systemctl start gdrive
root@vultr:~# systemctl status gdrive
● gdrive.service - GDrive Daemon
Loaded: loaded (/etc/systemd/system/gdrive.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2018-09-25 00:53:29 UTC; 2s ago
Main PID: 1198 (rclone)
Tasks: 8 (limit: 1110)
CGroup: /system.slice/gdrive.service
└─1198 /usr/bin/rclone --allow-non-empty --allow-other mount gdrive: /mnt/gdrive --uid=1000 --gid=1000 --size-only --dir-cache-time=2m --vfs-read-chunk-size=64M --vfs-cache-max-age
Sep 25 00:53:29 vultr.guest systemd[1]: Started GDrive Daemon.
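For illustration only, one way to have the unit clean up the stale FUSE mount itself, no matter how rclone dies, would be an ExecStopPost= hook, since systemd runs ExecStopPost= even when the main process exits abnormally. This is just a sketch with abbreviated flags, not the gdrive.service that plexguide actually ships:

[Unit]
Description=GDrive Daemon
After=network-online.target

[Service]
Type=simple
ExecStart=/usr/bin/rclone --allow-non-empty --allow-other mount gdrive: /mnt/gdrive
# Runs even after an abnormal exit (e.g. SIGKILL), clearing the
# "Transport endpoint is not connected" mount before the next start.
ExecStopPost=/bin/fusermount -uz /mnt/gdrive
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

With something like that in place, systemctl start gdrive should succeed immediately after a crash instead of requiring a manual fusermount -uz first.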
Yeah, I ran into this on a test. Will look to see what can be done.
Did some rewrites and have not seen any reports. Please reopen if encountered.
Describe the bug
Not sure why, but after a seemingly random amount of time my gdrive service gets killed, and it does not start up again.
I am unable to manually restart it:
But after attempting to restart it, I got distracted for half an hour, and it seems that a second attempt to start it worked:
To Reproduce
When running, kill rclone:
Additional attempts to restart it are not working, though. Maybe I need to wait longer?
Expected behavior
What is affected by this bug?
When does this occur?
Not sure what is killing rclone, but the failure to clean up seems to happen any time rclone is killed.
Please tell us about your environment:
Logs
Additional context
On my old server, I had a bash script that trapped TERM and EXIT to execute fusermount -uz, and that seemed to work reliably. Perhaps we need to execute a similar script in ExecStart instead of executing rclone directly, so we can more reliably execute cleanup? Here's my script:
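The original script itself isn't shown above; purely as an illustration of the idea, a minimal wrapper along those lines (mount point, remote name, and flags are assumptions, not the author's actual script) could look like:

#!/usr/bin/env bash
# Hypothetical wrapper for ExecStart: mount, then always lazy-unmount on exit.
MOUNTPOINT=/mnt/gdrive

cleanup() {
    # Lazy unmount so a dead rclone does not leave a stale FUSE endpoint behind.
    fusermount -uz "$MOUNTPOINT"
}
trap cleanup TERM EXIT

# Run rclone in the background and wait on it, so the traps can still fire
# when rclone is killed out from under the wrapper.
/usr/bin/rclone --allow-non-empty --allow-other mount gdrive: "$MOUNTPOINT" &
wait $!

Note that this only helps as long as the wrapper itself survives; if the whole cgroup is sent SIGKILL, no trap can run, which is where a unit-level ExecStopPost= hook is more robust.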