thedataflows / autorclone

Automates rclone sync
MIT License

Doesn't detect rclone running and errors out #1

Open DarrenPIngram opened 2 years ago

DarrenPIngram commented 2 years ago

An interesting application that I hope to get running urgently, thanks to a sync screwing up badly and leaving many TBs of duplicated files!

However, on Debian 11, I am getting the following error. I would not have imagined it needs its own rclone installation; surely it should be capable of using the existing rclone, even if other unrelated operations are ongoing (rclone itself is happy to do that).

Unless there is user error somewhere!

Installation: unzipped the archive, copied it to /home/user/.local/bin, ran chmod +x, and verified it works with autorclone -h

autorclone sync /home/user/storage/GDLIB1_DIST5 GDAU:Library
ERRO[0000] Failed to sync '/home/user/storage/GDLIB1_DIST5' to 'GDAU:Library' because '/usr/bin/rclone' is already running with PID '3133' 
INFO[0000] Finished. 0 tasks successful. 1 tasks failed. 
cr1cr1 commented 2 years ago

Wow, the very first user (besides myself) for this. No clue how you found it, but here you are...

Indeed, it relies on an external rclone and does not require a particular one; it is just a wrapper over any rclone installation (in PATH) to automate it.

So, there is a bug in rclone process detection on Linux. Let me check on that.
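The symptom above can be reproduced with a naive name-based check. This is a minimal Python sketch for illustration, not autorclone's actual code (which is not shown in this thread); it scans /proc, so it is Linux-only:

```python
import os

def find_processes_by_name(name):
    """Scan /proc (Linux) for PIDs whose executable name matches `name`."""
    pids = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open(f"/proc/{entry}/comm") as f:
                if f.read().strip() == name:
                    pids.append(int(entry))
        except OSError:
            continue  # process exited while we were scanning
    return pids

# A naive wrapper refuses to start if *any* rclone is running -- exactly
# the bug reported here: unrelated rclone jobs block the sync.
if find_processes_by_name("rclone"):
    print("ERRO Failed to sync: rclone is already running")
```

The problem with matching only on the process name is visible immediately: any rclone instance, however unrelated, trips the check.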

DarrenPIngram commented 2 years ago

Great! I searched to find a way to force a cleanup sync after exhausting what I could try with rclone. So really hoping your wrapper can delete the 'wrong data' and keep it in sync.

I like the multi-destination operation too; I can see that being of use/reducing script numbers sometime too.

Look forward to you figuring out this little 'roadblock'.

D


cr1cr1 commented 2 years ago

> Great! I searched to find a way to force a cleanup sync after exhausting what I could try with rclone. So really hoping your wrapper can delete the 'wrong data' and keep it in sync.

Please keep in mind that all file operations are done by rclone itself. I just found a combination of parameters to pass to rclone to do a clean sync. For me, it works (except junction vs. directory symlink on Windows; for that I am considering a PR to rclone). Other than that, it does keep destinations clean, because a sync means deleting whatever is extra in the destination. My goal is to use this tool regularly for a 3-way sync (or N-way sync) via cloud storage providers.

> I like the multi destination operation too, I can see that being of use/reducing script numbers sometime too. Look forward to you figuring out this little 'roadblock'.

Thanks. The idea is to run it as a daemon in the background, use notify (or a similar mechanism) to detect file changes, and perform syncs by itself, not bothering the user beyond the initial setup and, most probably, some conflict resolution from time to time.

cr1cr1 commented 2 years ago

I have tested on Debian... it works just fine. It will look for any existing rclone process already running and stop if one is found. I am aiming to filter based on command-line arguments as well to make it more... accurate. For now, only one rclone is allowed at any given moment.

DarrenPIngram commented 2 years ago

Get the same error with the new version.

I do have several scripts running.

pgrep rclone
3133
9246
18033
22764
26849
28485
82247
100808
105663
108877

Could the code detect multiple instances? Will it be possible in the future to not need exclusive use of rclone processes?

I killed all processes and rerunning the script worked (well it is ongoing).

I had to Ctrl-C out a couple of times to change args, and sometimes rclone was still running as a process according to ps aux, and the script didn't detect/kill that single instance.

(Going forward I note your earlier comments). I'd find it preferable to only run/call on-demand with arguments (or in a script) as I have many sync batches from different sources.

Hope this helps!

tks.

cr1cr1 commented 2 years ago

> Could the code detect many incidences? Will it be possible in the future to not need exclusive use of rclone processes?

Already changed to a better process-detection library, and now the full rclone command line is matched, so no two rclone jobs with the same arguments should run. This means your other running rclone instances should not be taken into account; they should all run at the same time now (assuming you did not start rclone manually with the exact same parameters as autorclone does).
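Matching on the full command line, rather than just the process name, could look something like this on Linux. This is a sketch under my own assumptions; the actual library autorclone switched to is not named in the thread:

```python
import os

def same_job_running(cmdline_args):
    """Return PIDs (other than our own) whose full command line matches
    `cmdline_args` exactly.

    On Linux, /proc/<pid>/cmdline stores the arguments NUL-separated,
    with a trailing NUL.
    """
    matches = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit() or int(entry) == os.getpid():
            continue
        try:
            with open(f"/proc/{entry}/cmdline", "rb") as f:
                args = f.read().decode(errors="replace").split("\0")[:-1]
        except OSError:
            continue  # process exited while we were scanning
        if args == cmdline_args:
            matches.append(int(entry))
    return matches

# Only an rclone started with these *exact* arguments counts as a duplicate;
# rclone instances syncing other sources/destinations are ignored.
job = ["rclone", "sync", "/home/user/storage/GDLIB1_DIST5", "GDAU:Library"]
print(same_job_running(job))
```

With this check, Darren's nine unrelated rclone processes would no longer block the sync, because none of them share the exact argument list.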

Autorclone does not kill other instances, it just detects them. I do not think it is a good idea to kill anything on a guest system :)

> I'd find it preferable to only run/call on-demand with arguments (or in a script) as I have many sync batches from different sources.

This will always be an option, since this is how I run it myself right now.

I am planning to introduce the concept of jobs: sets of rclone sources and destinations to be spawned as child rclone jobs.

Then, when running as a daemon or background service, it will also run these jobs on a predefined schedule. All of this is optional, of course; you will always be able to run things on demand, from a terminal, scripts, or other programs.
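The jobs idea could be sketched roughly as below. The job fields, the `rclone` parameter, and the scheduling loop are all my assumptions for illustration, not autorclone's actual design:

```python
import subprocess
import time

# Hypothetical job definition: one source synced to several destinations,
# each destination handled by its own child rclone process.
JOBS = [
    {
        "source": "/home/user/storage/GDLIB1_DIST5",
        "destinations": ["GDAU:Library", "OTHER:Library"],
        "interval_seconds": 3600,
    },
]

def run_job(job, rclone="rclone"):
    """Spawn one child `rclone sync` per destination, wait for all of
    them, and return their exit codes."""
    children = [
        subprocess.Popen([rclone, "sync", job["source"], dest])
        for dest in job["destinations"]
    ]
    return [child.wait() for child in children]

def daemon_loop(jobs, rounds, rclone="rclone", sleep=time.sleep):
    """Run every job on a fixed schedule for `rounds` iterations
    (a real daemon would loop forever)."""
    for _ in range(rounds):
        for job in jobs:
            run_job(job, rclone=rclone)
        sleep(min(j["interval_seconds"] for j in jobs))
```

Spawning destinations concurrently is one possible reading of "multi destination operation"; a sequential loop would also work if the destinations share bandwidth.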

DarrenPIngram commented 2 years ago

"Already changed to a better library of process detection, and now the full rclone command is matched. So no two rclone jobs with the same arguments should run."

You have been busy for sure! Hopefully other people will discover this project, and maybe when you have a version you are happy to publicise, you can get the word out too.

"I am planning to introduce the concept of jobs: just sets of source"

I can see the attraction of this when it stops people needing to write simple scripts and utilise cron. You have some ambitious plans for the application!
