l3uddz / cloudplow

Automatic rclone remote uploader, with support for multiple remote/folder pairings. UnionFS Cleaner functionality: deletion of UnionFS whiteout files and their corresponding files on rclone remotes. Automatic remote syncer: sync between different remotes via a Scaleway server instance that is created and destroyed at every sync.
GNU General Public License v3.0

rclone_sleeps settings are ignored #110

Closed lordyod closed 2 years ago

lordyod commented 2 years ago

Describe the bug

Changing the settings for rclone_sleeps in config.json does not change the amount of time the remote sleeps.

To Reproduce

Steps to reproduce the behavior:

  1. Change rclone_sleeps settings to something other than default (example: "sleep": 12 instead of "sleep": 25)
  2. Restart cloudplow
  3. Wait for the timeout to occur
  4. See error

Expected behavior

Using the following settings, I expect the remote to sleep for 12 hours:

    "rclone_sleeps": {
        "Failed to copy: googleapi: Error 403: User rate limit exceeded": {
            "count": 5,
            "sleep": 12,
            "timeout": 3600
        }
    },
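For context, here is a minimal sketch of how an rclone_sleeps rule like the one above could plausibly be interpreted (the helper and data-structure names are hypothetical, not cloudplow's actual implementation): match the trigger string in rclone's output, count hits within the "timeout" window, and suspend the remote for "sleep" hours once "count" is reached.

```python
import time

# Hypothetical sketch -- not cloudplow's actual code. Field meanings per
# the sample config: "count" = trigger occurrences before suspending,
# "sleep" = hours to suspend the remote, "timeout" = seconds the
# trigger counter stays valid before resetting.
RULES = {
    "Failed to copy: googleapi: Error 403: User rate limit exceeded": {
        "count": 5,
        "sleep": 12,
        "timeout": 3600,
    }
}

def check_line(line, counters, now=None):
    """Return a suspension length in seconds if a rule fires, else None."""
    now = now if now is not None else time.time()
    for trigger, rule in RULES.items():
        if trigger not in line:
            continue
        first_seen, seen = counters.get(trigger, (now, 0))
        # Reset the counter if the first hit is older than "timeout".
        if now - first_seen > rule["timeout"]:
            first_seen, seen = now, 0
        seen += 1
        counters[trigger] = (first_seen, seen)
        if seen >= rule["count"]:
            return rule["sleep"] * 3600  # hours -> seconds
    return None
```

With this reading, a "sleep" of 12 should translate to a 12-hour suspension, which is the behavior the reporter expected.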

Screenshots

As soon as the remote hits the rate limit I get this in the log:

Mar 29 03:19:33 holodeck python3[21723]: 2022-03-29 03:19:33,898 - INFO       - cloudplow            - check_suspended_uploaders      - FILES is still suspended due to a previously aborted upload. Normal operation in 23 hours, 59 minutes and 58 seconds at 2022-03-30 03:19:32

Logs

A trimmed debug log. This was taken about 12 hours after the suspension started; with a sleep value of 12, uploads should have resumed by now, shouldn't they?

System Information

saltydk commented 2 years ago

Did you clear the cache when you edited the configuration?

lordyod commented 2 years ago

Yes. After removing /opt/cloudplow/cache.db and restarting the service, uploading runs until the rate limit is hit, and then the sleep timer is reset to 25 hours instead of 12.

Edit: Also, wouldn't the expected behavior be to start uploads immediately if the current date is past the timeout date + the sleep period?
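The check suggested in the edit above could be expressed as follows (a sketch with hypothetical names, not cloudplow's code): if the current time is already past the suspension start plus the configured sleep period, uploads should resume immediately instead of waiting out a freshly reset timer.

```python
from datetime import datetime, timedelta

def suspension_expired(suspended_at, sleep_hours, now=None):
    """True if the current time is past the suspension start plus the
    configured sleep period, i.e. uploads should resume immediately."""
    now = now or datetime.now()
    return now >= suspended_at + timedelta(hours=sleep_hours)
```

For example, with the suspension starting at 2022-03-29 03:19:32 (as in the log above) and a 12-hour sleep, any restart after 15:19:32 that day should resume uploads rather than re-suspend.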

saltydk commented 2 years ago

I think I've found the place in the code that just sets it to 25, and I could probably tweak that, but changing it to a lower value seems pointless to me since API bans usually last about that long.

Do you have an actual reason for wanting it lower?
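The bug pattern described here has a common shape (illustrative only; this is not the actual cloudplow source): a suspension routine that has the per-rule config available but writes a hard-coded number of hours instead of reading the "sleep" value.

```python
# Illustrative only -- not the actual cloudplow source.

def suspend_remote_buggy(remote, rule):
    # The configured rule is passed in but never consulted:
    remote["resume_in_hours"] = 25  # hard-coded, ignores rule["sleep"]

def suspend_remote_fixed(remote, rule):
    # Honor the per-rule "sleep" setting from config.json:
    remote["resume_in_hours"] = rule["sleep"]
```

This shape would explain the reported symptom exactly: any "sleep" value in config.json is accepted but silently overridden by the constant.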

lordyod commented 2 years ago

Aside from a documented feature not functioning at all?

I'd like to set it to operate more frequently for shorter time periods.

saltydk commented 2 years ago

Service accounts are the only way to keep running once you get banned, so I don't see the point of the feature at all; I didn't make it.

All you achieve by resuming before the ban expires is starting an upload while you're still banned anyway, so what are you trying to gain here?

lordyod commented 2 years ago

This isn't true for Google Drive remotes. I just reset my cache.db and restarted, and cloudplow starts uploading again, which it will continue to do until it hits the rate limit. This is the behavior I'd expect out of the sleeps setting.

saltydk commented 2 years ago

Bans on Google remotes last 24 hours (I'm talking about API bans here, not cloudplow's internal suspension). So if you get 'banned' by cloudplow without actually being API banned, you need to tweak your settings.

lordyod commented 2 years ago

Ah, looking at their docs, I see they reset at midnight, and some posts indicate it may depend on which server you are connecting to. So this leaves me wondering:

saltydk commented 2 years ago

I believe the reset is a rolling window rather than at midnight, but as with anything Google-related, their docs say one thing and reality another.

I cannot speak to the reasoning behind everything in cloudplow, as I was not really involved at its creation. The rclone_sleeps feature was probably made redundant when rclone added the ability to exit on API ban a few years ago. The creator of this tool has since created https://github.com/l3uddz/crop, which is what most of us use these days. I just try to fix any bits that break when possible.

lordyod commented 2 years ago

Thanks for the info. I'll leave this open, but it would probably be simplest to fix the docs/sample config.