l3uddz / cloudplow

Automatic rclone remote uploader, with support for multiple remote/folder pairings. UnionFS Cleaner functionality: Deletion of UnionFS whiteout files and their corresponding files on rclone remotes. Automatic remote syncer: Sync between different remotes via a Scaleway server instance that is created and destroyed at every sync.
GNU General Public License v3.0

[feature] support multiple remotes (for people who don't have access to Google Workspace admin but can manually share drives) #125

Closed hereisderek closed 1 year ago

hereisderek commented 1 year ago

Describe the problem
For people who don't have access to Google Workspace admin but can manually share drives: we can manually create a few Google accounts, share drive access with all of them, and use them to bypass the daily upload limit.

You can already achieve that today, but with a lot of duplicate configs.

Describe any solutions you think might work

{
    "remotes": {
      "upload_media": {
        "hidden_remote": ["remote1:", "remote2:", "remote3:", "remote4:"],
        "rclone_command": "move",
        "rclone_excludes": [],
        "rclone_extras": {},
        "rclone_sleeps": {},
        "remove_empty_dir_depth": 2,
        "sync_remotes": "",
        "sync_folder": "",
        "upload_folder": "/mnt/local/Media",
        "upload_remotes": ["remote1", "remote2", "remote3", "remote4"],
        "upload_remote_folder": "/Media"
      }
    },
    "syncer": {},
    "uploader": {
      "upload_media": {
        "check_interval": 30,
        "exclude_open_files": true,
        "max_size_gb": 200,
        "opened_excludes": [
          "/downloads/"
        ],
        "service_account_path": "",
        "size_excludes": [
          "downloads/*"
        ]
      }
    }
}

In the above proposed JSON (subject to change as you see fit, of course), notice this changed part:

        "upload_remotes": ["remote1", "remote2", "remote3", "remote4"],
        "upload_remote_folder": "/Media"

which is essentially what was previously "upload_remote": "", except that you can specify multiple remotes and the upload is carried out against each one in sequence with the exact same configuration. If any error happens (e.g. it is unable to find the share or location), it jumps to the next remote. For each remote, it queries all the files currently on the target remote, so even if a remote does not have the same share, no file is actually lost by being uploaded to the wrong remote.

Additional context
Add any other context or screenshots about the feature request here.

saltydk commented 1 year ago

You can already add multiple remotes to cloudplow.
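
For reference, this is roughly what that looks like under the current config format, with one entry per remote; the remote names and paths below are illustrative assumptions, not taken from this thread, and the sync-related keys are omitted:

{
    "remotes": {
      "remote1": {
        "hidden_remote": "remote1:",
        "rclone_command": "move",
        "rclone_excludes": [],
        "rclone_extras": {},
        "rclone_sleeps": {},
        "remove_empty_dir_depth": 2,
        "upload_folder": "/mnt/local/Media",
        "upload_remote": "remote1:/Media"
      },
      "remote2": {
        "hidden_remote": "remote2:",
        "rclone_command": "move",
        "rclone_excludes": [],
        "rclone_extras": {},
        "rclone_sleeps": {},
        "remove_empty_dir_depth": 2,
        "upload_folder": "/mnt/local/Media",
        "upload_remote": "remote2:/Media"
      }
    },
    "syncer": {},
    "uploader": {
      "remote1": {
        "check_interval": 30,
        "exclude_open_files": true,
        "max_size_gb": 200,
        "opened_excludes": ["/downloads/"],
        "service_account_path": "",
        "size_excludes": ["downloads/*"]
      },
      "remote2": {
        "check_interval": 30,
        "exclude_open_files": true,
        "max_size_gb": 200,
        "opened_excludes": ["/downloads/"],
        "service_account_path": "",
        "size_excludes": ["downloads/*"]
      }
    }
}

Each remote gets its own entry under "remotes" and a matching entry under "uploader", which is the duplication the proposal above wants to collapse into a single list.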

hereisderek commented 1 year ago

If I understand correctly, using multiple remotes to bypass the 750 GB daily limit requires you to:

  1. Define multiple entries in "remotes", everything exactly the same except for the rclone remote (not sure what to do with the "hidden_remote"; no idea what whiteout files in UnionFS do).
  2. Create multiple "uploader"/"syncer" entries, identical except that each corresponds to a different remote.

What I really want to see is that, instead of creating all those duplicates, we use a list, with all the exclusions, rclone params, etc. shared.

P.S. Maybe I understood wrong, but I still get Error 403: User Rate Limit Exceeded even after switching to a different remote (the client_id/secret was the same though, just linked to a different Google account).

saltydk commented 1 year ago

No, if all you want to do is work around that limit, you supply it with service accounts. Anyway, I am closing this.
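
For anyone reading later, a minimal sketch of that service-account approach, assuming "service_account_path" points at a directory of Google service account key files (the path and remote name here are illustrative, not from this thread):

    "uploader": {
      "upload_media": {
        "check_interval": 30,
        "exclude_open_files": true,
        "max_size_gb": 200,
        "opened_excludes": ["/downloads/"],
        "service_account_path": "/opt/cloudplow/service_accounts/",
        "size_excludes": ["downloads/*"]
      }
    }

As I understand it, with accounts available there cloudplow can switch between them when an upload hits the daily limit, so a single remote entry is enough.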

hereisderek commented 1 year ago

The reason I had to use multiple accounts is that I don't have admin access to Workspace. I can share the drive with other people (or dummy Google accounts that I manually create), but I am not able to create the service account file to do it for me programmatically.

saltydk commented 1 year ago

Regardless, what you're asking for here is still possible with the current config options.