binhex / arch-rclone

Docker build script for Arch Linux base with rclone
GNU General Public License v3.0

Allow different remote media share name #1

Open ChasakisD opened 3 years ago

ChasakisD commented 3 years ago

First of all thank you for creating a rclone docker container.

When rclone is executed, it copies or syncs every media share listed in RCLONE_MEDIA_SHARES to a remote folder with the same name.

Use Case:

Is there a way to make the rclone media share sync to the base directory of OneDrive instead?

binhex commented 3 years ago

hmm i get what you are after, but im not sure rclone can do this currently - it copies up the entire path, including the base directory.

binhex commented 3 years ago

quick thought - the only thing you could try is to add in another bind mount with another name, for instance OneDrive, so:-

  1. create new 'path' /OneDrive that points to host path /mnt/user/OneDrive
  2. set RCLONE_MEDIA_SHARES to /OneDrive - this will then result in your cloud provider having a single folder called 'OneDrive' containing all the media you want sync'd up.
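For anyone running the container outside of an Unraid template, the same workaround could be expressed as an extra bind mount on the docker command line. A minimal sketch - the host path and share name come from the steps above, and any other flags the image needs are omitted here:

```shell
# sketch of the extra bind-mount workaround described above;
# other env vars/flags required by the image are omitted
docker run -d \
  -v /mnt/user/OneDrive:/OneDrive \
  -e RCLONE_MEDIA_SHARES=/OneDrive \
  binhex/arch-rclone
```

The container then sees the share as /OneDrive, so the remote ends up with a single top-level 'OneDrive' folder.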
binhex commented 3 years ago

i guess i COULD have another env var that allows you to strip the parent folder out from RCLONE_MEDIA_SHARES. it would have to be another env var, as im very aware that if i do this by default it could mean a mass upload for everybody currently using this image. so, something like RCLONE_STRIP_PARENT - if this is set to 'yes' then i simply remove the first directory in the media share path, so for example '/media/pictures' turns into '/pictures' on the remote end. i will have a little think about the best way to do this.
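The stripping itself could be done with plain shell parameter expansion. A sketch only - RCLONE_STRIP_PARENT is a proposed variable from the comment above, not something the image currently supports:

```shell
#!/bin/bash
# sketch of the proposed RCLONE_STRIP_PARENT behaviour;
# this env var does not exist in the image yet

strip_parent() {
    local share="$1"   # e.g. /media/pictures
    if [[ "${RCLONE_STRIP_PARENT:-no}" == "yes" ]]; then
        # drop the first directory component: /media/pictures -> /pictures
        printf '/%s\n' "${share#/*/}"
    else
        # default: keep the path unchanged, so existing setups are unaffected
        printf '%s\n' "${share}"
    fi
}

export RCLONE_STRIP_PARENT=yes
strip_parent /media/pictures   # -> /pictures
```

Defaulting to 'no' preserves today's behaviour, which avoids the mass re-upload concern mentioned above.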

SebaGnich commented 2 years ago

Hey @binhex :) Thanks for the container, great work!

I signed up for Backblaze today and I'm still encountering this issue. When do you think you can fix this? Normally rclone doesn't add the parent dir; e.g. this works for me (with a default bucket):

rclone sync /media Backblaze:Unraid/

I use your container atm just for the rclone-binary, but the automatic syncing looks great!

Serph91P commented 1 year ago

> i guess i COULD have another env var that allows you to strip the parent folder out from RCLONE_MEDIA_SHARES, it would have to be another env var as im very aware that if i do this by default it could mean a mass upload for everybody currently using this image, so something like RCLONE_STRIP_PARENT and then if this is set to 'yes' then i simply remove the first directory in the media share path, so for example, '/media/pictures' turns into '/pictures' on the remote end, i will have a little think about the best way to do this.

@binhex have the same problem, any ETA on this?

clairekardas commented 9 months ago

This is exactly the issue I am facing.

I am currently moving away from Google Drive and my first step is to copy all data from the root directory of my drive to /media/gdrive on my Unraid machine. Of course, this didn't work as the default config only allows the same share names.

What I did to solve the issue was move all my drive contents into the remote folder /media/gdrive, after which the command executed flawlessly (copying from remote 'gdrive:/media/gdrive' to local share '/media/gdrive'):

rclone copy gdrive:/media/gdrive /media/gdrive

Of course, this is only an option if you do not care about the folder structure of the source, like in my case. Would be good to have native support for different share names @binhex.

Nonetheless this is a great docker container of rclone, thank you!

sercxanto commented 8 months ago

Same issue here. I'd like to sync a remote Nextcloud folder to a local unraid share. The current implementation assumes that the remote folder has the same name as the unraid share, but in my case that doesn't work; I would need a command line like this:

rclone sync nextcloud:abc/pictures /media/my_pictures

Effectively the current implementation calls something like this:

rclone sync nextcloud:abc/pictures/mypictures /media/my_pictures

I would vote for a configuration which allows arbitrary folder structures to be synced, i.e. a remote sub-subfolder to a local sub-folder of a share with different names.

sercxanto commented 8 months ago

OK, I dug a bit deeper and now know the reason why it doesn't work in my case. For a possible solution, see below :-)

In my case I'd like to synchronize the nextcloud folder /abc/pictures to /media/my_pictures:

rclone sync nextcloud:/abc/pictures /media/my_pictures

My first idea was to use the following settings:

RCLONE_MEDIA_SHARES="/media/my_pictures"
RCLONE_REMOTE_NAME="nextcloud:abc/pictures"

However the following line in start.sh assumes that the remote folder name is the same as the local folder name. It appends the local folder name to the remote:

sync_direction="${rclone_remote_name_item}:${bucket_name}${rclone_media_shares_item} ${rclone_media_shares_item}"

This results in

rclone sync nextcloud:/abc/pictures/media/my_pictures /media/my_pictures

which obviously does not work. :-)

In general it does not work in cases where the local folder does not match the remote one.
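The effect of that start.sh line can be reproduced in isolation. A minimal sketch with illustrative values (bucket_name is assumed empty for a non-bucket remote): the remote directory always ends up mirroring the local share path.

```shell
#!/bin/bash
# minimal reproduction of the concatenation quoted above;
# values are illustrative, bucket_name assumed empty for this remote
rclone_remote_name_item="nextcloud"
bucket_name=""
rclone_media_shares_item="/media/my_pictures"

# the line from start.sh: the remote dir is forced to equal the local share path
sync_direction="${rclone_remote_name_item}:${bucket_name}${rclone_media_shares_item} ${rclone_media_shares_item}"

echo "rclone sync ${sync_direction}"
# -> rclone sync nextcloud:/media/my_pictures /media/my_pictures
```

There is no variable in that expression that could point the remote side at abc/pictures, which is why any mismatch between local and remote names fails.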

Possible solution

How about adding a new variable RCLONE_REMOTE_DIR which is split internally into rclone_remote_dir_item and initialized by default with rclone_media_shares_item if not set (to be backward compatible)?

This way one could have more flexibility regarding the directory mapping, and the line above could look like the following:

sync_direction="${rclone_remote_name_item}:${bucket_name}${rclone_remote_dir_item} ${rclone_media_shares_item}"

In my case I would use the following settings then:

RCLONE_MEDIA_SHARES="/media/my_pictures"
RCLONE_REMOTE_NAME="nextcloud"
RCLONE_REMOTE_DIR="abc/pictures"
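The backward-compatible default could be sketched roughly like this. Function and variable names follow the comment above; bucket_name handling is simplified, and this is not the actual start.sh code:

```shell
#!/bin/bash
# hypothetical sketch of the proposed RCLONE_REMOTE_DIR fallback;
# not the actual start.sh implementation

build_sync_direction() {
    local rclone_remote_name_item="$1"   # e.g. nextcloud
    local rclone_media_shares_item="$2"  # e.g. /media/my_pictures
    # fall back to the local share path when no remote dir is given,
    # so existing setups keep their current behaviour
    local rclone_remote_dir_item="${3:-${rclone_media_shares_item}}"
    local bucket_name=""                 # assumed empty for non-bucket remotes
    printf '%s\n' "${rclone_remote_name_item}:${bucket_name}${rclone_remote_dir_item} ${rclone_media_shares_item}"
}

build_sync_direction nextcloud /media/my_pictures abc/pictures
# -> nextcloud:abc/pictures /media/my_pictures
build_sync_direction nextcloud /media/my_pictures
# -> nextcloud:/media/my_pictures /media/my_pictures
```

With the fallback in place, users who never set RCLONE_REMOTE_DIR see no change, which addresses the mass-upload concern raised earlier in the thread.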

Workaround

There is also a workaround for the current situation: add a custom container path mapping so that the local name matches the remote one.

Add a new path mapping so that /abc/pictures inside the container points to the local share, and set

RCLONE_MEDIA_SHARES="/abc/pictures"
RCLONE_REMOTE_NAME="nextcloud"

This works with the current implementation; however, the need to create a new mapping is a bit awkward.

binhex commented 8 months ago

I will look into this guys, just give me a bit of time - I've got a lot of plates spinning, if you know what I mean 😁

sercxanto commented 8 months ago

@binhex It's not urgent, there is at least a workaround. And yes, I know what you mean. :-) Thank you for all the wonderful docker images BTW. :star_struck: