Closed · robertbaker closed this issue 4 years ago
Hi, I use the cache backend of rclone to do similar stuff, and it may even be possible to filter the uploaded files on the mount command.
Since this already fills my need, I won't take the time to implement MergerFS, but feel free to implement it by forking this project. The main logic is in https://github.com/sapk/docker-volume-rclone/blob/master/rclone/driver/driver.go, which uses https://github.com/docker/go-plugins-helpers to ease communication with the docker daemon.
We can keep this issue open if you have questions or need any help.
It seems that the rclone union functionality has improved since I last looked at it, so I may not need mergerfs if the built-in rclone union will hardlink now. I'm going to try making a union remote and see if it works as expected with this driver.
Please update the README with the dependencies needed to build this locally. I want to update this to the latest rclone beta, as there is a VFS download bugfix that I want; if I could build and install my own build, I could update rclone any time I wanted. I also ask you to please consider making a beta branch that uses rclone/rclone:beta. I'm not sure if the docker plugin command allows you to specify a branch, but if it does, it would be useful.
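For what it's worth, the docker plugin CLI does accept a tag on the plugin reference, so if a beta image were published under a tag (the `sapk/plugin-rclone:beta` tag below is hypothetical), it could be installed or swapped in roughly like this:

```shell
# Install a specific tag of the plugin (the :beta tag is hypothetical;
# only tags the maintainer actually publishes will exist).
docker plugin install sapk/plugin-rclone:beta

# Or upgrade an existing plugin in place. docker plugin upgrade
# requires the plugin to be disabled first.
docker plugin disable sapk/plugin-rclone:latest
docker plugin upgrade sapk/plugin-rclone:latest sapk/plugin-rclone:beta
docker plugin enable sapk/plugin-rclone:latest
```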
Here's an update on my testing:
My first attempt to use an rclone union with this plugin didn't work (v0.0.8). However, I upgraded to the new release and that works. This is likely because I was using config options that were not supported by the rclone version bundled with the previous release.
rclone.config

```ini
[remote]
[...]

[union]
type = union
upstreams = /local remote:/:nc
create_policy = ff
```
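For context on the flags above (my understanding of the union backend, worth checking against the rclone docs): the `:nc` suffix marks an upstream as "no create", so new files are never created on the remote, and `create_policy = ff` ("first found") writes new files to the first eligible upstream, i.e. /local. A stricter variant would mark the remote fully read-only:

```ini
[union]
type = union
# :ro makes the remote fully read-only, instead of just "no create" (:nc)
upstreams = /local remote:/:ro
create_policy = ff
```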
docker-compose.yml

```yaml
myapp:
  volumes:
    - "union:/data"

volumes:
  union:
    driver: sapk/plugin-rclone:latest
    driver_opts:
      config: "${RCLONE_CONFIG_BASE64}"
      args: "--drive-skip-gdocs --timeout=1h --dir-cache-time=300h --allow-other"
      remote: "union:/"
```
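Equivalently (a sketch using the same driver_opts as above), the volume can be created from the CLI without compose, since driver_opts map to `-o` flags:

```shell
# Create the union volume by hand.
# RCLONE_CONFIG_BASE64 is assumed to hold the base64-encoded rclone config.
docker volume create --driver sapk/plugin-rclone:latest \
  -o config="$RCLONE_CONFIG_BASE64" \
  -o args="--drive-skip-gdocs --timeout=1h --dir-cache-time=300h --allow-other" \
  -o remote="union:/" \
  union

# Quick smoke test from a throwaway container:
docker run --rm -v union:/data alpine ls /data
```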
I have confirmed that, inside a container, the union is mounted at /data and the remote is mounted. I can also confirm that when I create a file inside /data it gets created but doesn't get uploaded to the remote (works as I expect).
However, a file should have been created in /local in the container, and I'm not sure where the files I'm creating are being stored. Inside the container there was no /local folder, so I created it and ran `touch /local/test2` and `ls /data`. The file didn't show up in the union (at /data). I then ran `touch /data/test3` and `ls /local`, and the file was created in the union but not under /local.
Which leads me to the question, where are the files stored inside the container?
I checked the host system and created a /local directory there just to verify it wasn't mapping a local folder on the host; it's not (good). The files should be in /local (per my rclone union config), but that is obviously not mapped into the container, which actually makes sense.
I looked in a few places both inside the container and on the host.
On the host I found it here (the ID is likely to be different): /var/lib/docker/plugins/1f637ed76ef430dc53867ec7960a7864354dfb38dfdde997d4ad8993bd6c9450/rootfs/local/test2
I tried to create another volume, mount it at /local, and see if files created under the union end up in that volume; that didn't work. Which makes sense, because this driver runs rclone in a container. The mounts are created inside the rclone/rclone container and then exposed from it, so that container is where my /local must be stored.
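One way to confirm where the plugin keeps its data (an untested sketch): the directory name under /var/lib/docker/plugins is the plugin's full ID, which `docker plugin inspect` can print:

```shell
# Resolve the plugin's full ID, then look inside its rootfs on the host.
ID=$(docker plugin inspect --format '{{.Id}}' sapk/plugin-rclone:latest)
sudo ls /var/lib/docker/plugins/"$ID"/rootfs/local
```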
The reason I need the /local folder is that I use a script which runs `rclone move` to move local files to the remote. I use a union (previously mergerfs) to facilitate this. The benefit is that the mount itself isn't uploading files (which isn't recommended due to some limitations); it allows more control over when files upload, lets you use service accounts, etc.
I probably could get away with not needing the union and just relying on the mount to upload, but then I lose some control and the ability to use service accounts to upload. I also still need the ability to exclude a directory so it doesn't upload files that aren't processed or ready yet.
I could perhaps work around this somehow and point the upload script at the union, but then I would need to figure out how to filter out all of the remote files, and so far I haven't found a way to do that. My script basically runs `rclone move /local remote:/`; if I ran `rclone move /remote remote:/` it would most likely download each remote file and then re-upload it, causing duplicates or some other weirdness.
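The upload script described above boils down to something like the following sketch. The excluded staging directory and the `--min-age` window are my own illustrative additions (standard rclone filter flags), not part of the original script:

```shell
#!/bin/sh
# Move files from the local branch up to the remote.
# --exclude and --min-age are illustrative: they keep not-yet-ready
# files (a staging directory, or files still being written) local.
rclone move /local remote:/ \
  --exclude "/staging/**" \
  --min-age 15m \
  --delete-empty-src-dirs
```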
I'll keep at it.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
~I apologize that this isn't related to this rclone volume driver, but I couldn't find another way to make this request.~
~MergerFS is commonly used with rclone; take a look at trapexit/mergerfs for more details on what it does. It basically creates a union of other directories: a union FUSE mount.~
~This is a needed driver as it would remove the need to have a container for a mergerfs mount.~
~A common use-case is to have a merged directory that merges a local folder (a cache) and an rclone remote together, then you bind your apps to the merged directory and things go into one of the local branches based on the rules you set. Typically, a script is used to have rclone upload the data.~
~I may take a stab at forking this code and seeing if I can adapt it for mergerfs, but I have no experience with volume drivers, so I thought I would ask first whether you had any plans to do this. I'm not even sure it could work; there are probably technical limitations. You would basically need a way to have the mergerfs volume depend on / use another volume as a mountpoint (a branch, as mergerfs calls them).~
RClone has a union backend, but using it might be tricky with this plugin.