trapexit / mergerfs

a featureful union filesystem

Use Case Question #458

Closed · Animosity022 closed this 6 years ago

Animosity022 commented 6 years ago

I've mainly been trying this out for my Plex setup along with my GDrive, as I was using unionfs before. I've gotten the majority of it set up like I'd want, but I can't seem to figure out this use case.

My goal is to always write to my local disk first and never write to my cloud drive. Second, I only want to remove / delete from my cloud drive, so I want it to stay read/write.

So:

Local disk: /Movies
Cloud: /Movies/Cool Movie/File.lowquality.mkv

My workflow would eventually grab a higher-quality version, but it doesn't write to the first disk I have listed; it tries to write to the cloud drive instead.

If I make the cloud read-only, it won't first create the directory on the local disk to copy the file into, so it fails.

So my expected workflow would be: write the new file to my local disk (and delete the old version from the cloud drive).

At a later date, I upload to my cloud drive and remove the local directories.

Is there a good way to do that workflow? I hated having the hidden directories from unionfs to clean up.

trapexit commented 6 years ago

Are you saying files will be under the same relative path with the same names? You could tweak the function policies to get something that works like what you described, but it wouldn't be guaranteed.

If you just use the lfs (least free space) policy for create, and the cloud reports more free space than your local drives, then local will always be used and deletes will happen wherever that file exists. If the files have different names then there's no problem. If they are the same.... you're not going to get that to work exactly as you want.
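For illustration, a minimal sketch of what that could look like, with placeholder branch paths (/local/media and /cloud/media are assumptions, not from this thread):

# Hedged example: lfs (least free space) picks the writable branch with the
# least free space. If the cloud branch reports more free space than the
# local disk, new files will always land on the local branch.
/usr/bin/mergerfs -o defaults,allow_other,use_ino,category.create=lfs /local/media:/cloud/media /merged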

Animosity022 commented 6 years ago

The relative path would be the same.

The actual file names would be different so something like:

xxx-720p.mkv would be replaced by xxx-1080p.mkv

So copy the 1080p locally and then remove the 720p file from the cloud.

The issue I hit is that the local drive doesn't have the directory, so it doesn't seem to want to create the directory and then move the file locally.

trapexit commented 6 years ago

Read about path preservation in the docs.
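As a hedged aside for context: mergerfs's default create policy, epmfs, is path preserving, i.e. it only considers branches where the relative path already exists, which is why the directory never gets created on the local disk. A sketch of switching to a non-path-preserving policy such as mfs, with the same placeholder paths as above:

# Hedged example: mfs (most free space) is not path preserving, so mergerfs
# will clone missing parent directories onto whichever branch it chooses.
/usr/bin/mergerfs -o defaults,allow_other,use_ino,category.create=mfs /local/media:/cloud/media /merged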

Animosity022 commented 6 years ago

So if I use:

ignorepponrename=true

That gets me much closer: with a read-only (ro) cloud mount, it creates the file on the first disk, which is my local drive.
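For reference, a sketch of passing that option at mount time (placeholder paths again; the cloud branch is assumed to be mounted read-only at the OS level in this scenario):

# Hedged example: ignorepponrename=true makes rename (and link) ignore path
# preservation, so a rename isn't rejected just because the destination path
# doesn't already exist on the file's branch.
/usr/bin/mergerfs -o defaults,allow_other,use_ino,ignorepponrename=true /local/media:/cloud/media /merged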

If I move the testing to a read/write (rw) cloud mount, it seems to try to write to the cloud disk rather than the first one, since the directory is already there.

Any magic to force it to always write to the first disk?

trapexit commented 6 years ago

No. Some policies are path preserving, some are not. You'd want one that's not. ignorepponrename does what it says... it ignores path preservation for a path-preserving policy's renames. It's a special case.

Please read about policies. It explains exactly what you're asking. Set the create policy to ff and it picks the first drive found.

Given the order of the drives, as defined at mount time or configured at runtime, act on the first one found. For create category functions it will exclude readonly drives and those with free space less than minfreespace (unless there is no other option).
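As a usage aside on the "configured at runtime" part: policies can also be changed on a live mount through the .mergerfs control file via extended attributes, roughly like this (the mount point path is a placeholder):

# Hedged example: runtime settings are exposed as xattrs on the .mergerfs
# pseudo-file at the root of the mergerfs mount.
setfattr -n user.mergerfs.category.create -v ff /merged/.mergerfs
getfattr -n user.mergerfs.category.create /merged/.mergerfs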

Animosity022 commented 6 years ago

Thanks!

I ended up going with this:

/usr/bin/mergerfs -o defaults,allow_other,use_ino,category.action=all,category.create=ff,category.search=all /local/tv:/media/TV /Test

Which did exactly what I wanted it to do.
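For anyone wanting the same setup to persist across reboots, an fstab entry mirroring the command above would look roughly like this (a sketch, not taken from the thread):

# /etc/fstab -- fuse.mergerfs entry equivalent to the mount command above
/local/tv:/media/TV  /Test  fuse.mergerfs  defaults,allow_other,use_ino,category.action=all,category.create=ff,category.search=all  0 0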

trapexit commented 6 years ago

search=all doesn't really make much sense; as the docs mention, it's the same as ff. It does no harm, it's just unnecessary.