trapexit / mergerfs

a featureful union filesystem

Rsyncing existing 9TB mergerfs pool to new NAS with clean identical mergerfs pool but not balancing. #1169

Closed antonical closed 1 year ago

antonical commented 1 year ago

I am in the process of migrating a NAS to a new NAS. I have essentially cloned the source OS and stood up the new one on a different IP, so I have exactly the same setup on both: directory structure, permissions, etc. On the source NAS the mergerfs pool is really well balanced.

I am rsyncing from one mergerfs pool to the other, but the destination mergerfs pool does not seem to be balancing.

Source:

/dev/sdd1                           1.8T  1.3T  562G  70% /media/WCC300671740
/dev/sdc1                           3.6T  2.8T  818G  78% /media/WCC7K7RU6E2D
/dev/sdb1                           3.6T  2.8T  819G  78% /media/N8GYGV0Y
N8GYGV0Y:WCC7K7RU6E2D:WCC300671740  9.0T  6.9T  2.2T  77% /media/TWHGNAS

Work in progress rsync Destination:

/dev/sdc1                                                                        2.7T  504G  2.2T  19% /media/Z2982BX10000C3471QYZ
/dev/sdd1                                                                        2.7T  6.0M  2.7T   1% /media/Z1Z6MW960000W515S78H
/dev/sde1                                                                        2.7T   51G  2.7T   2% /media/Z291J5M900009215C789
/dev/sdf1                                                                        2.7T  118M  2.7T   1% /media/Z291J59700009219394J
2982BX10000C3471QYZ:1Z6MW960000W515S78H:291J5M900009215C789:291J59700009219394J   11T  555G   11T   6% /media/NEWTWHGNAS

It just seems to be syncing to a single drive rather than across the drives. Do I need to change my mount config somehow to sort this? I am <1 TB through a 6.9 TB rsync.

Any help is appreciated.

Cheers Tony

trapexit commented 1 year ago

I need to know what your settings are to comment further, but my guess is you've not set your create policies to what you want.

trapexit commented 1 year ago

https://github.com/trapexit/mergerfs#why-are-all-my-files-ending-up-on-1-filesystem
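For reference, a quick way to check which create policy a running pool is using is to query the mergerfs runtime control file with extended attributes (this assumes a reasonably recent mergerfs; the mount point below is the destination pool from this thread):

getfattr -n user.mergerfs.category.create /media/NEWTWHGNAS/.mergerfs

If it reports epmfs (the default) rather than something like mfs, rand or pfrd, new files will keep landing on whichever branch already has the existing parent path, which matches the behaviour described above.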

antonical commented 1 year ago

Thanks for responding.

So are you saying I need to add mfs to the mount command above on the destination NAS, i.e. from:

/media/N8GYGV0Y/:/media/WCC7K7RU6E2D/:/media/WCC300671740 /media/TWHGNAS fuse.mergerfs defaults,allow_other,use_ino,hard_remove,minfreespace=100M 0 0

to:

/media/Z2982BX10000C3471QYZ:/media/Z1Z6MW960000W515S78H:/media/Z291J5M900009215C789:/media/Z291J59700009219394J /media/NEWTWHGNAS fuse.mergerfs defaults,allow_other,use_ino,hard_remove,minfreespace=100M,category.create=mfs 0 0

Are there any other options you think might be sensible when starting from scratch on a new system?

Thank you for your help. I had a fear that, since the new drives are 3TB and the old ones are 4TB, using the default epmfs policy could run out of space on the destination.

Cheers Tony

trapexit commented 1 year ago

You need to select a policy you want, yes. If you want files distributed across the branches then you need to select one that will do so: mfs, lus, rand, pfrd, etc.

My basic suggestions are in the docs: https://github.com/trapexit/mergerfs#basic-setup

If there is something in particular you're looking for I can comment further, but I need to know what you want out of the setup.
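For context, the basic setup in that link boils down to an fstab entry of roughly this shape; the branch paths below are placeholders and the exact option list in the README may differ by version, so treat it as a sketch rather than the canonical line:

/mnt/disk1:/mnt/disk2:/mnt/disk3 /media/pool fuse.mergerfs cache.files=off,dropcacheonclose=true,category.create=mfs,minfreespace=10G 0 0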

antonical commented 1 year ago

Ideally, as this is a completely clean set of drives (well, apart from the 600GB already copied) and there are 4 identical drives this time, I would want to distribute the source across all the drives and have them balanced once the rsync finishes.

We have a complete remote copy of the entire NAS directory structure and files elsewhere and they are synced overnight every night.

The new NAS has faster 7.2K SAS drives attached via an LSI 9211-8i, four of them rather than three, and all running at 6 Gb/s rather than 3 Gb/s. I would expect that distributing across the 4 drives will give more performance, as multiple clients will potentially be reading from different drives.

The fstab mounts we currently use are above. Before I kill the rsync, would you add any other mount options?

Would you just delete what I have already copied and start again? Change the mount and then restart the rsync?
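For what it's worth, re-running rsync in archive mode skips files that already exist on the destination with matching size and modification time, so a restart would not have to re-transfer what has already been copied. A sketch of such an invocation, with oldnas standing in for whatever hostname the source NAS actually uses (not necessarily the exact command used in this migration):

rsync -aHAX --info=progress2 oldnas:/media/TWHGNAS/ /media/NEWTWHGNAS/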

Thank you very much for the support. I will await your guidance.

Cheers Tony

antonical commented 1 year ago

Hey, thanks for the help. I didn't delete what had already been copied but killed the rsync and unmounted the pool. Then amended the mount to include category.create=mfs and then remounted the pool and restarted the rsync.

Working brilliantly, and it is bringing the rest of the drives up in sync. I guess that will continue until they catch up to the drive that had been syncing on its own as a single target, and then I expect it will bring them all up together.

Cheers Tony

trapexit commented 1 year ago

Yeah, mfs or rand or pfrd is what I'd use in this case. You could have changed the value at runtime without needing to stop rsync, but it's really not a big deal.
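For anyone reading later, the runtime change mentioned here goes through the mergerfs control file via setfattr, assuming a recent enough mergerfs; the mount point below is this thread's destination pool:

setfattr -n user.mergerfs.category.create -v mfs /media/NEWTWHGNAS/.mergerfs

The new policy then applies to subsequent file creates without remounting or interrupting the rsync.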

would you add any other mount options?

I really can't comment without additional, very explicit knowledge of the use case. For example: caching can get screwed up if you are changing things out of band.

antonical commented 1 year ago

No worries, I'll stick with this for now and see how it goes but it seems to be working.

Cheers Tony