Closed DominicMe closed 4 years ago
I was under the impression that, even with a path-preserving policy, when the drive with the existing path is full a new path would be created on a free disk. Is this not how it should behave?
No. If that were the case the path wouldn't be preserved.
As it mentions in the docs:
A path preserving policy will only consider drives where the relative path being accessed already exists.
and
If all branches are filtered an error will be returned. Typically EROFS (read-only filesystem) or ENOSPC (no space left on device) depending on the reasons.
If you want it to create new paths and don't care which drive it'll be on then why are you using path preservation?
I see. I do in fact care, but obviously if the drive is full then there is no choice but to use another drive. I just want mergerfs to use existing paths when they exist and the disk is not full. Having mergerfs write to a random disk leads to copies of directories all over the place and files/metadata split up for no good reason.
Having mergerfs write to a random disk leads to having copies of directories all over the place and files/metadata split up for no good reason.
1) Why is that a problem? Inode count shouldn't generally be a problem. 2) There are good reasons: performance and spreading of risk. If you colocate files and the access patterns are such that they get hit at the same time, performance will be lower than if you'd liberally spread them around. And if data is less spread out, the risk of data loss from any one drive becomes skewed.
I just want mergerfs to use existing paths when they exist and the disk is not full.
I don't really understand what you mean. With any churn of data (files deleted and added) over time you will end up with directories across multiple branches. If you clone some deep subdirectory then delete data from the original drive you're going to be mixing data across those two branches. Over time it'd look a lot like no path preservation.
How exactly would you have it behave when it finds that there are no available branches given the policy constraints? What's the logic for determining what branch to choose?
It's a problem if you have to do anything manually, especially during recovery.
Valid reasons, but also some serious downsides. Performance can improve in some scenarios, but what if drives are in sleep mode? How many minutes will it take to spin up 10 or more drives one by one, looking for all the files of a directory spread across all drives? As far as spreading the risk goes, I would argue risk is much higher with liberal spreading. If you have files in one dir then it's likely they are related. For example, if you have a game with its files spread across 10 drives, you are essentially multiplying the risk of the game becoming unusable by 10. It's not so much the spreading I have an issue with, it's the randomness of it. It's fine to spread a movie dir, but what purpose is there in separating, for example, 2 parts of a movie, or a movie and its metadata, across disks? I rarely delete stuff, so most files would remain where they are, and when I do delete it will often be a whole directory. Probably the biggest issue is that the drives would be spinning up a lot more than they need to: a big power, spin-up performance, drive wear and even noise issue.
How exactly would you have it behave when it finds that there are no available branches given the policy constraints? What's the logic for determining what branch to choose?
I would have it fall back on a non-path-preserving policy, ideally with the ability to choose it. In my use case I would pick eplfs, and if there is no space it would fall back to lfs or just ff.
what if drives are in sleep mode?
You seem to misunderstand how things work. mergerfs does nothing to limit hitting drives, nor is it practical. If it is asked for a directory or file it, by nature of the system, scans all drives. The only way that won't result in a drive waking up is if the kernel has happened to cache all that data. I know of 1, yes 1, person who accomplished building a system where drives won't spin up under most situations, and that required a very large amount of ZFS ARC caching. If you `ls` a directory, in most situations it will have to spin up all the drives.
As far as spreading the risk I would argue risk is much higher with spreading liberally.
How? If you have 2 drives and 100% of your data is on Drive0 and 0% on Drive1 and you lose Drive0... you lose 100% of your data. If you lose Drive1: 0%. If it were split 50/50 then you'd lose 50% either way. The risk in the first situation is skewed. (Ignoring the fact that other metrics could be used to analyze risk.)
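To put numbers on that asymmetry, here is a toy calculation (hypothetical two-drive layouts, nothing mergerfs-specific): the average loss from a single random drive failure is the same either way, but the skewed layout is all-or-nothing.

```python
# Fraction of data lost when each drive fails, for two layouts.
def loss_fractions(layout):
    total = sum(layout)
    return [share / total for share in layout]

skewed = loss_fractions([100, 0])   # all data on Drive0
spread = loss_fractions([50, 50])   # split evenly

print(skewed)  # [1.0, 0.0] -> lose everything or nothing
print(spread)  # [0.5, 0.5] -> always lose half
```

Either way the mean loss is 50%; the difference is entirely in the variance, which is the "skew" being described.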
rarely delete
You don't use the pool for torrents or tmp directories?
It's fine to spread a movie dir, but what purpose is there in separating, for example, 2 parts of a movie, or a movie and its metadata, across disks?
But that will happen with regular policies as a fallback. Not as often but it will.
What's wrong with changing mkdir policy? That's what most people do.
Having configurable callbacks means having to add a dozen new config functions, or oddly adding it to only a subset of them. Policies don't have settings. I'd need to create such a thing and allow settings for every function's policies, or multiply the number of policies by the number of settings: epmfs-lfs, epmfs-lus... etc.
You seem to misunderstand how things work. mergerfs does nothing to limit hitting drives, nor is it practical. If it is asked for a directory or file it, by nature of the system, scans all drives.
Are you saying that if a user is working out of a directory that is fully contained within one drive, that mergerfs will keep scanning all drives over and over even though only one drive contains the active part of the tree?
The risk in the first situation is skewed.
The risk is up to the user and what their use case prefers. Some folks don't like to risk the loss of a portion of every single one of their directories' contents. Some folks prefer to have their drives divided by the type of content they contain, so they lose their 'music' drive or their 'video' drive, instead of swiss cheesing the contents of both directories. This also applies to folks using SnapRAID, in that if a drive goes down and they are rebuilding, they don't want to have to roll the dice on which files are missing or not. It's better to just know that was the drive containing x/y/z and that it will be unavailable while the rebuild is taking place.
How exactly would you have it behave when it finds that there are no available branches given the policy constraints?
Sounds like another use case for my 'walk back the path' suggestion, including the ability to specify which profile is used when running up against moveonenospc and/or minfreespace.
You seem to misunderstand how things work. mergerfs does nothing to limit hitting drives, nor is it practical. If it is asked for a directory or file it, by nature of the system, scans all drives. The only way that won't result in a drive waking up is if the kernel has happened to cache all that data. I know of 1, yes 1, person who accomplished building a system where drives won't spin up under most situations, and that required a very large amount of ZFS ARC caching. If you `ls` a directory, in most situations it will have to spin up all the drives.
That's kind of the point I am trying to make: if all files in a dir are on, let's say, the first drive, only that 1 drive will spin up, simply because there is no reason for the system/mergerfs to continue looking at other drives as all files are found/listed. For example, if your movies are all on 1 disk, Kodi will only spin up that 1 drive when browsing metadata or watching a movie. I also plan on having a dir on an SSD with symlinks and certain metadata like folder.jpg and .nfo (moved there by script), so no drives would need to spin up unless you request a large file like a movie. You could ls to your heart's content then :)
How? If you have 2 drives and 100% of your data is on Drive0 and 0% on Drive1 and you lose 0... you lose 100% of your data. If you lose 1. 0%. If it were split 50/50 then you'd lose 50%. The risk in the first situation is skewed. (ignoring the fact that other metrics could be used to analyze risk.)
You are assuming there is much free space. My mergerfs pool is 93% full, so there is not even 1 empty drive. Not much point in having empty drives unless it's a replacement drive. If I did have lots of free space I would increase minfreespace to better spread the data.
The idea of letting drives sleep is partly to reduce wear. I mean, you almost never need to access more than 1 disk at once, assuming files are not spread too much. In normal use I estimate I would only need 1 disk of 10 to be on maybe 16 hours a day, if I can eliminate directory-listing and metadata-caused spin-up.
Are you saying that if a user is working out of a directory that is fully contained within one drive, that mergerfs will keep scanning all drives over and over even though only one drive contains the active part of the tree?
mergerfs doesn't cache results. It's a proxy. The kernel already does caching. And across a very large tree that data would add up. Even if it did cache are you arguing it'd scan the entire filesystem at startup to populate that cache? And how would it expire the cache? Would it? If it did would it be based on time? Meaning it'd have to do scans anyway when the timeout expires? What about dealing with something like plex that is reading paths, files, file metadata, etc? What about managing software that keeps files open like torrent apps or similar? This isn't easy nor is there one solution. There are some very different access patterns out there.
The risk is up to the user and what their use case prefers.
Most people don't consider such things so it is relevant to bring up.
It's better to just know that was the drive containing x/y/z and that it will be unavailable while the rebuild is taking place.
But if you don't follow path preservation that's exactly what happens. After 2 years you won't know what's where without looking. It's happened to many of my users.
Sounds like another use case for my 'walk back the path' suggestion, including the ability to specify which profile is used when running up against moveonenospc and/or minfreespace.
Walk back the path is a difference of degrees in terms of entropy increase. If you don't limit the walk back then it's almost the same as not having it. If you do limit then it moves the problem. You will eventually end up with a ENOSPC and need to seed another drive to add space. Even if you made the depth configurable per directory somehow you'd run into that situation.
minfreespace is intended to be a filter. What you're suggesting is not the same thing. Policies don't act due to minfreespace; they ignore branches that don't meet it. moveonenospc is different altogether. The proposal was simply to offer a policy rather than hardcoding an mfs-like policy. The reason it doesn't have it is because it was never needed. The bulk of users don't hit it (and the feature existed before policies anyway). It can have serious runtime impacts, and not everyone wants that silent move (if it's even possible; depending on the setup it's not).
rarely delete
You don't use the pool for torrents or tmp directories?
No, all downloads go onto an SSD to prevent spin-up and get moved to the pool every 12 hours or so. My mergerfs pool is basically an archive; I really only delete when upgrading.
It's fine to spread a movie dir, but what purpose is there in separating, for example, 2 parts of a movie, or a movie and its metadata, across disks?
But that will happen with regular policies as a fallback. Not as often but it will.
I am not looking for perfection, just to reduce spin-up to the minimum reasonably possible. A script can also move files periodically to correct this, something resembling mergerfs.consolidate.
What's wrong with changing mkdir policy? That's what most people do.
Having configurable callbacks means having to add a dozen new config functions, or oddly adding it to only a subset of them. Policies don't have settings. I'd need to create such a thing and allow settings for every function's policies, or multiply the number of policies by the number of settings: epmfs-lfs, epmfs-lus... etc.
Not sure I understand. Can you elaborate on how I would use the mkdir policy to achieve directory consolidation? By mkdir policy do you mean the create policy?
That's kind of the point I am trying to make, if all files in a dir are on lets say the first drive only that 1 drive will spin up simply because there is no reason for the system/mergerfs to continue looking at other drives as all files are found/listed. For example if your movies are all on 1 disk kodi will only spin up that 1 drive when browsing metadata or watching movie.
How does it know that? Of course there are reasons. It doesn't know what it doesn't know. It doesn't know what isn't there.
You are assuming there is much free space. My mergerfs pool is 93% full, so there is not even 1 empty drive. Not much point in having empty drives unless it's a replacement drive.
I'm not assuming anything. I have many, many different users' use cases to account for. My "assumptions" have to include the set of all known user use cases. I don't know your use case. I can't speak to it. Without you going into great detail on your setup and usage patterns I have to assume nothing in particular.
The idea of letting drives sleep is partly to reduce wear.
The data around wear is mixed. There have been reports that it in fact greatly increases wear and lowers life span.
I mean, you almost never need to access more than 1 disk at once, assuming files are not spread too much. In normal use I estimate I would only need 1 disk of 10 to be on maybe 16 hours a day, if I can eliminate directory-listing and metadata-caused spin-up.
On your system maybe. I have multiple TB of files being accessed regularly. Over time the likelihood I can manage what drives they are on so as to limit spinup approaches zero quickly.
I am not looking for perfection, just to reduce spin-up to the minimum reasonably possible. A script can also move files periodically to correct this, something resembling mergerfs.consolidate.
If you're only hitting your system on occasion then you shouldn't be getting spinup. mergerfs proxies the requests as they come in. No requests... no proxying.
Not sure I understand. Can you elaborate on how I would use the mkdir policy to achieve directory consolidation? By mkdir policy do you mean the create policy?
No. I mean the mkdir policy. It's all in the docs and suggested in the FAQ on this very topic.
Set mkdir=rand and create=epmfs or whatever. No file is created without first a mkdir. You have control over both.
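For illustration only (a hypothetical mount line: `func.mkdir`, `category.create`, and `minfreespace` are the documented mergerfs option names, but the paths and values here are made up), that combination might look like:

```shell
# New directories land on a random eligible branch (func.mkdir=rand),
# spreading directory trees across disks, while files then stick to
# branches where their path already exists, preferring the one with
# the most free space (category.create=epmfs).
mergerfs -o minfreespace=50G,func.mkdir=rand,category.create=epmfs \
    /mnt/disk0:/mnt/disk1:/mnt/disk2 /mnt/pool
```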
I also plan on having a dir on an SSD with symlinks and certain metadata like folder.jpg and .nfo (moved there by script), so no drives would need to spin up unless you request a large file like a movie. You could ls to your heart's content then :)
Then you've already solved the problem (except for any software that follows symlinks). Not sure I understand what you're asking for, then.
What are you proposing for mergerfs to do? Assume nothing changes on the drives and then cache every single file's attr data and the path data for every directory? For every directory and file keep a list of branches it exists on? How does that get populated? Updated? Etc? This would be a massive change. Have you tried using the caching already provided by fuse for all this stuff? There are a few.
That's kind of the point I am trying to make: if all files in a dir are on, let's say, the first drive, only that 1 drive will spin up, simply because there is no reason for the system/mergerfs to continue looking at other drives as all files are found/listed. For example, if your movies are all on 1 disk, Kodi will only spin up that 1 drive when browsing metadata or watching a movie.
How does it know that? Of course there are reasons. It doesn't know what it doesn't know. It doesn't know what isn't there.
That's true, though my symlink script would eliminate the need to spin up drives for ls, so only file access would cause spin-up. Even as it is now, if I for example run touch /mnt/pool/file-x and file-x is on disk0, only that 1 disk will spin up.
You are assuming there is much free space. My mergerfs pool is 93% full, so there is not even 1 empty drive. Not much point in having empty drives unless it's a replacement drive.
I'm not assuming anything. I have many, many different users' use cases to account for. My "assumptions" have to include the set of all known user use cases. I don't know your use case. I can't speak to it. Without you going into great detail on your setup and usage patterns I have to assume nothing in particular.
The point I was making is that it is not higher risk when all drives are almost full anyways. I am also not suggesting removal or forcing of any options/features, just extra options.
The idea of letting drives sleep is partly to reduce wear.
The data around wear is mixed. There have been reports that it in fact greatly increases wear and lowers life span.
True, but I think that data is more from a datacenter use case. If a drive is off for 2 weeks, then spins up for 2 hours and goes back to sleep, it's got to be better than it spinning for the full 2 weeks. Then there's power waste, noise and vibrations that could affect drives close by.
I mean, you almost never need to access more than 1 disk at once, assuming files are not spread too much. In normal use I estimate I would only need 1 disk of 10 to be on maybe 16 hours a day, if I can eliminate directory-listing and metadata-caused spin-up.
On your system maybe. I have multiple TB of files being accessed regularly. Over time the likelihood I can manage what drives they are on so as to limit spinup approaches zero quickly.
I get your use case; I don't have the upload for that :) If file access is frequent then there is no reason to try consolidating files, drives will be spinning anyway...
That's true, though my symlink script would eliminate the need to spin up drives for ls, so only file access would cause spin-up.
Depending on your ls options. If you use GNU ls and run `ls -l` it will lstat the target.
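The stat-vs-lstat distinction behind that remark can be shown with a generic Python illustration (nothing mergerfs-specific; the filenames are invented): lstat reports on the link itself, while a following stat touches the target, which is what can wake the drive the target lives on.

```python
import os
import stat
import tempfile

# Make a real file and a symlink to it in a scratch directory.
d = tempfile.mkdtemp()
target = os.path.join(d, "movie.mkv")
link = os.path.join(d, "movie-link.mkv")
open(target, "w").close()
os.symlink(target, link)

print(stat.S_ISLNK(os.lstat(link).st_mode))  # True: lstat sees the link itself
print(stat.S_ISREG(os.stat(link).st_mode))   # True: stat follows to the target
```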
The point I was making is that it is not higher risk when all drives are almost full anyways. I am also not suggesting removal or forcing of any options/features, just extra options.
Extra options means extra features. And the way in which the feature and config is managed is important. And some extra features may impact existing behavior or whatnot. I need very specific and explicit descriptions of the features suggested.
I also plan on having a dir on an SSD with symlinks and certain metadata like folder.jpg and .nfo (moved there by script), so no drives would need to spin up unless you request a large file like a movie. You could ls to your heart's content then :)
Then you've already solved the problem (except for any software that follows symlinks). Not sure I understand what then you're asking for.
Partly solved. It would still cause unnecessary spin-up if, for example, you have 1 episode on disk 1 and the next on disk 9. Worse still if you have an image album and every slide would require a new disk to spin up.
What are you proposing for mergerfs to do? Assume nothing changes on the drives and then cache every single file's attr data and the path data for every directory? For every directory and file keep a list of branches it exists on? How does that get populated? Updated? Etc? This would be a massive change. Have you tried using the caching already provided by fuse for all this stuff? There are a few.
I am not asking mergerfs to do anything related to caching/symlinking I mentioned, that's a job for an external script.
The only thing I would like mergerfs to do is provide some way to consolidate directories; not after the fact, but as the files/dirs are first being written. I want directories to remain on as few disks as the pool allows. So if I have 10x 1TB disks in a mergerfs pool and 10 dirs containing 1TB worth of files, they should be on 1 drive each and not spread across duplicated directories on many drives. As soon as 1 dir gets too large to fit on 1 disk it spills over to disk 2, etc. This is assuming nothing is moved after the initial write (that's an issue for the script to solve).
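The behavior being requested could be sketched as a create policy with a fallback; this is a minimal illustration only (invented field names, not mergerfs code): prefer branches where the directory already exists, and only when none of those has room spill to the least-free branch that does, so the spillover stays consolidated too.

```python
def pick_branch(branches, rel_dir, minfreespace):
    """Choose a branch for a new file in rel_dir.

    Branches where rel_dir already exists are preferred (path
    preservation); among candidates with enough room, the one with
    the least free space wins, so a directory fills one disk before
    spilling over. If no existing-path branch has room, fall back to
    any branch with room, again least-free-first.
    """
    with_room = [b for b in branches if b["free"] >= minfreespace]
    existing = [b for b in with_room if rel_dir in b["dirs"]]
    candidates = existing or with_room  # fallback when the path's disks are full
    if not candidates:
        raise OSError("ENOSPC: no branch has enough free space")
    return min(candidates, key=lambda b: b["free"])

branches = [
    {"name": "disk0", "free": 10,  "dirs": {"movies"}},  # holds the path, but full
    {"name": "disk1", "free": 300, "dirs": set()},
    {"name": "disk2", "free": 900, "dirs": set()},
]
print(pick_branch(branches, "movies", minfreespace=50)["name"])  # disk1
```

Here "movies" lives only on disk0, which is below minfreespace, so the write spills to disk1 rather than the emptier disk2: one overflow drive at a time, which is the consolidation being asked for.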
Walk back the path is a difference of degrees in terms of entropy increase. If you don't limit the walk back then it's almost the same as not having it. If you do limit then it moves the problem. You will eventually end up with a ENOSPC and need to seed another drive to add space. Even if you made the depth configurable per directory somehow you'd run into that situation.
Yup, and I would be limiting that entropy increase to the overflow drive. As it is now, the overflow always goes to mfs, so instead of limiting to one overflow drive as a catch all, I'm getting random stuff spread across the remaining empty drives, evenly filling them, and missing the point altogether. That's why I proposed walking back the path in the first place, as it would give the user the ability to add drives and then pre-create their 'videos', 'music', etc folders where they wanted them, and the walk back would then naturally steer the overflow to the appropriate drives (keeping entropy under control).
Set mkdir=rand
No. I mean mkdir policy.
That doesn't fix the issue for his use case (or mine). All it does is put stuff all over the place, which is exactly what he and I (and I'd imagine other EP profile users) are trying to avoid.
I need very specific and explicit descriptions of the features suggested.
I gave you exactly that in my post, and it appears that solution would solve his problems as well, but you shot me down for not correctly explaining my problem even though I was presenting you with a solution that would prevent posts like this one here from even happening in the first place. As more and more folks with our use case adopt MergerFS, I suspect you will see more of these types of posts.
The root issue is that there is no middle ground between EP and non-EP. You assume everyone will be fine with mfs/lfs/etc, but that's just not the case for large archives / sleeping drives / cases where folks need some better control over what goes where, or at least some consolidation / grouping of files on drives.
minfreespace is intended to be a filter. What you're suggesting is not the same thing. Policies don't act due to minfreespace.
Yes, but they could, and if they did (if configured to walk back), it would leave some space on the drive while creating the path on another drive, which would then allow the EP profile to continue normally, filling the other drive and leaving some free space on the full one. (The use case for this is that if using SnapRAID with equal-size drives, you might need to leave a few GB on each data drive to compensate for parity file/drive block size overhead.) (It would also minimize running up against moveonenospc.)
I am not asking mergerfs to do anything related to caching/symlinking I mentioned, that's a job for an external script.
You earlier were asking why mergerfs isn't keeping track of what directories and files exist where.
consolidate directories
OK... but how? People do it now by setting mkdir policy.
So if I have 10x 1TB disks in mergerfs pool and 10 dirs containing 1TB worth of files they should be on 1 drive each and not spread across duplicated directories across many drives.
You're being too vague / simplistic. What is the algorithm for selecting the branch to pick?
The root issue is that there is no middle ground between EP and non-EP. You assume everyone will be fine with mfs/lfs/etc, but that's just not the case for large archives / sleeping drives / cases where folks need some better control over what goes where, or at least some consolidation / grouping of files on drives.
No, I'm not. I'm trying to establish exactly the expectations and requirements and most people don't need them but will go on goose chases proposing ideas on how to get what they think they need.
The beginning of this conversation was not a full description of what the person was trying to accomplish. Without that I have to run through my routine of asking questions to understand the situation and establish its legitimacy. You might know what you want but most people who post questions really don't. Hence the many FAQ entries about what options to use.
I gave you exactly that in my post,
And you aren't him. This is a new ticket and I have to establish what's going on.
I didn't shoot you down. I'm trying to understand the situation.
minfreespace is intended to be a filter. What you're suggesting is not the same thing. Policies don't act due to minfreespace.
Yes, but they could, and if they did (if configured to walk back), it would leave some space on the drive while creating the path on another drive, which would then allow the EP profile to continue normally, filling the other drive and leaving some free space on the full one.
They could what? policies don't "act". They are algos that select a branch. I honestly don't understand what you're getting at. A walk back is not changing minfreespace.
I am not asking mergerfs to do anything related to caching/symlinking I mentioned, that's a job for an external script.
You earlier were asking why mergerfs isn't keeping track of what directories and files exist where.
I really don't think I ever asked that. The only time I brought up caching scripts was in response to your reply stating why my suggestion would not be useful. Caching in that case was my counterpoint to show a way it could be made to work. Best to refer to my original post and ignore caching, as it was never part of my request.
consolidate directories
OK... but how? People do it now by setting mkdir policy.
Ok, this is actually something I still don't get. Looking at the git page, it seems the eplfs create policy should do what I want, but it doesn't. eplfs for me is ignoring minfreespace and is filling the disk 100%. Why is it ignoring it, first of all? Let's assume I start with an empty mergerfs pool and run "mkdir dir1". I am assuming dir1 gets created on the first disk in the pool. What happens when dir1 gets filled with files and exceeds the space on the first disk? It can't give an out-of-space error, can it? I mean, that would mean the default policy would also do this, which goes against what mergerfs should do, and that is turn many storage devices into 1 seamlessly. Hopefully you can explain, as it doesn't make sense how it works or should work.
So if I have 10x 1TB disks in mergerfs pool and 10 dirs containing 1TB worth of files they should be on 1 drive each and not spread across duplicated directories across many drives.
You're being too vague / simplistic. What is the algorithm for selecting the branch to pick?
Well, my other replies made more complex examples, but that caused confusion, so I tried simplifying.
Policies are functions. They take in a number of arguments (branches, branch tags, minfreespace, etc.) and return a list of branches to act on. What I need to know is the metrics and algo that generate that list. If you leave it up to me and I don't fully understand your expectation, we'll just be here again arguing about it.
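As a rough model of "policies are functions" (invented field names; a sketch, not mergerfs's actual code), the documented eplfs behavior reads as filter-then-order: drop ineligible branches, then sort the survivors by least free space.

```python
def eplfs(branches, rel_path, minfreespace):
    """Toy policy function: drop read-only branches, branches under
    minfreespace, and branches lacking the relative path, then order
    the survivors by least free space. The caller acts on the first
    branch of the returned list (an empty list would mean ENOSPC)."""
    eligible = [
        b for b in branches
        if not b["readonly"]
        and b["free"] >= minfreespace
        and rel_path in b["paths"]
    ]
    return sorted(eligible, key=lambda b: b["free"])

branches = [
    {"name": "disk0", "readonly": False, "free": 100, "paths": {"a"}},
    {"name": "disk1", "readonly": False, "free": 40,  "paths": {"a"}},  # under minfreespace
    {"name": "disk2", "readonly": True,  "free": 500, "paths": {"a"}},  # read-only
    {"name": "disk3", "readonly": False, "free": 900, "paths": set()},  # path missing
]
print([b["name"] for b in eplfs(branches, "a", 50)])  # ['disk0']
```

Specifying a new policy means specifying exactly this: which branches get filtered out and how the remainder is ordered.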
I really don't think I ever asked that. The only time I brought up caching scripts was in response to your reply stating why my suggestion would not be useful.
Sorry it was @malventano. I'm multitasking with work and just missed that someone else entered the conversation at that point.
Ok, this is actually something I still don't get. Looking at the git page, it seems the eplfs create policy should do what I want, but it doesn't. eplfs for me is ignoring minfreespace and is filling the disk 100%. Why is it ignoring it, first of all?
https://github.com/trapexit/mergerfs/blob/master/src/policy_eplfs.cpp#L65
It's not.
The description of eplfs is "Of all the branches on which the relative path exists choose the drive with the least free space."
Are you positive that the relative path exists on other branches?
Let's assume I start with an empty mergerfs pool and run "mkdir dir1". I am assuming dir1 gets created on the first disk in the pool.
No. It depends on your mkdir policy.
I mean that would mean the default policy would also do this, which goes against what mergerfs should do, and that is turn many storage devices into 1 seamlessly.
Seamlessly is a matter of opinion and not something I can comment on.
I'm not really sure what you're asking. There is no "should". It's a very configurable product. What "should" happens depends on the configuration.
Ok, this is actually something I still don't get. Looking at the git page, it seems the eplfs create policy should do what I want, but it doesn't. eplfs for me is ignoring minfreespace and is filling the disk 100%. Why is it ignoring it, first of all?
https://github.com/trapexit/mergerfs/blob/master/src/policy_eplfs.cpp#L65
It's not.
So should eplfs respect minfreespace or not? For me it is ignoring it, which doesn't make sense to me. Why have it at all if at least some policies are ignoring it?
The description of eplfs is "Of all the branches on which the relative path exists choose the drive with the least free space."
Are you positive that the relative path exists on other branches?
No, the relative paths do not exist, but if that is expected behavior it's really flawed. If you start from an empty pool, create 1 dir and copy files into it, it will eventually give an out-of-space error even though the pool is not full, only 1 disk is full. This means you will have to manually create the dir on each disk.
Let's assume I start with an empty mergerfs pool and run "mkdir dir1". I am assuming dir1 gets created on the first disk in the pool.
No. It depends on your mkdir policy.
What is the expected behavior under eplfs policy?
I mean that would mean the default policy would also do this, which goes against what mergerfs should do, and that is turn many storage devices into 1 seamlessly.
Seamlessly is a matter of opinion and not something I can comment on.
I'm not really sure what you're asking. There is no "should". It's a very configurable product. What "should" happens depends on the configuration.
By "should" I mean it is described as such, for example "Handling of writes to full drives (transparently move file to drive with capacity)". It's also compared to mhddfs, which would not give out-of-space errors if 1 drive filled up.
So should eplfs respect minfreespace or not? For me it is ignoring it, which doesn't make sense to me. Why have it at all if at least some policies are ignoring it?
Yes.
https://github.com/trapexit/mergerfs#filters
"All create policies will filter out branches which are mounted read-only, tagged RO (read-only) or NC (no create), or has available space less than minfreespace."
It's not being ignored. I linked the code that does the check. You are expecting it to work in a way it doesn't work and isn't described as working.
No, the relative paths do not exist, but if that is expected behavior it's really flawed. If you start from an empty pool, create 1 dir and copy files into it, it will eventually give an out-of-space error even though the pool is not full, only 1 disk is full.
It's exactly the behavior described in the docs. It's not flawed. It is for a specific purpose. It might be "flawed" for your wants. Path preservation is for preserving paths. Strictly. Some people need that. So it exists. If you don't want that then don't use it. If you want something different then describe what you want and I can look into it.
This means you will have to manually create dir on each disk.
Yes! That's the point. To allow people full control. To strictly control where files live. You may not need that but some do.
What is the expected behavior under eplfs policy?
Exactly as described in the docs: "Of all the branches on which the relative path exists choose the drive with the least free space."
https://github.com/trapexit/mergerfs#policy-descriptions
By "should" I mean it is described as such, for example: "Handling of writes to full drives (transparently move file to drive with capacity)". It's also compared to mhddfs, which would not give out-of-space errors if one drive filled up.
mergerfs works exactly as described in the docs. Not how you imagine it should work. mhddfs is extremely simple and has almost no flexibility. You can make mergerfs work exactly as mhddfs if you configure it to do so as described in the docs.
mergerfs works exactly as described in the docs. Not how you imagine it should work.
Since we have two separate people within a week who both imagined that it should work differently than it actually does, can we turn this and my thread into a feature request? I'm fairly certain my proposed solution would fix his problem, along with everyone else who desires middle ground between EP and non-EP.
So should eplfs respect minfreespace or not? For me it is ignoring it, which doesn't make sense to me. Why have it at all if at least some policies ignore it?
Yes.
Just to confirm you mean yes, as in eplfs SHOULD respect minfreespace?
It's not being ignored. I linked the code that does the check. You are expecting it to work in a way it doesn't and isn't described as working.
So why is it being ignored for me? I have minfreespace=50GB and the eplfs policy but it is filling the drive to 100%. It is using existing branches but is filling them to 100% and ignoring minfreespace. Why is this happening? I don't know what I could misunderstand here; it's described one way but I am seeing the opposite behavior. My setup is described in the OP exactly as it is now. Please look into this as I have no clue why it's doing that.
Yes! That's the point. To allow people full control. To strictly control where files live. You may not need that but some do.
Ok, but how is getting an out-of-space error for no apparent reason (the pool is not full) giving anyone full control? It's completely unintuitive, and unless you know the ins and outs of mergerfs you will conclude it's broken. It's certainly not described as working like this in the docs, quite the opposite, and I read the docs several times; even after reading this reply I still do not see it described as you say. No other pooling software that I am aware of works like this. There is also no way to change this behavior, so how can you say it's giving control to anyone? As a feature, fine, but as default behavior it will be seen as a bug by any reasonable person who doesn't know the internals of mergerfs. Just think about this for a moment from a new user's perspective: they set it up with default options and get an out-of-space error when the pool is not full; they will conclude it's a bug. It's pooling software that "pools" only the first disk by default.
Since we have two separate people within a week who both imagined that it should work differently than it actually does, can we turn this and my thread into a feature request? I'm fairly certain my proposed solution would fix his problem, along with everyone else who desires middle ground between EP and non-EP.
It already is. I already said I'd look into it.
Just to confirm you mean yes, as in eplfs SHOULD respect minfreespace?
As I've said a number of times: yes.
So why is it being ignored for me? I have minfreespace=50GB and the eplfs policy but it is filling the drive to 100%.
None of your drives are fully filled. In fact several are around the 50GB mark, which is exactly what you'd expect over time. lfs means "least free space". It's going to fill drives, and when minfreespace is hit that drive is filtered out. I just tried it on a fresh setup and it worked exactly as described and exactly as it has for years.
Ok, but how is getting an out-of-space error for no apparent reason (the pool is not full) giving anyone full control?
As I've said multiple times. Some people need the ability to control EXACTLY where data lives. That is what path preservation as it is today gives them.
It's completely unintuitive, and unless you know the ins and outs of mergerfs you will conclude it's broken.
It works exactly as described. If you don't read the docs then I'm not sure what to tell you.
It's certainly not described as working like this in the docs,
Yes, it does.
https://github.com/trapexit/mergerfs#functions--policies--categories
"A path preserving policy will only consider drives where the relative path being accessed already exists." "All create policies will filter out branches which are mounted read-only, tagged RO (read-only) or NC (no create), or has available space less than minfreespace." "If all branches are filtered an error will be returned. Typically EROFS (read-only filesystem) or ENOSPC (no space left on device) depending on the reasons." "Of all the branches on which the relative path exists choose the drive with the least free space."
It works exactly as described in those quoted sections from the docs.
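Those quoted filter rules can be modeled in a few lines. This is a sketch only; `Branch` is a made-up record for illustration, not a mergerfs type:

```python
from collections import namedtuple

# Hypothetical branch record: mount path, tag ("RW"/"RO"/"NC"), free bytes
Branch = namedtuple("Branch", ["path", "tag", "free"])

def filter_create_branches(branches, minfreespace):
    # Drop branches mounted read-only, tagged RO or NC, or with less
    # than minfreespace available; the create policy (epmfs, eplfs,
    # ...) then chooses among what remains.
    return [
        b for b in branches
        if b.tag not in ("RO", "NC") and b.free >= minfreespace
    ]
```

If this returns an empty list, the EROFS/ENOSPC case from the docs applies.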
No other pooling software that I am aware of works like this.
And? mergerfs does several things that other pooling software doesn't. It's used for a more eclectic set of things.
There is also no way to change this behavior, so how can you say it's giving control to anyone?
What are you talking about? There are tons of configurable options: https://github.com/trapexit/mergerfs#options
There are 21 different functions, each with 14 different policies. Some overlap depending on the category, but you're talking about hundreds of permutations of functionality.
As a feature, fine, but as default behavior it will be seen as a bug by any reasonable person who doesn't know the internals of mergerfs.
The docs describe this several times in detail. You act like this decision was made randomly, or that I, who have been writing and supporting this for several years now, don't understand the use cases of the people who asked for the features that are there. Path preservation was asked to be the default by people who had already filled systems and wanted to control the layout. Not by people who had empty systems and didn't care. You literally can't do both at the same time. Defaults are defaults, and in fact I plan on removing many defaults because people don't read the docs and then assume mergerfs should just magically know their needs. People want conflicting features. Your favorite default isn't that of others. I should never have had defaults for policies. I can't change that now without breaking existing users. If you don't like that reality... tough. It is what it is.
It's pooling software that "pools" only the first disk by default.
If you read the docs it talks about literally everything you've talked about. https://github.com/trapexit/mergerfs#why-are-all-my-files-ending-up-on-1-drive
If you read the docs you'd know why I can't just go and change the settings and plan to in a major new release for this very reason.
The source code is completely open. If you don't like the way it works you can change it. If you want to contribute that change you're welcome to. Or not. If you don't have the skills and would like me to do it you can articulate your use case and I can look into it when I get around to it. It's as simple as that.
I'm not going to continue to argue with you about how the software works (exactly as described) or why those features exist (because people asked for them).
I feel like there have been too many misunderstandings in this thread. To clarify, I am not trying to attack you in any way or even argue. I am having an issue that I am trying to get to the bottom of. Hopefully I can clarify below.
Just to confirm you mean yes, as in eplfs SHOULD respect minfreespace?
As I've said a number of times: yes.
Just confirming, as you responded with a yes to a multi-question paragraph.
So why is it being ignored for me? I have minfreespace=50GB and the eplfs policy but it is filling the drive to 100%.
None of your drives are fully filled. In fact several are around the 50GB mark, which is exactly what you'd expect over time. lfs means "least free space". It's going to fill drives, and when minfreespace is hit that drive is filtered out. I just tried it on a fresh setup and it worked exactly as described and exactly as it has for years.
I didn't say the example shows the drives are filled. If I copy files now, they fill up to 100%. They are at 50GB in the example because I moved files manually. Please see the updated df -h and fstab in the OP.
Ok, but how is getting an out-of-space error for no apparent reason (the pool is not full) giving anyone full control?
As I've said multiple times. Some people need the ability to control EXACTLY where data lives. That is what path preservation as it is today gives them.
And it's a great feature, but there is nothing in the middle, which is what I feel most people would choose.
Its completely un-intuitive and unless you know the ins and outs of mergerfs you will conclude its broken.
It works exactly as described. If you don't read the docs then I'm not sure what to tell you.
Its certainly not described as working like this in the docs,
Yes, it does.
I feel like you are ignoring what is being said and picking out-of-context bits. I explained that mergerfs is filling the drives to 100% for me; that is the opposite of the docs. I am not saying the docs are wrong, I am saying that it does not work for me how it is described. I don't know why, which is why I am asking. I am not arguing, just trying to understand why it is not working for me like it should.
There is also no way to change this behavior, so how can you say it's giving control to anyone?
What are you talking about? There are tons of configurable options: https://github.com/trapexit/mergerfs#options
You took this out of context entirely. I said essentially that there is no middle ground; it's either strict path preserving or no path preserving. I did not say that there is a lack of config options in general.
As a feature, fine, but as default behavior it will be seen as a bug by any reasonable person who doesn't know the internals of mergerfs.
The docs describe this several times in detail. You act like this decision was made randomly, or that I, who have been writing and supporting this for several years now, don't understand the use cases of the people who asked for the features that are there.
I definitely did not mean to imply that I think features/options are implemented randomly in any way. I don't. I see uses for most options and am glad they are there.
Path preservation was asked to be the default by people who had already filled systems and wanted to control the layout. Not by people who had empty systems and didn't care. You literally can't do both at the same time.
Defaults are defaults, and in fact I plan on removing many defaults because people don't read the docs and then assume mergerfs should just magically know their needs. People want conflicting features. Your favorite default isn't that of others. I should never have had defaults for policies. I can't change that now without breaking existing users. If you don't like that reality... tough. It is what it is.
It's not really the defaults that are at issue IMHO; it's that there is no middle-ground policy to be the default. You already mentioned you will be looking into something like the middle-ground option, so no need to discuss it any more now.
I feel like you are ignoring what is being said and picking out-of-context bits. I explained that mergerfs is filling the drives to 100% for me; that is the opposite of the docs. I am not saying the docs are wrong, I am saying that it does not work for me how it is described. I don't know why, which is why I am asking. I am not arguing, just trying to understand why it is not working for me like it should.
I had asked if the relative path was on each drive and you said "No, rel paths do not exist".
I can only work off the data provided. As the template for submitting these tickets says, I need everything about the situation. From what I've pieced together from the OP and later, I don't see any problems.
Please see updated df -h and fstab in the OP.
It looks the same to me. They all have space available. If this isn't correct, can you repost all the info and exactly the steps to reproduce? I just tested the policy and it works exactly as I'd expect. I'll post the example if you need it.
/dev/sdi1 7.3T 7.2T 61G 100% /mnt/disk3
/dev/sdh1 11T 11T 23G 100% /mnt/disk1
/dev/sdf1 7.3T 7.2T 54G 100% /mnt/disk9
/dev/sdk1 11T 11T 51G 100% /mnt/disk2
/dev/sdc1 15T 14T 639G 96% /mnt/disk4
/dev/sdj1 13T 13T 277G 98% /mnt/disk0
/dev/sdd1 15T 7.7T 6.8T 54% /mnt/disk6
/dev/sdg1 7.3T 7.2T 48G 100% /mnt/disk8
/dev/sdb1 13T 13T 108G 100% /mnt/disk5
/dev/sde1 7.3T 7.2T 49G 100% /mnt/disk7
You took this out of context entirely. I said essentially that there is no middle ground; it's either strict path preserving or no path preserving. I did not say that there is a lack of config options in general.
You said: "There is also no way to change this behavior, so how can you say it's giving control to anyone?"
Which is false. Policies are how you change the behavior. The fact that a policy doesn't work the way you want doesn't mean there isn't flexibility or that there isn't a way to change it. There is. There are many forms to choose from and many permutations. Just not one you like, apparently.
It's not really the defaults that are at issue IMHO; it's that there is no middle-ground policy to be the default. You already mentioned you will be looking into something like the middle-ground option, so no need to discuss it any more now.
What you call "middle ground" is something different altogether. It does no one any good to criticize what's there and reuse already-adopted language. You don't think path preservation should be as "strict" as it is... fine. But I'm not going to just change it because you think it should work some other way, and reuse the name. I asked a number of times for you to describe the policy you'd like to see. I'm not interested in you coming back later and complaining that it doesn't do what you want.
# mkdir disk0 disk1 pool
# truncate -s 1G disk0.img
# truncate -s 1G disk1.img
# mkfs.ext4 -m0 disk0.img
# mkfs.ext4 -m0 disk1.img
# mount -o loop disk0.img disk0
# mount -o loop disk1.img disk1
# mergerfs -o use_ino,allow_other,category.create=eplfs,minfreespace ./disk0:./disk1 ./pool
# dd if=/dev/zero bs=1M count=256 of=pool/file0
# ls disk1
file0
# dd if=/dev/zero bs=1M count=256 of=pool/file1
# ls disk1
file0 file1
# df -h pool
Filesystem Size Used Avail Use% Mounted on
0:1 2.0G 517M 1.4G 27% /mnt/1.5TB-00/pool
# df -h disk0 disk1
Filesystem Size Used Avail Use% Mounted on
/dev/loop0 976M 2.5M 958M 1% /mnt/1.5TB-00/disk0
/dev/loop3 976M 515M 446M 54% /mnt/1.5TB-00/disk1
# dd if=/dev/zero bs=1M count=256 of=pool/file2
# ls disk0
file2
# dd if=/dev/zero bs=1M count=256 of=pool/file3
# ls disk0
file2 file3
# dd if=/dev/zero bs=1M count=256 of=pool/file4
dd: failed to open 'pool/file4': No space left on device
# df -h pool disk0 disk1
Filesystem Size Used Avail Use% Mounted on
0:1 2.0G 1.1G 891M 54% /mnt/1.5TB-00/pool
/dev/loop0 976M 515M 446M 54% /mnt/1.5TB-00/disk0
/dev/loop3 976M 515M 446M 54% /mnt/1.5TB-00/disk1
Since mkdir=eplfs as well, any directories would fall onto the drive with the existing base path that has the least free space.
What you call "middle ground" is something different altogether. It does no one any good to criticize what's there and reuse already-adopted language. You don't think path preservation should be as "strict" as it is... fine. But I'm not going to just change it because you think it should work some other way, and reuse the name. I asked a number of times for you to describe the policy you'd like to see. I'm not interested in you coming back later and complaining that it doesn't do what you want.
I don't think he's asking you to change the behavior of any particular policy. He's just looking for a middle-ground option (as was I). Again, I offered a solution, and I believe the middle ground I suggested in the other thread would make mergerfs work the way he is looking for. If you'd like me to document the suggestion in more detail, I'd be more than happy to, but generally speaking it would be an option that, when set to a value of the preferred tree depth to walk back to, triggers a search for alternate drives whenever going down the moveonenospc and minfreespace code paths. No need to set a specific policy for this, as the already-set policy can just be followed for each step of the walk back. If it finds an existing path before walking back to the predefined depth, great: just replicate the full path on that other drive and continue. If there are no other drives with enough of the path present, do whatever would have happened during normal execution of that code (e.g. report out of space). If this option is not set, then all ep policies, moveonenospc and minfreespace behave exactly as they did before. Zero impact on existing configs.
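To make the walk-back proposal concrete, a minimal sketch might look like the following. This is only an interpretation of the idea, not mergerfs code; `relpath` is the directory the new file would go in, and `path_exists`/`writable` are hypothetical helpers:

```python
def walkback_pick(branches, relpath, max_depth, path_exists, writable):
    # Trim the relative path one level at a time, up to max_depth
    # levels, looking for a writable branch that already holds that
    # ancestor; the missing portion of the path would then be
    # replicated onto the chosen branch.
    parts = relpath.split("/")
    for depth in range(max_depth + 1):
        ancestor = "/".join(parts[:len(parts) - depth]) or "/"
        for b in branches:
            if path_exists(b, ancestor) and writable(b):
                return b, ancestor
    return None, None  # fall through to normal ENOSPC handling
```

When the option is unset (max_depth of 0 and no alternate branch found), behavior degrades to exactly the strict path-preserving result described in the thread.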
The source code is completely open. If you don't like the way it works you can change it. If you want to contribute that change you're welcome to. Or not. If you don't have the skills and would like me to do it you can articulate your use case and I can look into it when I get around to it. It's as simple as that.
Honestly, you've been pretty resistant to the idea that the current options might not successfully cover all use cases. Neither he nor I is likely to be able to contribute code at a level matching the rest of your codebase, and even if I brought myself up to that level, my assumption (based on your resistance thus far) is that you wouldn't be receptive to additional options, meaning I'd have to maintain my own fork.
If you would prefer not to waste so much time arguing with folks in the future, then when you see folks come in with free-space issues while using path-preserving policies, and those same people want to keep using path-preserving policies (for whatever reason) without running into those free-space errors, I recommend just telling them that you may or may not be working on some middle ground later on. You aren't going to be able to convince everyone, even those who just want to keep going with what the default started them with, that they need to switch over to mkdir=rand or mkdir=epall and completely change how their array stores data after they already got used to the default epmfs over a period of months or years. It also doesn't do much good, when multiple people assumed it worked a different way than documented, to keep insisting the documentation is perfect while the reality is that the concepts in the documentation are easily assumed to work differently than they are written. When multiple people assume it worked that other way, that makes the concepts/documentation the outlier, and it's easier to address that than to waste your time here trying to convince every individual that their preferences are all wrong.
As it is currently, all it takes is a user setting up an array using defaults, making one directory, and then starting to fill in subdirectories beneath it. Unless that user makes other directories off of the root, allowing epmfs to place them on other drives, that user will fill only one drive to capacity and get free-space errors, which they will then complain about here. The same applies to all ep policies, meaning that half of the possible mergerfs configurations will eventually put people in this situation. The only thing stopping you from getting a higher rate of complaints here is that it takes most people a while to fill a drive. Folks bringing in their own sets of disks with preexisting data will run into it sooner, just as Dominic and I have.
Yes, there are legitimate uses for the ep policies as they have been defined, but they probably shouldn't have been the default given what happens in the above example of general usage. Understood that changing that default will potentially break existing installations, so given that it is what it is, you have the power to give folks a less drastic way out of this free-space issue that all general users of the default policy will eventually run into.
I'm not responding to all of that.
I've offered to implement features numerous times. If you don't tell me what you want I can't do it. No one has articulated the metrics and algo you want. Until you do so I'm finished.
If you don't tell me what you want I can't do it.
Umm, just scroll up? It was described multiple times, including in the reply you are 'not responding to'. I've offered the solution to the problem which will continue to cause you heartache. There are dozens of similar threads spread across your other open and closed issues. If you need more detail, I'm happy to offer it, but if you just don't want to hear it, then I can't assist any further.
Where did you describe a specific algo?
Even if you did... you aren't @DominicMe. Let him speak for his needs. Don't hijack this thread.
@trapexit I'll be honest this whole experience put me off, I am giving up on this at least for now. I tried to explain things many times but every reply I get seems like it is a reply to a different thing entirely. I don't think it's productive to reply without taking the time to comprehend what was said. Maybe it was me who wasn't clear but I don't think I can express myself much clearer.
And you think I've enjoyed spending several hours going back and forth not getting straight answers to my questions?
You bring up multiple things. Stick to 1 at a time.
You said eplfs was broken but didn't provide information such that I could help. I've shown you how to test. I don't know what else to do. It works as described. The example shows that. Your df output shows no drives filled completely.
As for policies... you never told me what you wanted. "middle ground" isn't an algorithm. Just tell me what the general process for selecting a branch would be. That's all I want. I'll consider any algo you give me... I just need an algo.
You said eplfs was broken but didn't provide information such that I could help. I've shown you how to test. I don't know what else to do. It works as described. The example shows that. Your df output shows no drives filled completely.
disk1 is filled beyond the 50GB limit, but that's moot now as I made changes while testing and cannot replicate that issue specifically. Now it's that every path-preserving policy results in an out-of-space error for the specific dir I am testing. From what you told me I think that's expected behavior, and also unusable.
As for policies... you never told me what you wanted. "middle ground" isn't an algorithm. Just tell me what the general process for selecting a branch would be. That's all I want.
I did suggest a fallback option early on, but you implied/said it is technically complicated. This is why I refrained from making specific suggestions. You know how mergerfs works, so you are in a much better position to come up with a solution to the problem. My initial suggestion is very simple:
Have, for example, eplfs as the primary/default policy as it is now, but with an optional secondary/fallback policy if the first is for whatever reason unwritable.
Another way could be an extra variation of each policy, either strict or best effort, like below:
eplfs (strict, existing path, least free space) - Of all the branches on which the relative path exists choose the drive with the least free space.
beplfs (best effort, existing path, least free space) - Same as eplfs but proceeds to the next drive/branch if the current one produces an error, until there are no more drives with the existing path.
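The "best effort" variants suggested above amount to a policy chain. A sketch of the idea (not mergerfs code; each policy is a hypothetical callable returning a branch, or None when it has filtered out every branch):

```python
def pick_with_fallback(policies, branches, relpath):
    # Try each policy in order; only when a policy filters out every
    # branch (returns None) move on to the next one, instead of
    # returning ENOSPC immediately as the strict policies do today.
    for policy in policies:
        branch = policy(branches, relpath)
        if branch is not None:
            return branch
    return None  # every policy exhausted -> ENOSPC as today
```

Under this reading, beplfs would be roughly eplfs followed by some non-strict fallback as the second entry in the chain.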
disk1 is filled beyond the 50GB limit
The limit is about when the file is created. If you have 50GB + 1 byte free and then write a 2-byte file, you'll write the file, and on the next create that drive will be ignored. That's where moveonenospc can come in. If you had 50GB + 1 byte free and you end up writing 50GB + 2 bytes... on that last write it will error. mergerfs would then try to find somewhere else for the file to live. mergerfs doesn't know how much data is going to be written, nor does it make sense to try to pay attention to how much space is free on a drive, as the call to find out is relatively expensive.
I did suggest a fallback option early on, but you implied/said it is technically complicated. This is why I refrained from making specific suggestions. You know how mergerfs works, so you are in a much better position to come up with a solution to the problem. My initial suggestion is very simple:
Yes. It's complicated because there is no such thing as external options to policies. I'd have to build that out. Maybe it wasn't clear, but I was trying to suggest that I would have to build out such an idea... allowing policies to have external options, even if only one. "or multiply the number of policies by the number of settings. epmfs-lfs epmfs-lus... etc."
I asked what your preference for a policy was because I can easily add a bespoke policy that has "a fallback". I cannot easily add a general options-for-policies feature. I understand exactly what you mean, but there are many ways to accomplish the same thing. Telling me general solutions doesn't help me decide. The topic of fallback options has been considered. The question is what the tradeoffs of the implementations are and the scale at which they'd be used. I have a lot of stuff I'm working on and have to balance all of it. The "user story" is much better for me than proposals of solutions. One is descriptive and one is prescriptive. Prescriptive conversations should come after descriptive ones.
I can make general failover of policies a feature. The complication is understanding what people are looking for. Will that solve their problems or is it just a workaround for something else? The directory walk back @malventano is talking about is different from this fallback idea, and in some ways they overlap/conflict.
When you start suggesting all the permutations that could be... it gets complicated even just understanding how someone would articulate the settings. It gets to the point where it might make more sense to completely remove hard-coded policies. I certainly don't want to build what amounts to a whole DSL for policies.
I asked what your preference for a policy was because I can easily add a bespoke policy that has "a fallback".
Does that mean it would be a single policy like epmfs modified with a fallback? If so, I think that's too limited and clunky, as all/most policies would be desirable to have with a fallback.
The "user story" is much better for me than proposals of solutions.
It's really quite simple: I want path preservation, but without having to manually create dirs and deal with out-of-space errors. I do not need strict path preservation; I am ok with mergerfs creating a new dir if all existing paths are full or read-only.
DSL?
From a user perspective I think the best solution would be a make-your-own-policy type system. You would have each option separately and build them into a policy. For example, I want existing path, least used space, and a fallback, so I just use epluslfs (eplus+lfs). Ideally you could even add more than 2 policies, so it would continue to the next one if the previous resulted in an error.
Does that mean it would be a single policy like epmfs modified with a fallback? If so, I think that's too limited and clunky, as all/most policies would be desirable to have with a fallback.
Yes. But like I said the alternative is a lot more work. Hence trying to understand what the actual need is and not just taking implementation suggestions.
It's really quite simple: I want path preservation, but without having to manually create dirs and deal with out-of-space errors. I do not need strict path preservation; I am ok with mergerfs creating a new dir if all existing paths are full or read-only.
Path preservation means paths are preserved. If they aren't preserved, it's not path preservation. What you want is something different. What you want will happen now-ish with ff or lfs. There is more to your ask than just what you described. Maybe you don't see it that way, but there is. There is a big difference in end behavior between having a fallback non-pp policy attached to a pp policy and the walk-back strategy that @malventano suggests.
DSL?
Domain Specific Language
From a user perspective I think the best solution would be a make-your-own-policy type system.
That's what I was talking about: a Domain Specific Language. That's non-trivial. And am I really going to expect users to become programmers? Do people really want to make everything slower so they can write their own in Lua or something? Having a fallback isn't "building your own". It's an option to an existing policy, or an option to mergerfs where on error it tries another. Either way it's drastically different from "make your own".
It doesn't sound to me like you're really trying to control the layout so much as limit spread/accesses. Having an ep policy fall back to a non-ep one won't preserve paths. It'll just create paths on whatever the non-ep drive is at the point in time when the drive fills, rather than every time. To what degree that impacts the system depends on the path created. If you create /foo then that's one thing... but if it's /foo/bar/baz/a/b/c/d/e/f/g then anything in that path will now be placed on that drive instead of only something in /foo. @malventano asked for walkback, which still requires some amount of human management. That is required if you want to control where stuff lives based on information that is totally arbitrary to the system. Like... my TV shows are always 4 levels deep, or movies 2, or music 3, so I want to keep music limited to these 2 drives and TV to those 4, etc. That setup still means someday you'll get an error. It just didn't require making as many directories. Some people do this already by using different mkdir policies, but that's not the same as what a walk back does.
General description
I have a total of 10 disks, with most being at or close to the minfreespace limit of 50GB. I am using a path-preserving policy as outlined below because I want to minimize disk access, which causes wake-ups and delays in data retrieval due to spin-up.
Expected behavior
I was under the impression that when the drive with existing path is full even with path preserving policy new path would be created on a free disk. Is this not how it should behave?
Actual behavior
If I copy a file to directory x, "disk2" is where it is copied, despite disk2 being at or below the minfreespace limit of 50GB and despite disk0, disk1, disk3 and others having directory x as well, with disk3 having 300GB+ free space. I am able to copy directly to disk3, so it should not be a permission issue. I also ran the mergerfs.dedup and mergerfs.fsck scripts with no change.
Precise steps to reproduce the behavior
Copy a file to an existing directory x with the config below. Note I also tried the epff and epall policies and some files still get copied to disks that are over the 50GB limit until they are completely full. What am I doing wrong?
System information
Please provide as much of the following information as possible:
mergerfs -V
mergerfs version: 2.29.0
FUSE library version: 2.9.7-mergerfs_2.29.0
fusermount version: 2.9.7-mergerfs_2.29.0
using FUSE kernel interface version 7.31
cat /etc/fstab
or the command line arguments
LABEL=disk0 /mnt/disk0 ext4 defaults,nofail 0 2
LABEL=disk1 /mnt/disk1 ext4 defaults,nofail 0 2
LABEL=disk2 /mnt/disk2 ext4 defaults,nofail 0 2
LABEL=disk3 /mnt/disk3 ext4 defaults,nofail 0 2
LABEL=disk4 /mnt/disk4 ext4 defaults,nofail 0 2
LABEL=disk5 /mnt/disk5 ext4 defaults,nofail 0 2
LABEL=disk6 /mnt/disk6 ext4 defaults,nofail 0 2
LABEL=disk7 /mnt/disk7 ext4 defaults,nofail 0 2
LABEL=disk8 /mnt/disk8 ext4 defaults,nofail 0 2
LABEL=disk9 /mnt/disk9 ext4 defaults,nofail 0 2
/mnt/disk* /mnt/pool fuse.mergerfs defaults,allow_other,direct_io,use_ino,fsname=mergerfs,minfreespace=50G,category.create=eplfs,moveonenospc=true 0 0
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        3.4G     0  3.4G   0% /dev
tmpfs           3.4G   20K  3.4G   1% /dev/shm
tmpfs           3.4G  1.5M  3.4G   1% /run
/dev/nvme0n1p3  901G   84G  771G  10% /
tmpfs           3.4G   44K  3.4G   1% /tmp
/dev/nvme0n1p2  488M  220M  233M  49% /boot
mergerfs        105T   97T  8.1T  93% /mnt/pool
backing         105T   97T  8.1T  93% /mnt/backing
/dev/nvme0n1p1  256M  8.4M  248M   4% /boot/efi
/dev/sda1       962G  702G  261G  73% /mnt/downloads
/dev/sdi1       7.3T  7.2T   61G 100% /mnt/disk3
/dev/sdh1        11T   11T   23G 100% /mnt/disk1
/dev/sdf1       7.3T  7.2T   54G 100% /mnt/disk9
/dev/sdk1        11T   11T   51G 100% /mnt/disk2
/dev/sdc1        15T   14T  639G  96% /mnt/disk4
/dev/sdj1        13T   13T  277G  98% /mnt/disk0
/dev/sdd1        15T  7.7T  6.8T  54% /mnt/disk6
tmpfs           694M     0  694M   0% /run/user/1000
tmpfs           694M     0  694M   0% /run/user/420
/dev/sdg1       7.3T  7.2T   48G 100% /mnt/disk8
/dev/sdb1        13T   13T  108G 100% /mnt/disk5
/dev/sde1       7.3T  7.2T   49G 100% /mnt/disk7