Hmm, it's been a while since I touched any of the code here, but IIRC this is similar to the issues with btrfs, which were solvable by just eliminating duplicate mountpoint entries for the same device path/filesystem. However, it looks as though all of the entries are going to be needed here - the total usage (based on the Avail column) is all of the volumes added together.
I'll have a proper look at this later tonight. By the way, you can probably set a config file specifying just the `/` mountpoint as a workaround for the time being.
Quick edit to add (before I forget!) - perhaps we could take advantage of this and use the available space to calculate the usage? I'm not sure whether this might be problematic on other filesystems; I'll have to check.
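Roughly what I have in mind, as a quick sketch (the device names and figures below are made up, and it assumes every volume in a container reports the same Avail value):

```python
# Hypothetical df-style entries for one APFS container: (device, used KiB, avail KiB).
# The device names and numbers are made up for illustration only.
entries = [
    ("/dev/disk3s1", 15_000_000, 800_000_000),
    ("/dev/disk3s5", 10_000_000, 800_000_000),
    ("/dev/disk3s6",  5_000_000, 800_000_000),
]

# Every volume of the same container reports the container's full available
# space, so count it once and only sum the per-volume used space.
used = sum(used_kib for _device, used_kib, _avail in entries)
total = used + entries[0][2]
print(f"{used} KiB used out of {total} KiB")
```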
So, it appears APFS has containers to which macOS gives device paths that look like disks - it seems `disk1` and `disk3` are two APFS containers, so inside them all volumes report the container's full free space but have separate used space.
Unfortunately I'm not sure the idea of using the available space to calculate used space will work, since we still need some way to distinguish between APFS and, say, an HFS partition on any given disk.
The only way I can really think of to do this would be to try and execute `diskutil list` (or `diskutil apfs list`?) and parse its output for the filesystem type, discarding APFS volumes as "duplicates" of their containers while keeping their used space. @HorlogeSkynet any thoughts on that? Since it may be a little clumsy, it's kind of like an `ifdef macOS` escape hatch :laughing:
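To illustrate what that could look like, here's a minimal sketch using `diskutil list -plist` and the standard-library `plistlib` parser (the key names `AllDisksAndPartitions`, `APFSVolumes` and `DeviceIdentifier` are my assumption about `diskutil`'s plist output and would need checking on a real Mac; this is not code from any branch):

```python
import plistlib
import subprocess


def apfs_container_volumes():
    """Map each APFS container to the device identifiers of its volumes.

    NOTE: the key names below ("AllDisksAndPartitions", "APFSVolumes",
    "DeviceIdentifier") are assumptions about diskutil's plist output.
    """
    raw = subprocess.run(
        ["diskutil", "list", "-plist"],
        capture_output=True, check=True,
    ).stdout
    tree = plistlib.loads(raw)

    containers = {}
    for disk in tree.get("AllDisksAndPartitions", []):
        volumes = disk.get("APFSVolumes")
        if volumes:
            # Volumes of one container share free space, so they could be
            # collapsed into a single entry when summing disk usage.
            containers[disk["DeviceIdentifier"]] = [
                volume["DeviceIdentifier"] for volume in volumes
            ]
    return containers
```

Volumes grouped like this could then be merged into one entry per container before the sizes are added up.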
Thanks for digging this up again @ingrinder !
I'm not against parsing diskutil
output (JSON support ? 🙃), but here are my concerns about doing this, for this entry or any other one :
system_profiler
IIRC), so if we really need to do this, we might have to implement disk caching ;if
depending on the platform (if only M1 and onwards are concerned, we should take this into account too) ;Bye, thanks again 👋
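For what it's worth, a minimal sketch of per-run caching - just `functools.lru_cache` memoizing the raw command output; whether something more persistent is needed is an open question, and it obviously only helps within a single run:

```python
import functools
import subprocess


@functools.lru_cache(maxsize=None)
def _cached_output(*command):
    """Run an external command once per argument tuple and memoize its output."""
    return subprocess.check_output(command)


# A second call with the same arguments reuses the cached bytes
# instead of spawning diskutil again.
apfs_plist = _cached_output("diskutil", "apfs", "list", "-plist")
```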
I've written a few commits as the start of the `diskutil` approach to this issue - luckily, the Python standard library has a nice parser for Apple's property list formats. Looking at the tests, they still need modifying to pass when run on macOS too.
@a-hurst would it be possible for you to test the latest commit on `bug/macos-disk`? I don't have a Mac, so I can't say for definite whether using `diskutil` in this way is particularly slow or not (...or even whether it actually works at all, in fact).
@HorlogeSkynet I had a look at what neofetch and screenfetch do - it looks like they solved it by only ever considering `/` by default, unless you specify other directories. Perhaps this isn't such a bad idea after all...
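For reference, that default roughly boils down to something like this (using the standard-library `shutil.disk_usage`; how archey's Disk entry actually gathers the figures may differ):

```python
import shutil

# Only consider the root filesystem unless the user configures extra paths.
paths = ["/"]

total = used = 0
for path in paths:
    usage = shutil.disk_usage(path)
    total += usage.total
    used += usage.used

print(f"Disk: {used / 2**30:.1f} GiB used / {total / 2**30:.1f} GiB")
```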
Since macOS has supported APFS for a while now, this bug would also be present on any Mac using a disk with more than one APFS volume on it, so unfortunately the fix needs applying to all Macs rather than ARM-based ones only.
Anyway, let me know what you think, whether it's worth approaching it like this or maybe going the neofetch/screenfetch way.
Hey @ingrinder ! Many thanks for your findings and your work 👌
@a-hurst any chance you've been able to try the `bug/macos-disk` branch ? 🙂
If it actually fixes your issue, I will give a proper review to #116.
Thanks to both of you, bye 👋
@HorlogeSkynet Yep it seems to work now, thanks! Not sure where the "invalid field" warning is coming from, but it happens on the master branch of @ingrinder's fork too so I don't think it's a result of the patch.
Many thanks for your reply, I'll handle #116. About the warning, it may come from one of two bugs recently introduced on master, targeting the next version. I'll take a look at this too. Bye 👋
Closing here as #116 is ready for review/merge :pray:
Released on Thursday as v4.14.0.0 @a-hurst ! Thanks, bye :wave:
Describe the bug
On my M1 MacBook Pro, the latest archey4 from pip reports my total disk space as more than 5x what it actually is (5.4 TiB instead of 1 TB).
From a quick skim of the relevant source, it looks like archey is written to automatically add together the sizes of different volumes, and since `df` on macOS appears to report the same disk multiple times (presumably a quirk of how macOS handles logical volumes), my guess is that archey is accidentally summing these all together. Note that I haven't done anything fancy with my partitioning; my drive is exactly as it was formatted/partitioned from the factory.

Expected behavior
I expected it to correctly report the size of my internal drive.
Screenshots
Here's what `df` returns on my Mac. As you can see, there are a lot of exact duplicates of the same drive. I guess the simplest fix would be to ignore all disks that aren't mounted as `/` or don't have mount points in `/Volumes` (where normal internal/external partitions are mounted)?

Thanks in advance!
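To sketch that filtering idea (the `df -P -k` parsing here is deliberately simplified and is not how archey actually reads mount points):

```python
import subprocess


def mounts_of_interest():
    """Keep only the root filesystem and mounts under /Volumes.

    Simplified illustration: mount points containing spaces and other
    df quirks are ignored here.
    """
    lines = subprocess.run(
        ["df", "-P", "-k"], capture_output=True, text=True, check=True,
    ).stdout.splitlines()[1:]  # skip the header line

    kept = []
    for line in lines:
        fields = line.split()
        mount_point = fields[-1]
        if mount_point == "/" or mount_point.startswith("/Volumes/"):
            kept.append(fields)
    return kept
```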
Environment
Additional context