ucsf-wynton / wynton-website-hpc

The Official Wynton HPC User Website
https://wynton.ucsf.edu/hpc/

SPECS: Update total home and group storage sizes #109

Open HenrikBengtsson opened 1 year ago

HenrikBengtsson commented 1 year ago

The storage-size info on https://wynton.ucsf.edu/hpc/about/specs.html#overview is outdated. Specifically, make sure that the total home and group storage sizes are up-to-date.

The file to update is https://github.com/ucsf-wynton/wynton-website-hpc/blob/master/docs/_data/specs.yml.

HenrikBengtsson commented 1 year ago

This information can be queried automatically using:

df -P --si -h /wynton/{home,group,scratch}/
Filesystem        Size Used Available Use% Mounted on
beegfs_nodev#11   770T 336T      434T  44% /wynton/home
beegfs_nodev#12   6.5P 4.5P        2P  70% /wynton/group
beegfs_nodev#10   703T 421T      282T  60% /wynton/scratch
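Such a query could feed the website automatically. A hedged sketch of the parsing step (the home_size/group_size/scratch_size keys are made up, not the actual specs.yml field names; the sample text mirrors the df output above, which on Wynton would come from piping the real df command):

```shell
#!/bin/sh
# Sketch: turn `df -P` output into key/value lines that could feed
# docs/_data/specs.yml. The sample mirrors the output above; on Wynton
# you would pipe `df -P --si /wynton/{home,group,scratch}/` instead.
df_sample='Filesystem        Size Used Available Use% Mounted on
beegfs_nodev#11   770T 336T      434T  44% /wynton/home
beegfs_nodev#12   6.5P 4.5P        2P  70% /wynton/group
beegfs_nodev#10   703T 421T      282T  60% /wynton/scratch'

sizes=$(printf '%s\n' "$df_sample" | awk 'NR > 1 {
  n = split($6, parts, "/")            # /wynton/home -> "home"
  printf "%s_size: %s\n", parts[n], $2 # $2 is the total-size column
}')
printf '%s\n' "$sizes"
```

Using `-P` keeps each filesystem on a single line, which makes the awk parsing safe even for long filesystem names.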
HenrikBengtsson commented 1 year ago

Website updated, but this should be automated. Also, the above captures the size of any storage under /wynton/protected/. @ellestad, how much "home", "group", and "scratch" storage is under /wynton/protected/?

ellestad commented 1 year ago

Not sure what you mean, /wynton/protected/[home,group,scratch,project] use the same storage pools as /wynton/[home,group,scratch]. A df on pdev1 looks like the following:

[eje@pdev1 ~]$ df -h
Filesystem        Size Used Available Use% Mounted on
devtmpfs          125G    0      125G   0% /dev
tmpfs             125G  52K      125G   1% /dev/shm
tmpfs             125G 4.1G      121G   4% /run
tmpfs             125G    0      125G   0% /sys/fs/cgroup
/dev/sda1          31G  22G      9.4G  71% /
/dev/sdc1         1.1T 244G      873G  22% /usr/local
/dev/sdb1         1.1T  14G      1.1T   2% /scratch
/dev/sda3           8G 807M      7.2G  10% /tmp
/dev/sda2          15G 2.1G       13G  14% /var
beegfs_nodev#10   703T 419T      283T  60% /wynton/scratch
beegfs_nodev#11   770T 335T      434T  44% /wynton/home
beegfs_nodev#12   6.5P 4.5P        2P  70% /wynton/group
[...]
HenrikBengtsson commented 1 year ago

So, calling, for instance:

$ df -P --si -h /wynton/home/
Filesystem        Size Used Available Use% Mounted on
beegfs_nodev#11   770T 336T      434T  44% /wynton/home

is that total size of 770 TB the total amount of available storage for both /wynton/home/ and /wynton/protected/home/? I'm trying to find a way for a non-privileged cron job to update the online specs.
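For reference, such a job might be wired up with a crontab entry along these lines (purely hypothetical: neither the script name, its path, nor the schedule exists yet):

```shell
# Hypothetical crontab fragment: re-query df nightly and refresh the
# storage figures in docs/_data/specs.yml. update-wynton-specs.sh is a
# made-up name for a script that does not exist yet.
# 0 3 * * * /usr/local/bin/update-wynton-specs.sh
```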

ellestad commented 1 year ago

"that total size of 770 TB is the total amount of available storage for both /wynton/home/ and /wynton/protected/home/?"

I don't know how that custom python df works. My assumption would be, it is really running some beegfs-ctl function which gets the space usage for the pools and reformats the output. You'd have to ask @gregcouch. As I understand it, /wynton/protected/group, /wynton/protected/project, and /wynton/group all use the same storage pool (12), so the python df against /wynton/group should show the total usage for all three. But, again, double check with Greg.

HenrikBengtsson commented 1 year ago

Thanks.

I don't know how that custom python df works.

Oh, I completely forgot that df is a home-made version, i.e.

$ df | grep /wynton
beegfs_nodev#10                                       755291494809  447430315212  307861179596  59% /wynton/scratch
beegfs_nodev#11                                       827642924236  361148553625  466494370611  44% /wynton/home
beegfs_nodev#12                                      7133330826854 4986466258124 2146864568729  70% /wynton/group

vs

$ /bin/df | grep /wynton
beegfs_nodev                                       8716265818112 5795076119552 2921189698560  67% /wynton
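As a sanity check that the two outputs agree, the raw block counts convert to the same figures shown earlier (a sketch, assuming the custom df reports 1 KiB blocks and labels binary TiB units as "T"):

```shell
# Sketch: convert the custom df's raw 1K-block counts into the "T" values
# shown earlier. Assumption: blocks are 1 KiB and "T" means binary TiB.
blocks_to_tib() {
  awk -v b="$1" 'BEGIN { printf "%.1f\n", b / (1024 ^ 3) }'  # 1 TiB = 2^30 KiB
}
blocks_to_tib 827642924236   # /wynton/home total block count from above
```

Under that assumption, 827642924236 blocks works out to roughly 770.8 TiB, consistent with the 770T reported for /wynton/home.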

Do you happen to know how to list the total size of a BeeGFS storage pool? Can beegfs-ctl be used for that?

HenrikBengtsson commented 1 year ago

@gregcouch, can you help answer this?

gregcouch commented 1 year ago

You can use beegfs-ctl --listtargets --nodetype=storage --storagepools --spaceinfo. But I recommend using the /usr/local/bin/df output since that gives you the total size already in "human readable" values and shows where the storage group is mounted, except it really isn't mounted, our df just pretends it is. And yes, /wynton/home and /wynton/protected/home share the same storage pool. Likewise for scratch. /wynton/group is shared with /wynton/protected/group and /wynton/protected/project.

HenrikBengtsson commented 1 year ago

Next actions, based on Greg's answers:

https://github.com/ucsf-wynton/wynton-website-hpc/blob/542eec1c10dc1222f6c2e2aea5dddb3f9fc8a497/docs/_data/specs.yml#L29-L33

https://github.com/ucsf-wynton/wynton-website-hpc/blob/542eec1c10dc1222f6c2e2aea5dddb3f9fc8a497/docs/Makefile#L73

HenrikBengtsson commented 1 year ago

You can use beegfs-ctl --listtargets --nodetype=storage --storagepools --spaceinfo. But I recommend using the /usr/local/bin/df output since that gives you the total size already in "human readable" values and shows where the storage group is mounted, except it really isn't mounted, our df just pretends it is. And yes, /wynton/home and /wynton/protected/home share the same storage pool. Likewise for scratch. /wynton/group is shared with /wynton/protected/group and /wynton/protected/project.

@gregcouch, the way I read your answer is that I should be able to use df to get the size of "/wynton/home + /wynton/protected/home" and "/wynton/group + /wynton/protected/group + /wynton/protected/project". However, it does not look like df makes such distinctions:

$ /usr/local/bin/df -h /wynton/home 2> /dev/null                                                                                    
Filesystem     Size Used Available Use% Mounted on
beegfs_nodev    10P 5.8P      4.2P  59% /wynton

$ /usr/local/bin/df -h /wynton/group 2> /dev/null
Filesystem     Size Used Available Use% Mounted on
beegfs_nodev    10P 5.8P      4.2P  59% /wynton

It's all lumped together, reporting on the total /wynton BeeGFS storage.
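Given that, a non-privileged cron job would at least want to detect this lumping before overwriting the specs. A minimal guard sketch (the sample line stands in for the last line of the df output above; the warning text is made up):

```shell
# Guard sketch: only trust the df figure when it reports a per-pool mount
# point such as /wynton/home, not the lumped /wynton total seen above.
line='beegfs_nodev    10P 5.8P      4.2P  59% /wynton'
mount_point=$(printf '%s\n' "$line" | awk '{ print $6 }')
if [ "$mount_point" = "/wynton" ]; then
  echo "df lumps all pools together here; skipping specs update" >&2
fi
```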