Closed: aibsen closed this issue 3 months ago
[ceph: root@sv-hdd-12-0 /]# ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
hdd    3.6 PiB  1.7 PiB  1.9 PiB   1.9 PiB      52.35
mds     20 TiB   19 TiB  905 GiB   905 GiB       4.34
ssd    151 TiB   58 TiB   93 TiB    93 TiB      61.37
TOTAL  3.8 PiB  1.8 PiB  2.0 PiB   2.0 PiB      52.45

--- POOLS ---
POOL                        ID   PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
.mgr                         1     1  615 MiB      257  1.8 GiB      0     17 TiB
volumes                      2  4096   31 TiB    8.21M   93 TiB  64.70     17 TiB
images                       3    32   92 GiB   11.88k  275 GiB   0.02    531 TiB
volumes-hdd                  6  4096  566 TiB  149.49M  1.7 PiB  51.60    531 TiB
cephfs_data                  8  4096   67 TiB    1.04G  202 TiB  11.28    531 TiB
cephfs_metadata              9   128  290 GiB    3.11M  870 GiB   4.37    6.2 TiB
cephfs_data_ec              10    32      0 B        0      0 B      0     25 TiB
.rgw.root                   14    32  2.1 KiB        5   60 KiB      0     17 TiB
default.rgw.log             15    32   13 KiB      209  436 KiB      0     17 TiB
default.rgw.control         16    32      0 B        8      0 B      0     17 TiB
default.rgw.meta            17     8  107 KiB      313  2.2 MiB      0     17 TiB
default.rgw.buckets.index   18     8   13 MiB      198   39 MiB      0     17 TiB
default.rgw.buckets.data    19    32   16 GiB   85.32k   49 GiB      0    531 TiB
default.rgw.buckets.non-ec  20    32  105 KiB        1  328 KiB      0    531 TiB
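As a quick headroom check against the `ceph df` output above (a sketch only; it assumes the 'butler-data' share is backed by the cephfs_data pool, whose MAX AVAIL is reported as 531 TiB):

```shell
# Assumption: 'butler-data' lives in the cephfs_data pool (not confirmed by the output above).
MAX_AVAIL_TIB=531   # MAX AVAIL reported for cephfs_data by `ceph df`
EXTRA_TIB=150       # requested share-size increase
if [ "$EXTRA_TIB" -lt "$MAX_AVAIL_TIB" ]; then
  echo "cephfs_data has headroom for +${EXTRA_TIB} TiB"
fi
```

Note that MAX AVAIL already accounts for replication, so the 150 TiB increase fits comfortably in the reported 531 TiB.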
Current share quota for the RSP project:
(openstack-config) [stack@sv-admin-0 openstack-config]$ openstack share quota show RSP
+-----------------------+----------------------------------+
| Field | Value |
+-----------------------+----------------------------------+
| gigabytes | 184000 |
| id | 5b5102968e5347ad98676ea42b6519df |
| per_share_gigabytes | -1 |
| replica_gigabytes | 184000 |
| share_group_snapshots | 50 |
| share_groups | 50 |
| share_networks | 10 |
| share_replicas | 100 |
| shares | 50 |
| snapshot_gigabytes | 184000 |
| snapshots | 50 |
+-----------------------+----------------------------------+
Increase the size of the CephFS share named 'butler-data' in the RSP project by 150 TB. This will let us transfer more of DP0.2 and test the ingestion process as the data volume grows by an order of magnitude.
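A rough sketch of the steps involved: Manila's `gigabytes` quota is counted in GiB, so treating the requested 150 TB as 150 TiB (an assumption), the 184000 GiB RSP quota shown above must be raised before the share can be extended. The project ID and final share size below are placeholders, and the `openstack share quota set` / `openstack share resize` commands assume python-manilaclient's OpenStack CLI plugin is available:

```shell
# Manila quotas and share sizes are in GiB; "150 TB" is treated as 150 TiB here (assumption).
EXTRA_GIB=$((150 * 1024))                         # 153600 GiB
CURRENT_QUOTA_GIB=184000                          # from `openstack share quota show RSP` above
NEW_QUOTA_GIB=$((CURRENT_QUOTA_GIB + EXTRA_GIB))
echo "raise gigabytes quota to ${NEW_QUOTA_GIB}"  # 337600

# Then, with placeholders filled in for the RSP project:
# openstack share quota set <RSP-project-id> --gigabytes "$NEW_QUOTA_GIB"
# openstack share resize butler-data <current-share-size-GiB + 153600>
```

The same arithmetic applies to `snapshot_gigabytes` and `replica_gigabytes` if snapshots or replicas of the enlarged share are expected to stay within quota.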