Open xemul opened 2 years ago
@xemul
What is the recommendation for such a case? A manually crafted io-properties.yaml file? For example, for a 2TB gp3 volume with 16K IOPS? https://aws.amazon.com/ebs/general-purpose/
disks:
  - mountpoint: /var/lib/scylla
    read_iops: 16000
    read_bandwidth: 262607760
    write_iops: 16000
    write_bandwidth: 265524816
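As a sanity check (not part of the original thread), the bandwidth figures above are in bytes per second and can be converted to MB/s for comparison against the throughput actually provisioned on the gp3 volume; a quick sketch:

```python
# Sanity check: convert the io-properties.yaml bandwidth figures
# (bytes per second) to decimal MB/s, the unit AWS uses to quote
# gp3 throughput. The figures are the ones from the yaml above.
def to_mb_per_s(bytes_per_s: int) -> float:
    """Convert bytes/s to decimal MB/s."""
    return bytes_per_s / 1_000_000

print(f"read:  {to_mb_per_s(262_607_760):.1f} MB/s")  # → read:  262.6 MB/s
print(f"write: {to_mb_per_s(265_524_816):.1f} MB/s")  # → write: 265.5 MB/s
```

If these numbers exceed what the volume is provisioned for, that points at the measurement (burst or caching) rather than the disk's sustained capability.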
We have such a case, but it keeps failing because of this iotune overshooting.
These numbers are burst values; on a sustained workload the drive won't perform like that. I'd check it with fio. Well -- with diskplorer :D #1297 was supposed to facilitate that.
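For reference, a minimal fio job for measuring sustained (rather than burst) random-read performance might look like the following. This is only an illustrative sketch; the directory, runtime, block size, and queue depth are assumptions, not values from the thread:

```ini
# Illustrative fio job: sustained random reads against the data directory.
# A long runtime helps outlast any burst credits the volume may have.
[global]
directory=/var/lib/scylla
size=1g
direct=1
time_based=1
runtime=300

[sustained-randread]
rw=randread
bs=4k
iodepth=64
```

diskplorer goes further by sweeping concurrency levels and plotting latency against IOPS, which is closer to what iotune needs.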
gp3 does not use burst though. That's why I was wondering if it's something with the node limits.
Closing as there is no interest in EBS. If you think differently, please reopen.
Lack of interest by one user doesn't mean the bug is fixed.
In Scylla issue #9906 there's such an io-properties.yaml file. However, the drive was created with the default bandwidth, which is 128 MB/s. The same number also shows up when re-running iotune by hand.