Closed: jeroenmaelbrancke closed this issue 5 years ago
I think this is exactly the expected performance:
Fragments on the HDDs are 2 MB, so at roughly 100 MB/s of sequential HDD throughput we can optimistically fetch ~50 fragments/s, i.e. ~20 ms of latency per fragment.
According to fio the average latency is 21388.24 µs, which is only slightly more than 20 ms.
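The arithmetic behind that estimate can be checked directly. The ~100 MB/s sequential HDD throughput is an assumption (it is what 50 fragments/s of 2 MB implies); the fragment size and the fio latency are from the report:

```python
# Back-of-envelope check of the expected per-fragment latency.
fragment_size_mb = 2          # fragment size on the HDD backend (from the report)
hdd_throughput_mb_s = 100     # assumed sequential HDD read throughput

fragments_per_s = hdd_throughput_mb_s / fragment_size_mb   # 50 fragments/s
latency_ms = 1000 / fragments_per_s                        # 20 ms per fragment

measured_us = 21388.24        # avg latency reported by fio
print(f"expected: {latency_ms:.0f} ms, measured: {measured_us / 1000:.1f} ms")
# → expected: 20 ms, measured: 21.4 ms
```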
FWIW, in this case performance would be better if no fragment cache were configured: the proxy would then do partial reads from the HDDs too, whereas now it always fetches full fragments.
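The cost of always fetching full fragments follows from the sizes in the setup: serving a single 4 KiB cluster read by pulling a whole 2 MiB fragment is a 512x read amplification:

```python
# Read amplification when the proxy fetches a full fragment to serve
# one cluster-sized read (sizes taken from the setup in this report).
fragment_size_kib = 2 * 1024   # 2 MiB fragment on the HDD backend
cluster_size_kib = 4           # 4 KiB vpool cluster size

amplification = fragment_size_kib // cluster_size_kib
print(f"{amplification}x read amplification")   # → 512x read amplification
```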
@jeroenmaelbrancke isn't the partial read from HDD exactly the path you wanted to test by setting the asd-slowlyness?
When all the ASDs of the cache backend are online we could read at 4530 KiB/s. From the moment we introduce latency (asd-set-slowness) on the cache ASDs, the read performance drops to 187 KiB/s.
I'm not sure if I configured something wrong, but this is not what I expected.
The environment is still online and the proxies are set to debug level. IPs: 10.100.189.31-33
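To put the regression in perspective, the two throughput figures above amount to roughly a 24x drop:

```python
# Quantify the read-throughput regression reported above.
baseline_kib_s = 4530   # all cache ASDs online
slowed_kib_s = 187      # with asd-set-slowness applied to the cache ASDs

drop_factor = baseline_kib_s / slowed_kib_s
print(f"throughput dropped {drop_factor:.1f}x")   # → throughput dropped 24.2x
```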
packages:
setup: 1 vpool, 1 cache backend, 1 HDD backend
cache backend: 1,0,1,1; fragment size: 16 MiB; no compression; no encryption
HDD backend: 1,2,2,2; fragment size: 2 MiB; no compression; encryption: aes-ctr-256
vpool: 2 proxies; write buffer: 1 GiB; SCO size: 4 MiB; cluster size: 4 KiB; volume write buffer: 512 MiB; DTL: sync; transport: TCP; fragment cache: read/write, quota 10 GiB; no block cache
Added the following to the volumedriver config: backend_interface_partial_read_retry_interval_msecs: 10, backend_interface_partial_read_timeout_msecs: 250
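For reference, those two keys would sit alongside the other backend_interface settings in the volumedriver JSON config. A minimal fragment (surrounding keys omitted) might look like:

```json
{
    "backend_interface_partial_read_retry_interval_msecs": 10,
    "backend_interface_partial_read_timeout_msecs": 250
}
```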
fio test:
Without slowness:
With slowness set to 0.5,0.5:
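The fio invocations and their output are not included above. For reproduction purposes, a random-read job matching the 4 KiB cluster size might look like the sketch below; the job parameters and the device path are assumptions, not taken from the report:

```ini
[global]
ioengine=libaio
direct=1
bs=4k            ; matches the vpool cluster size
rw=randread      ; assumed access pattern
runtime=60
time_based=1

[vdisk-read]
filename=/dev/vdb   ; assumed device path of the exported vdisk
```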