Closed: wuhongsong closed this issue 2 years ago.
curvefs version: branch perftest. Setup: 3 servers, 3 metaservers (each on its own SSD); metaserver_log_level = 0; client_log_level = 0; cto = false. The result shows a small drop in write bandwidth: 444 MB -> 411 MB.
enableSumInDir: false
enableSumInDir: true
As you can see, a write can only start after the previous write has ended, so it may wait for a moment.
The write performance actually drops when enableSumInDir is enabled: IOPS dropped from about 2900 to about 1900.
The reason this problem was not found earlier is that the fio job used filesize=10G and runtime=180, so most of the later I/O consisted of overwrites, and overwrites do not update the summary xattr.
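A hypothetical sketch (not actual curvefs code) of why overwrites hide the cost: the per-directory summary tracks accumulated bytes, and only a write that extends the file changes that count, so a pure in-place overwrite can skip the summary-xattr RPC entirely.

```python
# Hypothetical helper, not curvefs source: decide how many bytes a write
# adds to a file. A pure overwrite inside the current file size adds
# nothing, so no summary-xattr update is needed for it.

def summary_delta(old_size: int, offset: int, length: int) -> int:
    """Bytes added to the file by this write (0 for a pure overwrite)."""
    new_size = max(old_size, offset + length)
    return new_size - old_size

# A 10 GiB file that fio keeps rewriting in place:
print(summary_delta(10 << 30, 4096, 4096))      # 0 -> no xattr RPC needed
print(summary_delta(10 << 30, 10 << 30, 4096))  # 4096 -> append extends the file
```

This is why a long fio run with a fixed filesize stops paying the summary-update cost once the file is fully written and the workload turns into overwrites.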
After adding some metrics and retesting, the cause of the performance drop is that the asynchronous RPC requests must be kept in order: the next request has to wait until the previous one completes.
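A rough throughput model (my own illustration, not curvefs code) shows why this ordering constraint caps IOPS: once the "asynchronous" updates are serialized, throughput is bounded by one request per round-trip latency, no matter how deep the queue is.

```python
# Simplified model, not curvefs source: compare the throughput ceiling of
# serialized vs pipelined asynchronous RPCs. The latency value below is
# an assumed figure chosen for illustration.

def serialized_ops_per_sec(latency_s: float) -> float:
    """Ceiling when request N+1 must wait for request N to complete."""
    return 1.0 / latency_s

def pipelined_ops_per_sec(latency_s: float, depth: int) -> float:
    """Ceiling with `depth` requests in flight concurrently."""
    return depth / latency_s

# With an assumed ~0.5 ms per RPC, strict ordering caps the update path
# near 2000 ops/s, while pipelining would scale with queue depth:
print(serialized_ops_per_sec(0.0005))    # 2000.0
print(pipelined_ops_per_sec(0.0005, 8))  # 16000.0
```

Under this model, serializing the summary updates puts a per-client ceiling in the low thousands of ops/s, which is consistent in scale with the observed fall from about 2900 to about 1900 IOPS.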
Versions
curve-fuse: perftest-055b65c
container_image: harbor.cloud.netease.com/curve/curvefs:perftest-055b65c
container_pid: host
log_dir: /data/logs/client
client.loglevel: 8
fuseClient.cto: false
data_dir: /data/curvefs/etcd/clienttaos
diskCache.avgFlushBytes: 0
diskCache.burstFlushBytes: 0
diskCache.avgReadFileBytes: 0
diskCache.diskCacheType: 2
enableSumInDir: false
mdsOpt.rpcRetryOpt.addrs: 10.166.16.60:6700,10.166.16.61:6700,10.166.16.62:6700
core_dir: /core
s3.throttle.iopsTotalLimit: 2500
s3.throttle.bpsTotalMB: 5000
diskCache.threads: 15
s3.chunkFlushThreads: 15
diskCache.maxUsableSpaceBytes: 214748364800