Hi, I can't say exactly where the performance drawback is coming from; from what I know, it could come from various sources.
I assume that you run via the Docker managed plugin; have you tried via the CLI (not a plugin running in a container)? https://github.com/sapk/docker-volume-gluster#legacy-plugin-installation This would eliminate any limitation imposed by the gluster plugin running in a container.
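For reference, the legacy setup suggested above would look roughly like the sketch below. This is based on the linked README section, so treat it as an assumption: the exact daemon invocation may differ by version, and "<server>" is a placeholder for the Gluster server address.

# Build and install the standalone driver (runs on the host,
# outside Docker's managed-plugin system).
go get -u github.com/sapk/docker-volume-gluster

# Start the driver daemon on the host (exact subcommand per the README).
docker-volume-gluster daemon &

# Create a volume against the standalone driver and use it in a container.
# "<server>" is a placeholder for the Gluster server address.
docker volume create --driver gluster --opt voluri="<server>:gv0" gv0
docker run --rm -it -v gv0:/mnt/gv0 ubuntu bash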
I am not sure what happened with my first test, but I did a retry today with all the different variations. All the results are now in the same range. If you want, you can share that info with the community.
Test Scenario:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=gv0-plugin --filename=gv0-native --directory=/mnt/plugin-gv0/ --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
Bind Mount (-v /mnt/gv0/:/mnt/gv0)
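For this variant, fio presumably ran inside a container with the host-side fuse mount bind-mounted in. A hypothetical invocation (the ubuntu image is an assumption; any image with fio installed would do):

# Pass the host-side gluster mount straight through to the container,
# then run the fio test above against the bind-mounted path.
docker run --rm -it -v /mnt/gv0/:/mnt/gv0 ubuntu bash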
gv0-native: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 1 (f=1): [m(1)] [100.0% done] [66104KB/21908KB/0KB /s] [16.6K/5477/0 iops] [eta 00m:00s]
gv0-native: (groupid=0, jobs=1): err= 0: pid=566: Thu Dec 7 22:55:45 2017
read : io=3071.7MB, bw=65519KB/s, iops=16379, runt= 48007msec
write: io=1024.4MB, bw=21849KB/s, iops=5462, runt= 48007msec
cpu : usr=8.96%, sys=48.35%, ctx=1184441, majf=0, minf=8
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=786347/w=262229/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: io=3071.7MB, aggrb=65519KB/s, minb=65519KB/s, maxb=65519KB/s, mint=48007msec, maxt=48007msec
WRITE: io=1024.4MB, aggrb=21849KB/s, minb=21849KB/s, maxb=21849KB/s, mint=48007msec, maxt=48007msec
Old School Plugin usage
root@27643d5b64e0:/mnt# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=gv0-plugin --filename=gv0-native --directory=/mnt/ --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
gv0-plugin: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.2.10
Starting 1 process
Jobs: 1 (f=1): [m(1)] [100.0% done] [54708KB/18584KB/0KB /s] [13.7K/4646/0 iops] [eta 00m:00s]
gv0-plugin: (groupid=0, jobs=1): err= 0: pid=577: Thu Dec 7 23:01:32 2017
read : io=3071.7MB, bw=65857KB/s, iops=16464, runt= 47761msec
write: io=1024.4MB, bw=21962KB/s, iops=5490, runt= 47761msec
cpu : usr=9.98%, sys=47.28%, ctx=1183727, majf=0, minf=8
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=786347/w=262229/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: io=3071.7MB, aggrb=65856KB/s, minb=65856KB/s, maxb=65856KB/s, mint=47761msec, maxt=47761msec
WRITE: io=1024.4MB, aggrb=21961KB/s, minb=21961KB/s, maxb=21961KB/s, mint=47761msec, maxt=47761msec
New Style Plugin
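For context, the "new style" variant is presumably the managed (v2) Docker plugin; a sketch of its setup per the project README, where "<server>" and the volume name are assumptions, not taken from the logs:

# Install the managed plugin and create a volume with it.
docker plugin install sapk/plugin-gluster
docker volume create --driver sapk/plugin-gluster --opt voluri="<server>:gv0" plugin-gv0
docker run --rm -it -v plugin-gv0:/mnt/plugin-gv0 ubuntu bash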
gv0-new-plugin-style: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 1 (f=1): [m(1)] [100.0% done] [64236KB/21320KB/0KB /s] [16.6K/5330/0 iops] [eta 00m:00s]
gv0-new-plugin-style: (groupid=0, jobs=1): err= 0: pid=555: Thu Dec 7 23:19:45 2017
read : io=3071.7MB, bw=65578KB/s, iops=16394, runt= 47964msec
write: io=1024.4MB, bw=21869KB/s, iops=5467, runt= 47964msec
cpu : usr=9.88%, sys=46.06%, ctx=1184805, majf=0, minf=8
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=786347/w=262229/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: io=3071.7MB, aggrb=65578KB/s, minb=65578KB/s, maxb=65578KB/s, mint=47964msec, maxt=47964msec
WRITE: io=1024.4MB, aggrb=21868KB/s, minb=21868KB/s, maxb=21868KB/s, mint=47964msec, maxt=47964msec
Thanks for the insight.
I have referenced this issue in the readme.
I quickly did a performance test. For reference, I did the same test, but this time against a locally mounted gluster host volume (a fuse mount on the host).
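The host-side reference mount would look something like this (a sketch; it requires the glusterfs fuse client, and "<server>" is a placeholder for a node of the cluster):

# Fuse-mount the gluster volume on the host for the reference run.
mount -t glusterfs <server>:/gv0 /mnt/gv0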
The difference is 68MB/s to 5MB/s, so it is roughly a factor of 10 between the two results. Do you have an idea if and how the driver can be improved so it is on par with fuse-mounted glusterfs?