intel-cloud / cosbench

a benchmark tool for cloud object storage service

If workers > drivers, the read success ratio is low; as workers increase, the success ratio drops further #348

Closed JYang1986 closed 7 years ago

JYang1986 commented 7 years ago

When I test the following configuration file with COSBench, using the Amazon S3 API and 3 drivers, the object storage server is Minio.

| # | Driver Name | URL |
|---|---|---|
| 1 | driver1 | http://192.168.1.250:18088/driver |
| 2 | driver2 | http://192.168.1.251:18088/driver |
| 3 | driver3 | http://192.168.1.252:18088/driver |

```xml
<?xml version="1.0" encoding="UTF-8" ?>
```

When workers <= drivers (tested with 2 and 3 workers), the read success ratio is 100%:

| Op-Type | Op-Count | Byte-Count | Avg-ResTime | Avg-ProcTime | Throughput | Bandwidth | Succ-Ratio |
|---|---|---|---|---|---|---|---|
| op1: init-write | 0 ops | 0 B | N/A | N/A | 0 op/s | 0 B/S | N/A |
| op1: prepare-write | 1 kops | 256 MB | 1020.49 ms | 962.31 ms | 200.85 op/s | 51.42 MB/S | 100% |
| op1: read | 33.32 kops | 8.53 GB | 10.78 ms | 7.38 ms | 277.68 op/s | 71.09 MB/S | **100%** |
| op1: cleanup-delete | 1 kops | 0 B | 105.45 ms | 105.45 ms | 2179.4 op/s | 0 B/S | 100% |
| op1: dispose-delete | 0 ops | 0 B | N/A | N/A | 0 op/s | 0 B/S | N/A |

| ID | Name | Works | Workers | Op-Info | State |
|---|---|---|---|---|---|
| w54-s1-init | init | 1 wks | 200 wkrs | init | completed |
| w54-s2-prepare | prepare | 1 wks | 200 wkrs | prepare | completed |
| w54-s3-get | main | 1 wks | 3 wkrs | read | completed |
| w54-s4-cleanup | cleanup | 1 wks | 200 wkrs | cleanup | completed |
| w54-s5-dispose | dispose | 1 wks | 200 wkrs | dispose | completed |

If workers > drivers, the read success ratio is low, and it drops further as workers increase:

| Workers | Drivers | Op-Count | Byte-Count | Avg-ResTime | Avg-ProcTime | Throughput | Bandwidth | Succ-Ratio |
|---|---|---|---|---|---|---|---|---|
| 5 | 3 | 29.12 kops | 7.45 GB | 13.73 ms | 9.79 ms | 242.64 op/s | 62.11 MB/S | **69.4%** |
| 9 | 3 | 8.53 kops | 2.18 GB | 82.51 ms | 74.12 ms | 71.08 op/s | 18.2 MB/S | **53.16%** |
| 30 | 3 | 5.33 kops | 1.36 GB | 288.26 ms | 269.73 ms | 44.6 op/s | 11.42 MB/S | **28.12%** |

At workers=5 the read stage (`w55-s3-get main 1 wks 5 wkrs read`) ended in state **failed**.

## my COSBench conf

**controller:**

```ini
[controller]
drivers = 3
log_level = INFO
log_file = log/system.log
archive_dir = archive

[driver1]
name = driver1
url = http://192.168.1.250:18088/driver

[driver2]
name = driver2
url = http://192.168.1.251:18088/driver

[driver3]
name = driver3
url = http://192.168.1.252:18088/driver
```

**driver:**

```ini
[driver]
name=127.0.0.1:18088
url=http://127.0.0.1:18088/driver
```

## my Environment

```console
[root@node0 ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@node76 ~]# uname -a
Linux node76 3.10.0-514.6.1.el7.x86_64 #1 SMP Wed Jan 18 13:06:36 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
```

Minio: master branch compiled on 03.16.2017:

```console
[root@node0 minio-20170316-124606]# ./minio server minio_test/ -C .
Migration from version '13' to '14' completed successfully.

Endpoint:  http://192.168.1.250:9000  http://192.168.1.100:9000  http://127.0.0.1:9000
AccessKey: test
SecretKey: test
Region:    us-east-1
SQS ARNs:

Browser Access:
   http://192.168.1.250:9000  http://192.168.1.100:9000  http://127.0.0.1:9000

Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://192.168.1.250:9000 FusionNAS FusionNAS

Object API (Amazon S3 compatible):
   Go:         https://docs.minio.io/docs/golang-client-quickstart-guide
   Java:       https://docs.minio.io/docs/java-client-quickstart-guide
   Python:     https://docs.minio.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide

Drive Capacity: 404 GiB Free, 440 GiB Total
```

ext4 filesystem:

```console
[root@node0 minio_test]# pwd
/home/minio-20170316-124606/minio_test
[root@node0 minio_test]# mount
/dev/sda2 on /home type ext4 (rw,relatime,data=ordered)
[root@node0 minio_test]# df -h
/dev/sda2  441G  15G  404G  4% /home
```

COSBench version: 0.4.2.c4
Wilhelmshaven commented 7 years ago

According to my testing experience on Ceph, when the COSBench read success rate is not 100% the symptoms vary, but they can usually be rooted in one cause: the read task is trying to read a non-existent object. Also, I suggest you paste the workload from $COSBenchDIR/archive/$your_work/workload-config.xml instead of the XML you uploaded, because it contains the full configuration. For your reference.
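To illustrate the point above: a read stage can only succeed if it selects object names from the same range the prepare stage wrote. Below is a minimal sketch of such a workload, following the COSBench workload-config format; the endpoint, keys, object counts, and sizes are assumptions for illustration, not the reporter's actual workload:

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<workload name="read-check" description="prepare and read over the same object range">
  <storage type="s3"
           config="accesskey=test;secretkey=test;endpoint=http://192.168.1.250:9000" />
  <workflow>
    <workstage name="prepare">
      <!-- writes objects 1..1000 into container 1 -->
      <work type="prepare" workers="4"
            config="containers=r(1,1);objects=r(1,1000);sizes=c(256)KB" />
    </workstage>
    <workstage name="main">
      <!-- reads must pick uniformly from the SAME range written above -->
      <work name="read" workers="5" runtime="120">
        <operation type="read" config="containers=u(1,1);objects=u(1,1000)" />
      </work>
    </workstage>
    <workstage name="cleanup">
      <work type="cleanup" workers="4"
            config="containers=r(1,1);objects=r(1,1000)" />
    </workstage>
  </workflow>
</workload>
```

If the read stage's `objects=u(...)` range extends past what prepare actually wrote, those reads return 404 and the success ratio falls as more workers draw from the bad range.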

JYang1986 commented 7 years ago

But when I set 100 drivers for 100 workers (one driver per worker), the read success is 100%. OK, I will try later. BTW, do you know how to write a 10 GB file with COSBench? The chunk flag doesn't work. - -!

Wilhelmshaven commented 7 years ago

> But when I set 100 drivers for 100 workers (one driver per worker), the read success is 100%. OK, I will try later. BTW, do you know how to write a 10 GB file with COSBench? The chunk flag doesn't work. - -!

Never tested that... A colleague told me he tested the chunk flag and it's useless: the file can be uploaded successfully, but it can't be seen in the bucket, so downloads would fail.

We ran large-file tests with our own scripts, using boto or boto3.
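For reference, a large-file upload like that can be sketched with boto3's managed transfer, which splits big objects into multipart uploads automatically. This is a sketch under assumptions, not the scripts mentioned above: the endpoint, bucket, and credentials are placeholders, and the 64 MiB part size is an arbitrary choice.

```python
def plan_parts(total_bytes, part_size=64 * 1024 * 1024):
    """Number of multipart parts a transfer of total_bytes needs
    at the given part size (ceiling division)."""
    return (total_bytes + part_size - 1) // part_size


def upload_large(path, bucket, key, endpoint="http://192.168.1.250:9000"):
    """Upload one large file to an S3-compatible endpoint (e.g. Minio).
    boto3 is imported lazily so the helper above stays dependency-free."""
    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client(
        "s3",
        endpoint_url=endpoint,          # placeholder Minio endpoint
        aws_access_key_id="test",       # placeholder credentials
        aws_secret_access_key="test",
    )
    cfg = TransferConfig(
        multipart_threshold=64 * 1024 * 1024,  # switch to multipart above 64 MiB
        multipart_chunksize=64 * 1024 * 1024,  # 64 MiB parts
        max_concurrency=4,                     # parallel part uploads
    )
    # upload_file performs the multipart upload under the hood
    s3.upload_file(path, bucket, key, Config=cfg)


# A 10 GiB object at 64 MiB parts needs 160 parts:
print(plan_parts(10 * 1024**3))  # → 160
```

S3 caps multipart uploads at 10,000 parts, so for very large objects the part size may need to grow; `plan_parts` makes it easy to check the layout before uploading.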