vpuliyal closed this issue 1 year ago
cr.yaml file
# cat cr.yaml
apiVersion: ripsaw.cloudbulldozer.io/v1alpha1
kind: Benchmark
metadata:
  name: fio-benchmark-example
  namespace: benchmark-operator
spec:
  # where elastic search is running
  #elasticsearch:
  #  url: http://my.elasticsearch.server:80
  #  verify_cert: false
  #  parallel: false
  #clustername: myk8scluster
  #test_user: ripsaw
  workload:
    name: "fio_distributed"
    args:
      image: quay.io/multi-arch/cloud-bulldozer:fio
      # if true, do a large sequential write to preallocate the volume before using it
      prefill: true
      # for a compressed volume, uncomment the next line and make the cmp_bs the same as bs
      # prefill_bs: 8KiB
      # number of times to run each test
      samples: 1
      # number of fio pods generating workload
      servers: 1
      # put all fio pods on this server
      pin_server: 'worker1.ocp411.local.pvmpsp.com'
      # test types, see fio documentation
      jobs:
        - write
        - read
      # I/O request sizes (also called block size)
      bs:
        - 4KiB
        - 64KiB
      # how many fio processes per pod
      numjobs:
        - 1
      # with libaio ioengine, number of in-flight requests per process
      iodepth: 4
      # how long to run read tests, in seconds (NOTE: this duration is TOO SHORT for real measurements)
      read_runtime: 15
      # how long to run write tests, in seconds (NOTE: this duration is TOO SHORT for real measurements)
      write_runtime: 15
      # don't start measuring until this many seconds pass, for reads
      read_ramp_time: 5
      # don't start measuring until this many seconds pass, for writes
      write_ramp_time: 5
      # size of file to access
      filesize: 2GiB
      # interval between I/O stat samples, in milliseconds
      log_sample_rate: 3000
      #storageclass: rook-ceph-block
      #storagesize: 5Gi
      # use drop_cache_kernel to have the set of labeled nodes drop the kernel buffer cache before each sample
      #drop_cache_kernel: False
      # use drop_cache_rook_ceph to have the Ceph OSDs drop their cache before each sample
      #drop_cache_rook_ceph: False
      # increase this if you want fio to run for more than 1 hour without being terminated by K8S
      #job_timeout: 3600
      #######################################
      #  EXPERT AREA - MODIFY WITH CAUTION  #
      #######################################
      # global_overrides:
      #   NOTE: Dropping caches as per this example can only be done if the
      #   fio server is running in a privileged pod
      #   - exec_prerun=bash -c 'sync && echo 3 > /proc/sys/vm/drop_caches'
      job_params:
        - jobname_match: write
          params:
            - fsync_on_close=1
            - create_on_open=1
            - runtime={{ workload_args.write_runtime }}
            - ramp_time={{ workload_args.write_ramp_time }}
        - jobname_match: read
          params:
            - time_based=1
            - runtime={{ workload_args.read_runtime }}
            - ramp_time={{ workload_args.read_ramp_time }}
        - jobname_match: rw
          params:
            - rwmixread=50
            - time_based=1
            - runtime={{ workload_args.read_runtime }}
            - ramp_time={{ workload_args.read_ramp_time }}
        - jobname_match: readwrite
          params:
            - rwmixread=50
            - time_based=1
            - runtime={{ workload_args.read_runtime }}
            - ramp_time={{ workload_args.read_ramp_time }}
        - jobname_match: randread
          params:
            - time_based=1
            - runtime={{ workload_args.read_runtime }}
            - ramp_time={{ workload_args.read_ramp_time }}
        - jobname_match: randwrite
          params:
            - time_based=1
            - runtime={{ workload_args.write_runtime }}
            - ramp_time={{ workload_args.write_ramp_time }}
        - jobname_match: randrw
          params:
            - time_based=1
            - runtime={{ workload_args.write_runtime }}
            - ramp_time={{ workload_args.write_ramp_time }}
        # - jobname_match: <search_string>
        #   params:
        #     - key=value
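The `{{ workload_args.* }}` placeholders in the job_params section above are Jinja-style templates that the operator renders (via Ansible) from the `args` values in the same CR. A rough sketch of that substitution, with a hand-rolled renderer standing in for Jinja2 (the actual rendering is done by the operator, not by this code):

```python
# Illustration only: the real substitution is done by Ansible/Jinja2 inside
# benchmark-operator. This stand-in handles the simple
# "{{ workload_args.<key> }}" form used in the job_params above.
workload_args = {"write_runtime": 15, "write_ramp_time": 5,
                 "read_runtime": 15, "read_ramp_time": 5}

def render(line: str, args: dict) -> str:
    """Replace each {{ workload_args.<key> }} placeholder with its value."""
    for key, val in args.items():
        line = line.replace("{{ workload_args.%s }}" % key, str(val))
    return line

for tmpl in ("runtime={{ workload_args.write_runtime }}",
             "ramp_time={{ workload_args.write_ramp_time }}"):
    print(render(tmpl, workload_args))
# prints:
# runtime=15
# ramp_time=5
```

So the `runtime=15` / `ramp_time=5` lines visible later in the fio job file are just these templates rendered with the `write_runtime`/`write_ramp_time` values from the CR.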
# oc get pods
NAME READY STATUS RESTARTS AGE
benchmark-controller-manager-6bfd6bf7f4-t5p8n 2/2 Running 7 (2d5h ago) 14d
fio-client-09d1a971-khvpt 0/1 Completed 0 65m
fio-prefill-09d1a971-hf8j6 0/1 Completed 0 65m
fio-server-1-benchmark-09d1a971 1/1 Running 0 65m
The log output is pretty clear to me:
2023-01-31T14:41:11Z - INFO - MainProcess - run_snafu: Not connected to Elasticsearch
The above means that snafu won't connect to Elasticsearch to index results, since no endpoint was passed in the CR you pasted (the ES config is actually commented out).
Even after I enabled the ES config in the YAML file, I'm still seeing the same error:
apiVersion: ripsaw.cloudbulldozer.io/v1alpha1
kind: Benchmark
metadata:
  name: fio-benchmark-example
  namespace: benchmark-operator
spec:
  # where elastic search is running
  elasticsearch:
    url: http://my.elasticsearch.server:80
    verify_cert: false
    parallel: false
  clustername: myk8scluster
  test_user: ripsaw
  workload:
    name: "fio_distributed"
    args:
      image: quay.io/multi-arch/cloud-bulldozer:fio
      # if true, do a large sequential write to preallocate the volume before using it
      prefill: true
.
.
.
This is the output with ES enabled:
2023-01-31T18:08:02Z - INFO - MainProcess - run_snafu: logging level is INFO
2023-01-31T18:08:02Z - INFO - MainProcess - _load_benchmarks: Successfully imported 1 benchmark modules: uperf
2023-01-31T18:08:02Z - INFO - MainProcess - _load_benchmarks: Failed to import 0 benchmark modules:
2023-01-31T18:08:02Z - INFO - MainProcess - run_snafu: Using elasticsearch server with host: http://my.elasticsearch.server:80
2023-01-31T18:08:02Z - INFO - MainProcess - run_snafu: Using index prefix for ES: ripsaw-fio
2023-01-31T18:08:02Z - INFO - MainProcess - run_snafu: Turning off TLS certificate verification
2023-01-31T18:08:02Z - INFO - MainProcess - run_snafu: Connected to the elasticsearch cluster with info as follows:
2023-01-31T18:08:02Z - WARNING - MainProcess - run_snafu: Elasticsearch connection caused an exception: ConnectionError(<urllib3.connection.HTTPConnection object at 0x7fff990eda90>: Failed to establish a new connection: [Errno -2] Name or service not known) caused by: NewConnectionError(<urllib3.connection.HTTPConnection object at 0x7fff990eda90>: Failed to establish a new connection: [Errno -2] Name or service not known)
2023-01-31T18:08:02Z - INFO - MainProcess - run_snafu: Not connected to Elasticsearch
2023-01-31T18:08:02Z - INFO - MainProcess - wrapper_factory: identified fio as the benchmark wrapper
2023-01-31T18:08:02Z - INFO - MainProcess - trigger_fio: Executing fio --client=/tmp/host/hosts /tmp/fiod-3855e879-0786-5700-abd6-1b76f171d1df/fiojob-read-64KiB-1/1/read/fiojob --output-format=json --output=/tmp/fiod-3855e879-0786-5700-abd6-1b76f171d1df/fiojob-read-64KiB-1/1/read/fio-result.json
2023-01-31T18:08:22Z - INFO - MainProcess - trigger_fio: fio has successfully finished sample 1 executing for jobname read and results are in the dir /tmp/fiod-3855e879-0786-5700-abd6-1b76f171d1df/fiojob-read-64KiB-1/1/read
2023-01-31T18:08:22Z - INFO - MainProcess - run_snafu: Duration of execution - 0:00:20, with total size of 7200 bytes
run finished
@rsevilla87 did I miss anything?
This is not a bug; the Elasticsearch server you're using in the CR is incorrect (it's taken from the examples). The log indicates that snafu couldn't connect to the given Elasticsearch instance, and after that it disabled indexing. If you want to index the benchmark results, you have to set up an Elasticsearch instance.
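The `[Errno -2] Name or service not known` in the log above is a DNS resolution failure, which you can confirm yourself without the operator; a minimal sketch, using the placeholder hostname from the example CR:

```python
import socket

# Placeholder host taken verbatim from the example CR; it is not a real server.
host = "my.elasticsearch.server"

try:
    socket.getaddrinfo(host, 80)
    print(f"{host} resolves; check the port/URL next")
except socket.gaierror as err:
    # This is the same failure snafu reports as "Name or service not known"
    print(f"{host} does not resolve: {err}")
```

If this raises `gaierror`, snafu will hit the exact `ConnectionError` shown in the log and fall back to not indexing.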
@rsevilla87 ok. Can you please provide me the steps to set up an Elasticsearch instance? Thanks in advance
There's plenty of documentation on the internet about setting up an Elasticsearch database. That's out of the scope of this repository.
@rsevilla87 thanks
@rsevilla87 How do I run fio without Elasticsearch?
You can use a configuration similar to the one you posted initially, which doesn't have any elasticsearch configuration.
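For what it's worth, even without Elasticsearch the raw results are still written to the `fio-result.json` path shown in the client log (fio's `--output-format=json`). A minimal sketch of pulling headline numbers out of such a file; the sample data below is made up, but the `"jobs"` list with per-direction `"read"`/`"write"` stats matches fio's JSON output shape (`bw` is in KiB/s):

```python
import json

# Made-up stand-in for fio-result.json contents; real files carry many more
# fields, but the "jobs" -> "read"/"write" stats layout is the same.
sample = json.loads("""
{
  "jobs": [
    {"jobname": "read",
     "read":  {"iops": 1234.5, "bw": 4938},
     "write": {"iops": 0, "bw": 0}}
  ]
}
""")

for job in sample["jobs"]:
    for direction in ("read", "write"):
        stats = job.get(direction, {})
        if stats.get("iops"):  # skip directions the job didn't exercise
            print(f"{job['jobname']}: {direction} "
                  f"iops={stats['iops']} bw={stats['bw']} KiB/s")
# prints: read: read iops=1234.5 bw=4938 KiB/s
```

To get at the real file you'd copy it out of the client pod (e.g. with `oc cp` or `oc exec ... cat`) before the pod is cleaned up.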
I disabled Elasticsearch in the cr.yaml file and I'm still seeing the Not connected to Elasticsearch message. Do I need to disable ES somewhere other than the cr.yaml file?
# cat cr.yaml
apiVersion: ripsaw.cloudbulldozer.io/v1alpha1
kind: Benchmark
metadata:
  name: fio-benchmark-example
  namespace: benchmark-operator
spec:
  # where elastic search is running
  #elasticsearch:
  #  url: http://my.elasticsearch.server:80
  #  verify_cert: false
  #  parallel: false
  #clustername: myk8scluster
  #test_user: ripsaw
  workload:
    name: "fio_distributed"
    args:
      image: quay.io/multi-arch/cloud-bulldozer:fio
      # if true, do a large sequential write to preallocate the volume before using it
      prefill: true
      # for a compressed volume, uncomment the next line and make the cmp_bs the same as bs
      # prefill_bs: 8KiB
      # number of times to run each test
      samples: 1
      # number of fio pods generating workload
      servers: 1
############################## FIO client output ##############################
# oc logs fio-client-4b76edd3-9b29g
[read]
rw=read
time_based=1
runtime=15
ramp_time=5
2023-02-04T10:43:25Z - INFO - MainProcess - run_snafu: logging level is INFO
2023-02-04T10:43:25Z - INFO - MainProcess - _load_benchmarks: Successfully imported 1 benchmark modules: uperf
2023-02-04T10:43:25Z - INFO - MainProcess - _load_benchmarks: Failed to import 0 benchmark modules:
2023-02-04T10:43:25Z - INFO - MainProcess - run_snafu: Not connected to Elasticsearch
2023-02-04T10:43:25Z - INFO - MainProcess - wrapper_factory: identified fio as the benchmark wrapper
2023-02-04T10:43:25Z - INFO - MainProcess - trigger_fio: Executing fio --client=/tmp/host/hosts /tmp/fiod-4b76edd3-dc1d-59f1-81cf-6bd43e725868/fiojob-read-64KiB-1/1/read/fiojob --output-format=json --output=/tmp/fiod-4b76edd3-dc1d-59f1-81cf-6bd43e725868/fiojob-read-64KiB-1/1/read/fio-result.json
@rsevilla87 do I need to disable Elasticsearch anywhere other than the cr.yaml file?
I'm not seeing any output after fio runs, just run_snafu: Not connected to Elasticsearch.
Did I miss anything in the cr.yaml file?