Describe the bug
When using an aggregated result with the compare command in OSB, the comparison output is much shorter than when comparing two normal test executions. This is because the per-task comparisons between the two tests are not shown when one of the results is aggregated.
To reproduce
opensearch-benchmark compare --baseline={aggregated test result ID} --contender={other test ID}
------------------------------------------------------
_______ __ _____
/ ____(_)___ ____ _/ / / ___/_________ ________
/ /_ / / __ \/ __ `/ / \__ \/ ___/ __ \/ ___/ _ \
/ __/ / / / / / /_/ / / ___/ / /__/ /_/ / / / __/
/_/ /_/_/ /_/\__,_/_/ /____/\___/\____/_/ \___/
------------------------------------------------------
| Metric | Task | Baseline | Contender | Diff | Unit |
|--------------------------------------------------------------:|-------:|------------:|------------:|--------:|-------:|
| Cumulative indexing time of primary shards | | 0.001925 | 0.00253333 | 0.00061 | min |
| Min cumulative indexing time across primary shard | | 0 | 0 | 0 | min |
| Median cumulative indexing time across primary shard | | 0.000175 | 0.000216667 | 4e-05 | min |
| Max cumulative indexing time across primary shard | | 0.00109167 | 0.00156667 | 0.00047 | min |
| Cumulative indexing throttle time of primary shards | | 0 | 0 | 0 | min |
| Min cumulative indexing throttle time across primary shard | | 0 | 0 | 0 | min |
| Median cumulative indexing throttle time across primary shard | | 0 | 0 | 0 | min |
| Max cumulative indexing throttle time across primary shard | | 0 | 0 | 0 | min |
| Cumulative merge time of primary shards | | 0 | 0 | 0 | min |
| Cumulative merge count of primary shards | | 0 | 0 | 0 | |
| Min cumulative merge time across primary shard | | 0 | 0 | 0 | min |
| Median cumulative merge time across primary shard | | 0 | 0 | 0 | min |
| Max cumulative merge time across primary shard | | 0 | 0 | 0 | min |
| Cumulative merge throttle time of primary shards | | 0 | 0 | 0 | min |
| Min cumulative merge throttle time across primary shard | | 0 | 0 | 0 | min |
| Median cumulative merge throttle time across primary shard | | 0 | 0 | 0 | min |
| Max cumulative merge throttle time across primary shard | | 0 | 0 | 0 | min |
| Cumulative refresh time of primary shards | | 0.00175833 | 0.002 | 0.00024 | min |
| Cumulative refresh count of primary shards | | 165.5 | 167 | 1.5 | |
| Min cumulative refresh time across primary shard | | 0 | 0 | 0 | min |
| Median cumulative refresh time across primary shard | | 0.000225 | 0.000283333 | 6e-05 | min |
| Max cumulative refresh time across primary shard | | 0.000633333 | 0.000766667 | 0.00013 | min |
| Cumulative flush time of primary shards | | 0.000316667 | 0.000316667 | 0 | min |
| Cumulative flush count of primary shards | | 2 | 2 | 0 | |
| Min cumulative flush time across primary shard | | 0 | 0 | 0 | min |
| Median cumulative flush time across primary shard | | 0 | 0 | 0 | min |
| Max cumulative flush time across primary shard | | 0.000316667 | 0.000316667 | 0 | min |
| Total Young Gen GC time | | 0.001 | 0 | -0.001 | s |
| Total Young Gen GC count | | 0.5 | 0 | -0.5 | |
| Total Old Gen GC time | | 0 | 0 | 0 | s |
| Total Old Gen GC count | | 0 | 0 | 0 | |
| Store size | | 0.000407811 | 0.000447421 | 4e-05 | GB |
| Translog size | | 3.58559e-07 | 3.58559e-07 | 0 | GB |
| Heap used for segments | | 0 | 0 | 0 | MB |
| Heap used for doc values | | 0 | 0 | 0 | MB |
| Heap used for terms | | 0 | 0 | 0 | MB |
| Heap used for norms | | 0 | 0 | 0 | MB |
| Heap used for points | | 0 | 0 | 0 | MB |
| Heap used for stored fields | | 0 | 0 | 0 | MB |
| Segment count | | 14 | 18 | 4 | |
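(For context, a sketch of how an aggregated baseline like the one above could be produced and then compared. This is an assumption about the workflow, not part of the report; the aggregate subcommand, its --test-executions flag, and all IDs below are placeholders based on my understanding of OSB.)

```shell
# Hypothetical sketch: aggregate two test executions into one result,
# then compare that aggregated result against a normal test execution.
# All IDs are placeholders.
opensearch-benchmark aggregate --test-executions="<test-id-1>,<test-id-2>"
opensearch-benchmark compare --baseline="<aggregated-result-id>" --contender="<test-id-3>"
```

With this sequence, the compare output is the truncated table shown above rather than the full per-task comparison.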
Expected behavior
When comparing two normal test executions:
opensearch-benchmark compare --baseline=5723f3d5-eba7-40ad-a269-d8c57a562358 --contender=5723f3d5-eba7-40ad-a269-d8c57a562358
we get a comparison of the individual tasks' results as well:
------------------------------------------------------
_______ __ _____
/ ____(_)___ ____ _/ / / ___/_________ ________
/ /_ / / __ \/ __ `/ / \__ \/ ___/ __ \/ ___/ _ \
/ __/ / / / / / /_/ / / ___/ / /__/ /_/ / / / __/
/_/ /_/_/ /_/\__,_/_/ /____/\___/\____/_/ \___/
------------------------------------------------------
| Metric | Task | Baseline | Contender | Diff | Unit |
|--------------------------------------------------------------:|-------------------------------:|------------:|------------:|-------:|--------:|
| Cumulative indexing time of primary shards | | 0.00253333 | 0.00253333 | 0 | min |
| Min cumulative indexing time across primary shard | | 0 | 0 | 0 | min |
| Median cumulative indexing time across primary shard | | 0.000216667 | 0.000216667 | 0 | min |
| Max cumulative indexing time across primary shard | | 0.00156667 | 0.00156667 | 0 | min |
| Cumulative indexing throttle time of primary shards | | 0 | 0 | 0 | min |
| Min cumulative indexing throttle time across primary shard | | 0 | 0 | 0 | min |
| Median cumulative indexing throttle time across primary shard | | 0 | 0 | 0 | min |
| Max cumulative indexing throttle time across primary shard | | 0 | 0 | 0 | min |
| Cumulative merge time of primary shards | | 0 | 0 | 0 | min |
| Cumulative merge count of primary shards | | 0 | 0 | 0 | |
| Min cumulative merge time across primary shard | | 0 | 0 | 0 | min |
| Median cumulative merge time across primary shard | | 0 | 0 | 0 | min |
| Max cumulative merge time across primary shard | | 0 | 0 | 0 | min |
| Cumulative merge throttle time of primary shards | | 0 | 0 | 0 | min |
| Min cumulative merge throttle time across primary shard | | 0 | 0 | 0 | min |
| Median cumulative merge throttle time across primary shard | | 0 | 0 | 0 | min |
| Max cumulative merge throttle time across primary shard | | 0 | 0 | 0 | min |
| Cumulative refresh time of primary shards | | 0.002 | 0.002 | 0 | min |
| Cumulative refresh count of primary shards | | 167 | 167 | 0 | |
| Min cumulative refresh time across primary shard | | 0 | 0 | 0 | min |
| Median cumulative refresh time across primary shard | | 0.000283333 | 0.000283333 | 0 | min |
| Max cumulative refresh time across primary shard | | 0.000766667 | 0.000766667 | 0 | min |
| Cumulative flush time of primary shards | | 0.000316667 | 0.000316667 | 0 | min |
| Cumulative flush count of primary shards | | 2 | 2 | 0 | |
| Min cumulative flush time across primary shard | | 0 | 0 | 0 | min |
| Median cumulative flush time across primary shard | | 0 | 0 | 0 | min |
| Max cumulative flush time across primary shard | | 0.000316667 | 0.000316667 | 0 | min |
| Total Young Gen GC time | | 0 | 0 | 0 | s |
| Total Young Gen GC count | | 0 | 0 | 0 | |
| Total Old Gen GC time | | 0 | 0 | 0 | s |
| Total Old Gen GC count | | 0 | 0 | 0 | |
| Store size | | 0.000447421 | 0.000447421 | 0 | GB |
| Translog size | | 3.58559e-07 | 3.58559e-07 | 0 | GB |
| Heap used for segments | | 0 | 0 | 0 | MB |
| Heap used for doc values | | 0 | 0 | 0 | MB |
| Heap used for terms | | 0 | 0 | 0 | MB |
| Heap used for norms | | 0 | 0 | 0 | MB |
| Heap used for points | | 0 | 0 | 0 | MB |
| Heap used for stored fields | | 0 | 0 | 0 | MB |
| Segment count | | 18 | 18 | 0 | |
| Min Throughput | index-append | 23802.9 | 23802.9 | 0 | docs/s |
| Mean Throughput | index-append | 23802.9 | 23802.9 | 0 | docs/s |
| Median Throughput | index-append | 23802.9 | 23802.9 | 0 | docs/s |
| Max Throughput | index-append | 23802.9 | 23802.9 | 0 | docs/s |
| 50th percentile latency | index-append | 26.0536 | 26.0536 | 0 | ms |
| 100th percentile latency | index-append | 46.665 | 46.665 | 0 | ms |
| 50th percentile service time | index-append | 26.0536 | 26.0536 | 0 | ms |
| 100th percentile service time | index-append | 46.665 | 46.665 | 0 | ms |
| error rate | index-append | 0 | 0 | 0 | % |
| Min Throughput | wait-until-merges-finish | 153.379 | 153.379 | 0 | ops/s |
| Mean Throughput | wait-until-merges-finish | 153.379 | 153.379 | 0 | ops/s |
| Median Throughput | wait-until-merges-finish | 153.379 | 153.379 | 0 | ops/s |
| Max Throughput | wait-until-merges-finish | 153.379 | 153.379 | 0 | ops/s |
| 100th percentile latency | wait-until-merges-finish | 6.07785 | 6.07785 | 0 | ms |
| 100th percentile service time | wait-until-merges-finish | 6.07785 | 6.07785 | 0 | ms |
| error rate | wait-until-merges-finish | 0 | 0 | 0 | % |
| Min Throughput | index-stats | 231.634 | 231.634 | 0 | ops/s |
| Mean Throughput | index-stats | 231.634 | 231.634 | 0 | ops/s |
| Median Throughput | index-stats | 231.634 | 231.634 | 0 | ops/s |
| Max Throughput | index-stats | 231.634 | 231.634 | 0 | ops/s |
| 100th percentile latency | index-stats | 6.72001 | 6.72001 | 0 | ms |
| 100th percentile service time | index-stats | 1.85629 | 1.85629 | 0 | ms |
| error rate | index-stats | 0 | 0 | 0 | % |
| Min Throughput | node-stats | 150.006 | 150.006 | 0 | ops/s |
| Mean Throughput | node-stats | 150.006 | 150.006 | 0 | ops/s |
| Median Throughput | node-stats | 150.006 | 150.006 | 0 | ops/s |
| Max Throughput | node-stats | 150.006 | 150.006 | 0 | ops/s |
| 100th percentile latency | node-stats | 8.94777 | 8.94777 | 0 | ms |
| 100th percentile service time | node-stats | 1.91986 | 1.91986 | 0 | ms |
| error rate | node-stats | 0 | 0 | 0 | % |
| Min Throughput | default | 215.512 | 215.512 | 0 | ops/s |
| Mean Throughput | default | 215.512 | 215.512 | 0 | ops/s |
| Median Throughput | default | 215.512 | 215.512 | 0 | ops/s |
| Max Throughput | default | 215.512 | 215.512 | 0 | ops/s |
| 100th percentile latency | default | 6.58587 | 6.58587 | 0 | ms |
| 100th percentile service time | default | 1.7306 | 1.7306 | 0 | ms |
| error rate | default | 0 | 0 | 0 | % |
| Min Throughput | term | 243.367 | 243.367 | 0 | ops/s |
| Mean Throughput | term | 243.367 | 243.367 | 0 | ops/s |
| Median Throughput | term | 243.367 | 243.367 | 0 | ops/s |
| Max Throughput | term | 243.367 | 243.367 | 0 | ops/s |
| 100th percentile latency | term | 6.07598 | 6.07598 | 0 | ms |
| 100th percentile service time | term | 1.67992 | 1.67992 | 0 | ms |
| error rate | term | 0 | 0 | 0 | % |
| Min Throughput | phrase | 247.978 | 247.978 | 0 | ops/s |
| Mean Throughput | phrase | 247.978 | 247.978 | 0 | ops/s |
| Median Throughput | phrase | 247.978 | 247.978 | 0 | ops/s |
| Max Throughput | phrase | 247.978 | 247.978 | 0 | ops/s |
| 100th percentile latency | phrase | 6.35388 | 6.35388 | 0 | ms |
| 100th percentile service time | phrase | 2.02308 | 2.02308 | 0 | ms |
| error rate | phrase | 0 | 0 | 0 | % |
| Min Throughput | country_agg_uncached | 160.253 | 160.253 | 0 | ops/s |
| Mean Throughput | country_agg_uncached | 160.253 | 160.253 | 0 | ops/s |
| Median Throughput | country_agg_uncached | 160.253 | 160.253 | 0 | ops/s |
| Max Throughput | country_agg_uncached | 160.253 | 160.253 | 0 | ops/s |
| 100th percentile latency | country_agg_uncached | 8.28035 | 8.28035 | 0 | ms |
| 100th percentile service time | country_agg_uncached | 1.79187 | 1.79187 | 0 | ms |
| error rate | country_agg_uncached | 0 | 0 | 0 | % |
| Min Throughput | country_agg_cached | 202.111 | 202.111 | 0 | ops/s |
| Mean Throughput | country_agg_cached | 202.111 | 202.111 | 0 | ops/s |
| Median Throughput | country_agg_cached | 202.111 | 202.111 | 0 | ops/s |
| Max Throughput | country_agg_cached | 202.111 | 202.111 | 0 | ops/s |
| 100th percentile latency | country_agg_cached | 7.42567 | 7.42567 | 0 | ms |
| 100th percentile service time | country_agg_cached | 2.22462 | 2.22462 | 0 | ms |
| error rate | country_agg_cached | 0 | 0 | 0 | % |
| Min Throughput | scroll | 137.707 | 137.707 | 0 | pages/s |
| Mean Throughput | scroll | 137.707 | 137.707 | 0 | pages/s |
| Median Throughput | scroll | 137.707 | 137.707 | 0 | pages/s |
| Max Throughput | scroll | 137.707 | 137.707 | 0 | pages/s |
| 100th percentile latency | scroll | 48.4007 | 48.4007 | 0 | ms |
| 100th percentile service time | scroll | 32.9603 | 32.9603 | 0 | ms |
| error rate | scroll | 0 | 0 | 0 | % |
| Min Throughput | expression | 194.618 | 194.618 | 0 | ops/s |
| Mean Throughput | expression | 194.618 | 194.618 | 0 | ops/s |
| Median Throughput | expression | 194.618 | 194.618 | 0 | ops/s |
| Max Throughput | expression | 194.618 | 194.618 | 0 | ops/s |
| 100th percentile latency | expression | 7.38153 | 7.38153 | 0 | ms |
| 100th percentile service time | expression | 2.03181 | 2.03181 | 0 | ms |
| error rate | expression | 0 | 0 | 0 | % |
| Min Throughput | painless_static | 237.281 | 237.281 | 0 | ops/s |
| Mean Throughput | painless_static | 237.281 | 237.281 | 0 | ops/s |
| Median Throughput | painless_static | 237.281 | 237.281 | 0 | ops/s |
| Max Throughput | painless_static | 237.281 | 237.281 | 0 | ops/s |
| 100th percentile latency | painless_static | 6.77052 | 6.77052 | 0 | ms |
| 100th percentile service time | painless_static | 2.31758 | 2.31758 | 0 | ms |
| error rate | painless_static | 0 | 0 | 0 | % |
| Min Throughput | painless_dynamic | 190.248 | 190.248 | 0 | ops/s |
| Mean Throughput | painless_dynamic | 190.248 | 190.248 | 0 | ops/s |
| Median Throughput | painless_dynamic | 190.248 | 190.248 | 0 | ops/s |
| Max Throughput | painless_dynamic | 190.248 | 190.248 | 0 | ops/s |
| 100th percentile latency | painless_dynamic | 7.3897 | 7.3897 | 0 | ms |
| 100th percentile service time | painless_dynamic | 1.91802 | 1.91802 | 0 | ms |
| error rate | painless_dynamic | 0 | 0 | 0 | % |
| Min Throughput | decay_geo_gauss_function_score | 219.385 | 219.385 | 0 | ops/s |
| Mean Throughput | decay_geo_gauss_function_score | 219.385 | 219.385 | 0 | ops/s |
| Median Throughput | decay_geo_gauss_function_score | 219.385 | 219.385 | 0 | ops/s |
| Max Throughput | decay_geo_gauss_function_score | 219.385 | 219.385 | 0 | ops/s |
| 100th percentile latency | decay_geo_gauss_function_score | 7.02881 | 7.02881 | 0 | ms |
| 100th percentile service time | decay_geo_gauss_function_score | 2.20239 | 2.20239 | 0 | ms |
| error rate | decay_geo_gauss_function_score | 0 | 0 | 0 | % |
| Min Throughput | decay_geo_gauss_script_score | 233.075 | 233.075 | 0 | ops/s |
| Mean Throughput | decay_geo_gauss_script_score | 233.075 | 233.075 | 0 | ops/s |
| Median Throughput | decay_geo_gauss_script_score | 233.075 | 233.075 | 0 | ops/s |
| Max Throughput | decay_geo_gauss_script_score | 233.075 | 233.075 | 0 | ops/s |
| 100th percentile latency | decay_geo_gauss_script_score | 6.22341 | 6.22341 | 0 | ms |
| 100th percentile service time | decay_geo_gauss_script_score | 1.7143 | 1.7143 | 0 | ms |
| error rate | decay_geo_gauss_script_score | 0 | 0 | 0 | % |
| Min Throughput | field_value_function_score | 217.209 | 217.209 | 0 | ops/s |
| Mean Throughput | field_value_function_score | 217.209 | 217.209 | 0 | ops/s |
| Median Throughput | field_value_function_score | 217.209 | 217.209 | 0 | ops/s |
| Max Throughput | field_value_function_score | 217.209 | 217.209 | 0 | ops/s |
| 100th percentile latency | field_value_function_score | 7.16415 | 7.16415 | 0 | ms |
| 100th percentile service time | field_value_function_score | 2.30484 | 2.30484 | 0 | ms |
| error rate | field_value_function_score | 0 | 0 | 0 | % |
| Min Throughput | field_value_script_score | 203.968 | 203.968 | 0 | ops/s |
| Mean Throughput | field_value_script_score | 203.968 | 203.968 | 0 | ops/s |
| Median Throughput | field_value_script_score | 203.968 | 203.968 | 0 | ops/s |
| Max Throughput | field_value_script_score | 203.968 | 203.968 | 0 | ops/s |
| 100th percentile latency | field_value_script_score | 7.707 | 7.707 | 0 | ms |
| 100th percentile service time | field_value_script_score | 2.47611 | 2.47611 | 0 | ms |
| error rate | field_value_script_score | 0 | 0 | 0 | % |
| Min Throughput | large_terms | 5.83558 | 5.83558 | 0 | ops/s |
| Mean Throughput | large_terms | 5.83558 | 5.83558 | 0 | ops/s |
| Median Throughput | large_terms | 5.83558 | 5.83558 | 0 | ops/s |
| Max Throughput | large_terms | 5.83558 | 5.83558 | 0 | ops/s |
| 100th percentile latency | large_terms | 341.087 | 341.087 | 0 | ms |
| 100th percentile service time | large_terms | 163.416 | 163.416 | 0 | ms |
| error rate | large_terms | 0 | 0 | 0 | % |
| Min Throughput | large_filtered_terms | 9.94934 | 9.94934 | 0 | ops/s |
| Mean Throughput | large_filtered_terms | 9.94934 | 9.94934 | 0 | ops/s |
| Median Throughput | large_filtered_terms | 9.94934 | 9.94934 | 0 | ops/s |
| Max Throughput | large_filtered_terms | 9.94934 | 9.94934 | 0 | ops/s |
| 100th percentile latency | large_filtered_terms | 199.226 | 199.226 | 0 | ms |
| 100th percentile service time | large_filtered_terms | 92.0933 | 92.0933 | 0 | ms |
| error rate | large_filtered_terms | 0 | 0 | 0 | % |
| Min Throughput | large_prohibited_terms | 6.70228 | 6.70228 | 0 | ops/s |
| Mean Throughput | large_prohibited_terms | 6.70228 | 6.70228 | 0 | ops/s |
| Median Throughput | large_prohibited_terms | 6.70228 | 6.70228 | 0 | ops/s |
| Max Throughput | large_prohibited_terms | 6.70228 | 6.70228 | 0 | ops/s |
| 100th percentile latency | large_prohibited_terms | 297.81 | 297.81 | 0 | ms |
| 100th percentile service time | large_prohibited_terms | 142.001 | 142.001 | 0 | ms |
| error rate | large_prohibited_terms | 0 | 0 | 0 | % |
| Min Throughput | desc_sort_population | 157.479 | 157.479 | 0 | ops/s |
| Mean Throughput | desc_sort_population | 157.479 | 157.479 | 0 | ops/s |
| Median Throughput | desc_sort_population | 157.479 | 157.479 | 0 | ops/s |
| Max Throughput | desc_sort_population | 157.479 | 157.479 | 0 | ops/s |
| 100th percentile latency | desc_sort_population | 8.16706 | 8.16706 | 0 | ms |
| 100th percentile service time | desc_sort_population | 1.59852 | 1.59852 | 0 | ms |
| error rate | desc_sort_population | 0 | 0 | 0 | % |
| Min Throughput | asc_sort_population | 219.287 | 219.287 | 0 | ops/s |
| Mean Throughput | asc_sort_population | 219.287 | 219.287 | 0 | ops/s |
| Median Throughput | asc_sort_population | 219.287 | 219.287 | 0 | ops/s |
| Max Throughput | asc_sort_population | 219.287 | 219.287 | 0 | ops/s |
| 100th percentile latency | asc_sort_population | 6.64476 | 6.64476 | 0 | ms |
| 100th percentile service time | asc_sort_population | 1.77614 | 1.77614 | 0 | ms |
| error rate | asc_sort_population | 0 | 0 | 0 | % |
| Min Throughput | asc_sort_with_after_population | 258.804 | 258.804 | 0 | ops/s |
| Mean Throughput | asc_sort_with_after_population | 258.804 | 258.804 | 0 | ops/s |
| Median Throughput | asc_sort_with_after_population | 258.804 | 258.804 | 0 | ops/s |
| Max Throughput | asc_sort_with_after_population | 258.804 | 258.804 | 0 | ops/s |
| 100th percentile latency | asc_sort_with_after_population | 5.73799 | 5.73799 | 0 | ms |
| 100th percentile service time | asc_sort_with_after_population | 1.62802 | 1.62802 | 0 | ms |
| error rate | asc_sort_with_after_population | 0 | 0 | 0 | % |
| Min Throughput | desc_sort_geonameid | 190.486 | 190.486 | 0 | ops/s |
| Mean Throughput | desc_sort_geonameid | 190.486 | 190.486 | 0 | ops/s |
| Median Throughput | desc_sort_geonameid | 190.486 | 190.486 | 0 | ops/s |
| Max Throughput | desc_sort_geonameid | 190.486 | 190.486 | 0 | ops/s |
| 100th percentile latency | desc_sort_geonameid | 7.3633 | 7.3633 | 0 | ms |
| 100th percentile service time | desc_sort_geonameid | 1.81898 | 1.81898 | 0 | ms |
| error rate | desc_sort_geonameid | 0 | 0 | 0 | % |
| Min Throughput | desc_sort_with_after_geonameid | 151.868 | 151.868 | 0 | ops/s |
| Mean Throughput | desc_sort_with_after_geonameid | 151.868 | 151.868 | 0 | ops/s |
| Median Throughput | desc_sort_with_after_geonameid | 151.868 | 151.868 | 0 | ops/s |
| Max Throughput | desc_sort_with_after_geonameid | 151.868 | 151.868 | 0 | ops/s |
| 100th percentile latency | desc_sort_with_after_geonameid | 8.86865 | 8.86865 | 0 | ms |
| 100th percentile service time | desc_sort_with_after_geonameid | 1.97829 | 1.97829 | 0 | ms |
| error rate | desc_sort_with_after_geonameid | 0 | 0 | 0 | % |
| Min Throughput | asc_sort_geonameid | 229.945 | 229.945 | 0 | ops/s |
| Mean Throughput | asc_sort_geonameid | 229.945 | 229.945 | 0 | ops/s |
| Median Throughput | asc_sort_geonameid | 229.945 | 229.945 | 0 | ops/s |
| Max Throughput | asc_sort_geonameid | 229.945 | 229.945 | 0 | ops/s |
| 100th percentile latency | asc_sort_geonameid | 6.62479 | 6.62479 | 0 | ms |
| 100th percentile service time | asc_sort_geonameid | 2.03989 | 2.03989 | 0 | ms |
| error rate | asc_sort_geonameid | 0 | 0 | 0 | % |
| Min Throughput | asc_sort_with_after_geonameid | 196.326 | 196.326 | 0 | ops/s |
| Mean Throughput | asc_sort_with_after_geonameid | 196.326 | 196.326 | 0 | ops/s |
| Median Throughput | asc_sort_with_after_geonameid | 196.326 | 196.326 | 0 | ops/s |
| Max Throughput | asc_sort_with_after_geonameid | 196.326 | 196.326 | 0 | ops/s |
| 100th percentile latency | asc_sort_with_after_geonameid | 6.67651 | 6.67651 | 0 | ms |
| 100th percentile service time | asc_sort_with_after_geonameid | 1.36154 | 1.36154 | 0 | ms |
| error rate | asc_sort_with_after_geonameid | 0 | 0 | 0 | % |