zeebe-io / zeebe-performance-test


Measure impact of network latency with and without batch processing #2

Closed. lenaschoenburg closed this issue 1 year ago

lenaschoenburg commented 1 year ago

Test with Zeebe 8.2.0-SNAPSHOT. For a useful test, enable network message compression. Disable batch processing and compare with the baseline: https://github.com/zeebe-io/zeebe-performance-test/issues/6
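
For reference, these are the helm values used in the commands below to toggle the two settings (values taken from the runs in this issue):

# Network message compression on broker and gateway:
#   --set zeebe.config.zeebe.broker.cluster.messageCompression=SNAPPY
#   --set zeebe.config.zeebe.gateway.cluster.messageCompression=SNAPPY
# Batch processing (maxCommandsInBatch=1 disables it, 100 re-enables it):
#   --set zeebe.config.zeebe.broker.processing.maxCommandsInBatch=1
#   --set zeebe.config.zeebe.broker.processing.maxCommandsInBatch=100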

lenaschoenburg commented 1 year ago

First variant: https://github.com/zeebe-io/zeebe-performance-test/actions/runs/4305023680. Targets 150 PI/s.

gh workflow run measure.yaml \
  -f name=os-bp-disabled \
  -f chaos=network-latency-5  \
  -f helm-arguments="--set camunda-platform.zeebe.image.tag=SNAPSHOT --set camunda-platform.zeebe-gateway.image.tag=SNAPSHOT --set zeebe.config.zeebe.broker.cluster.messageCompression=SNAPPY --set zeebe.config.zeebe.gateway.cluster.messageCompression=SNAPPY --set zeebe.config.zeebe.broker.processing.maxCommandsInBatch=1"
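
To confirm the dispatched run and follow its progress, the GitHub CLI can be used; this is a generic sketch and not part of the original test plan:

# List recent runs of the measure workflow and watch the newest one
gh run list --workflow=measure.yaml --limit 3
gh run watch "$(gh run list --workflow=measure.yaml --limit 1 --json databaseId --jq '.[0].databaseId')"
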
lenaschoenburg commented 1 year ago

Second variant: https://github.com/zeebe-io/zeebe-performance-test/actions/runs/4305491117. Targets 90 PI/s.

gh workflow run measure.yaml \
  -f name=os-bp-disabled \
  -f chaos=network-latency-5  \
  -f helm-arguments="--set camunda-platform.zeebe.image.tag=SNAPSHOT --set camunda-platform.zeebe-gateway.image.tag=SNAPSHOT --set zeebe.config.zeebe.broker.cluster.messageCompression=SNAPPY --set zeebe.config.zeebe.gateway.cluster.messageCompression=SNAPPY --set zeebe.config.zeebe.broker.processing.maxCommandsInBatch=1 --set starter.rate=90"

Compare with: https://github.com/zeebe-io/zeebe-performance-test/actions/runs/4305578042

gh workflow run measure.yaml \
  -f name=os-bp-enabled \
  -f chaos=network-latency-5  \
  -f helm-arguments="--set camunda-platform.zeebe.image.tag=SNAPSHOT --set camunda-platform.zeebe-gateway.image.tag=SNAPSHOT --set zeebe.config.zeebe.broker.cluster.messageCompression=SNAPPY --set zeebe.config.zeebe.gateway.cluster.messageCompression=SNAPPY --set zeebe.config.zeebe.broker.processing.maxCommandsInBatch=100 --set starter.rate=90"
lenaschoenburg commented 1 year ago

Third variant: https://github.com/zeebe-io/zeebe-performance-test/actions/runs/4305734358. Targets 75 PI/s with 35 ms network latency.

gh workflow run measure.yaml \
  -f name=os-bp-disabled-35 \
  -f chaos=network-latency-35  \
  -f helm-arguments="--set camunda-platform.zeebe.image.tag=SNAPSHOT --set camunda-platform.zeebe-gateway.image.tag=SNAPSHOT --set zeebe.config.zeebe.broker.cluster.messageCompression=SNAPPY --set zeebe.config.zeebe.gateway.cluster.messageCompression=SNAPPY --set zeebe.config.zeebe.broker.processing.maxCommandsInBatch=1 --set starter.rate=75"

Compare with: https://github.com/zeebe-io/zeebe-performance-test/actions/runs/4305735064

gh workflow run measure.yaml \
  -f name=os-bp-enabled-35 \
  -f chaos=network-latency-35  \
  -f helm-arguments="--set camunda-platform.zeebe.image.tag=SNAPSHOT --set camunda-platform.zeebe-gateway.image.tag=SNAPSHOT --set zeebe.config.zeebe.broker.cluster.messageCompression=SNAPPY --set zeebe.config.zeebe.gateway.cluster.messageCompression=SNAPPY --set zeebe.config.zeebe.broker.processing.maxCommandsInBatch=100 --set starter.rate=75"
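
Since the two 35 ms runs above differ only in the name input and the maxCommandsInBatch value, re-dispatching the pair can be wrapped in a small helper; this is an illustrative sketch (the function name and arguments are hypothetical), not how the runs above were started:

# Illustrative helper: dispatch one 35 ms-latency variant; only the name suffix
# and the batching limit differ between the two runs of the pair.
run_variant() {
  local suffix="$1" batch="$2"
  gh workflow run measure.yaml \
    -f name="os-bp-${suffix}-35" \
    -f chaos=network-latency-35 \
    -f helm-arguments="--set camunda-platform.zeebe.image.tag=SNAPSHOT --set camunda-platform.zeebe-gateway.image.tag=SNAPSHOT --set zeebe.config.zeebe.broker.cluster.messageCompression=SNAPPY --set zeebe.config.zeebe.gateway.cluster.messageCompression=SNAPPY --set zeebe.config.zeebe.broker.processing.maxCommandsInBatch=${batch} --set starter.rate=75"
}

run_variant disabled 1    # batch processing off
run_variant enabled 100   # batch processing on
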
lenaschoenburg commented 1 year ago

Fourth variant: https://github.com/zeebe-io/zeebe-performance-test/actions/runs/4305888445. Targets 75 PI/s with 35 ms network latency and AIMD backpressure configured with a 1000 ms request timeout.

gh workflow run measure.yaml \
  -f name=os-bp-disabled-35-aimd \
  -f chaos=network-latency-35  \
  -f helm-arguments="--set camunda-platform.zeebe.image.tag=SNAPSHOT --set camunda-platform.zeebe-gateway.image.tag=SNAPSHOT --set zeebe.config.zeebe.broker.cluster.messageCompression=SNAPPY --set zeebe.config.zeebe.gateway.cluster.messageCompression=SNAPPY --set zeebe.config.zeebe.broker.processing.maxCommandsInBatch=1 --set zeebe.config.zeebe.broker.backpressure.aimd.requestTimeout=1000ms --set starter.rate=75"

Compare with: https://github.com/zeebe-io/zeebe-performance-test/actions/runs/4305899924

gh workflow run measure.yaml \
  -f name=os-bp-enabled-35-aimd \
  -f chaos=network-latency-35  \
  -f helm-arguments="--set camunda-platform.zeebe.image.tag=SNAPSHOT --set camunda-platform.zeebe-gateway.image.tag=SNAPSHOT --set zeebe.config.zeebe.broker.cluster.messageCompression=SNAPPY --set zeebe.config.zeebe.gateway.cluster.messageCompression=SNAPPY --set zeebe.config.zeebe.broker.processing.maxCommandsInBatch=100 --set zeebe.config.zeebe.broker.backpressure.aimd.requestTimeout=1000ms --set starter.rate=75"
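
The only setting that changes compared to the third variant is the AIMD backpressure request timeout; for reference, the override used above is (the description of its effect is based on the setting's name and is an assumption, not taken from this issue):

# AIMD backpressure: requests slower than this count against the limit algorithm
#   --set zeebe.config.zeebe.broker.backpressure.aimd.requestTimeout=1000ms
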
lenaschoenburg commented 1 year ago

Fifth variant: https://github.com/zeebe-io/zeebe-performance-test/actions/runs/4314492602. Targets 75 PI/s with 15 ms network latency and AIMD backpressure configured with a 1000 ms request timeout.

gh workflow run measure.yaml \
  -f name=os-bp-disabled-15 \
  -f chaos=network-latency-15  \
  -f helm-arguments="--set camunda-platform.zeebe.image.tag=SNAPSHOT --set camunda-platform.zeebe-gateway.image.tag=SNAPSHOT --set zeebe.config.zeebe.broker.cluster.messageCompression=SNAPPY --set zeebe.config.zeebe.gateway.cluster.messageCompression=SNAPPY --set zeebe.config.zeebe.broker.processing.maxCommandsInBatch=1 --set zeebe.config.zeebe.broker.backpressure.aimd.requestTimeout=1000ms --set starter.rate=75"

Compare with: https://github.com/zeebe-io/zeebe-performance-test/actions/runs/4314495413

gh workflow run measure.yaml \
  -f name=os-bp-enabled-15 \
  -f chaos=network-latency-15  \
  -f helm-arguments="--set camunda-platform.zeebe.image.tag=SNAPSHOT --set camunda-platform.zeebe-gateway.image.tag=SNAPSHOT --set zeebe.config.zeebe.broker.cluster.messageCompression=SNAPPY --set zeebe.config.zeebe.gateway.cluster.messageCompression=SNAPPY --set zeebe.config.zeebe.broker.processing.maxCommandsInBatch=100 --set zeebe.config.zeebe.broker.backpressure.aimd.requestTimeout=1000ms --set starter.rate=75"
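
Once the pair has finished, the same two runs can also be inspected by ID with the GitHub CLI (IDs taken from the links above):

gh run view 4314492602 --repo zeebe-io/zeebe-performance-test
gh run view 4314495413 --repo zeebe-io/zeebe-performance-test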