Closed FliesLikeABrick closed 6 years ago
I will be opening a PR shortly with a proposed solution for this issue.
Results (before -> after SSH output rate, per instance):

- HTTP (3650 req/s): 800 KByte/s -> 145 KByte/s
- HTTPS (600 req/s): 1200 KByte/s -> 25 KByte/s
Notice that the requests per second are appreciably higher than before, due to lower CPU contention and less context switching on the single-vCPU instance used (m3.medium). HTTP performance against the same target, 3 ms away from us-east-1, increased from 2500 req/s to 3650 req/s; HTTPS performance increased from 400-500 req/s up to 630 req/s.
This issue was much worse for HTTPS due to apachebench's SSL verbosity. This change represents a 98% reduction in output carried over SSH.
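The reduction figures can be checked directly from the before/after rates above (a quick sanity check, not part of BWMG itself):

```python
# Before/after SSH output rates per instance, in KByte/s (from the results above).
http_before, http_after = 800, 145
https_before, https_after = 1200, 25

http_reduction = 1 - http_after / http_before    # ~0.82, an 82% reduction
https_reduction = 1 - https_after / https_before # ~0.98, the quoted 98% reduction

print(f"HTTP:  {http_reduction:.0%} less data over SSH")
print(f"HTTPS: {https_reduction:.0%} less data over SSH")
```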
PR #195 opened.
Closing - resolved by PR #195 merge.
BWMG operates by opening an SSH session to each instance and collecting verbose output from apachebench (run in verbose mode, "-v 3"), which provides the fine-grained timing and response-status data that BWMG itself aggregates.
This verbose output includes a large number of lines that BWMG does not need for statistics generation, especially when attacking HTTPS URLs (due to the additional SSL/TLS debug sent to stderr and stdout); stderr is entirely discarded within bees.py anyway.
For an instance performing 2500 HTTP req/s, this is approximately 800 KByte/s sent over SSH; for instances performing 400-600 HTTPS req/s, it is approximately 1.2 MByte/s.
At scale this adds up: a test with 100 instances generates 80 MByte/s of raw data to support 250k req/s of HTTP load testing, and reaching the same 250k req/s over HTTPS (roughly 500 instances at ~500 req/s each) would generate about 600 MByte/s, i.e. roughly 5 gigabits/s.
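A back-of-the-envelope check of those aggregate figures, using the per-instance rates and instance counts from the paragraphs above:

```python
# Per-instance SSH output rates observed with verbose apachebench (from above).
HTTP_KBYTES_PER_INSTANCE = 800   # KByte/s at ~2500 HTTP req/s
HTTPS_MBYTES_PER_INSTANCE = 1.2  # MByte/s at ~500 HTTPS req/s

# 100 instances at 2500 HTTP req/s each -> 250k req/s total.
http_total_mbytes = 100 * HTTP_KBYTES_PER_INSTANCE / 1000        # 80 MByte/s

# Reaching 250k req/s over HTTPS needs ~500 instances at ~500 req/s each.
https_instances = 250_000 // 500                                 # 500 instances
https_total_mbytes = https_instances * HTTPS_MBYTES_PER_INSTANCE # 600 MByte/s
https_total_gbits = https_total_mbytes * 8 / 1000                # ~4.8 gigabit/s

print(http_total_mbytes, https_total_mbytes, https_total_gbits)
```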
Unfortunately, this has a few negative side-effects:

- data-transfer cost for all of the verbose output shipped over SSH
- CPU contention and context switching on the attacking instances, lowering the req/s each can sustain (especially single-vCPU types)
- aggregate SSH bandwidth back to the controller that grows linearly with instance count
Some kind of instance-side filtering of stdout and stderr, down to only the lines BWMG needs, would alleviate these scaling limitations, reduce cost, and likely increase the requests/s a given instance can perform once it is no longer doing this housekeeping (especially on single-vCPU instances).
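As a sketch of what such instance-side filtering could look like (the line prefixes below are hypothetical stand-ins; the real whitelist would be derived from whatever bees.py actually parses out of the "ab -v 3" output):

```python
import re

# Hypothetical whitelist of statistic-bearing line prefixes. The actual set
# would come from the fields bees.py extracts for aggregation.
NEEDED = re.compile(
    r"^(Requests per second|Time per request|Complete requests|"
    r"Failed requests|Non-2xx responses)"
)

def filter_ab_output(raw: str) -> str:
    """Drop every line the aggregator does not need, before it crosses SSH."""
    return "\n".join(line for line in raw.splitlines() if NEEDED.match(line))
```

On the instance this could run as the tail of a pipeline (e.g. piping apachebench's stdout through the filter and discarding stderr), so the SSL/TLS debug chatter never leaves the machine at all.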