scylladb / scylla-bench


Fix possible memory leak due to worker goroutines going out of sync when reporting partial results #117

Closed: piodul closed this PR 1 year ago

piodul commented 1 year ago

scylla-bench spawns a bunch of worker goroutines that are responsible for executing CQL operations and a single collector goroutine that collects partial results from the workers. Each worker goroutine has its own separate channel that it uses to send partial results to the collector. The collector goroutine, in a loop, fetches a single partial result from each channel, merges them and prints.
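
A minimal Go sketch of that structure (the names `PartialResult` and `runCollector` and the channel layout are illustrative here, not the actual scylla-bench identifiers):

```go
package example

import "fmt"

// PartialResult is a hypothetical stand-in for the per-interval metrics
// (operation count, latency histogram, ...) that a worker reports.
type PartialResult struct {
	Ops int
}

// runCollector mirrors the collector described above: in each round it takes
// exactly one partial result from every worker's channel, merges them and
// prints the merged value. If one worker lags behind, the receive from its
// channel blocks the whole round while the other channels keep filling up.
func runCollector(workerChannels []chan PartialResult) {
	for {
		merged := PartialResult{}
		for _, ch := range workerChannels {
			pr, ok := <-ch
			if !ok {
				return // a worker closed its channel; stop collecting
			}
			merged.Ops += pr.Ops
		}
		fmt.Println("ops in this interval:", merged.Ops)
	}
}
```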

There are some problems with this setup:

- The worker goroutines can drift out of sync, so the channels of the faster workers accumulate unread partial results and memory use grows over time.
- The collector ends up merging current results from the slow workers with stale results from the fast workers, so the reported numbers become inaccurate.

More specifically, the algorithm that the worker uses looks like this:

```
last_send_timestamp := now
loop:
    do_an_operation()
    if now - last_send_timestamp >= log_interval:  # log_interval == 1s
        send_partial_results()
        last_send_timestamp := now
```

Notice that by the time the condition for sending partial results is met, more than one second might have elapsed since the last send. Because the loop just resets last_send_timestamp to the current timestamp, the worker does not try to catch up on the accumulated lag in any way.
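
Rendered as Go, the pre-fix loop looks roughly like this (building on the `PartialResult` type from the sketch above and assuming `import "time"`; `doAnOperation` is a hypothetical placeholder for a single CQL operation):

```go
const logInterval = time.Second // log_interval == 1s

// runWorker sketches the reporting loop before this PR: after sending, the
// timestamp is reset to time.Now(), so any time spent past the interval
// boundary is silently dropped and the worker never catches up.
func runWorker(results chan<- PartialResult) {
	lastSend := time.Now()
	partial := PartialResult{}
	for {
		doAnOperation() // hypothetical: executes one CQL operation
		partial.Ops++
		if time.Since(lastSend) >= logInterval {
			results <- partial
			partial = PartialResult{}
			lastSend = time.Now() // resets to "now", discarding the lag
		}
	}
}
```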

Because the lag depends on operation latency - which is random from our point of view - the difference between the worker with the lowest lag and the worker with the highest lag will keep increasing. The collector takes a single value from each channel in each round, so it is bottlenecked on the slowest worker. If the difference in lag between the slowest worker and some other worker is large enough, that other worker's channel will never be empty and will always hold a number of partial results proportional to the worker's lag.

Because of the above, memory consumption may grow over time. In addition, the results may become inaccurate, because the collector will merge the slow workers' current results with the fast workers' outdated results from several seconds ago.

This PR fixes the problem in a very simple way: instead of resetting last_send_timestamp to the current timestamp, it is advanced by the log interval. This ensures that the worker goroutines try to stay synchronized with each other.
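
On the sketch above, the change amounts to a single line (again illustrative, not the literal diff in this PR):

```go
// runWorkerFixed is the same loop with the fix applied: lastSend is advanced
// by the log interval instead of being reset to time.Now(). If the worker has
// fallen behind by more than one interval, the condition is immediately true
// again on the next iteration, so the worker sends more often until it has
// caught up with the others.
func runWorkerFixed(results chan<- PartialResult) {
	lastSend := time.Now()
	partial := PartialResult{}
	for {
		doAnOperation()
		partial.Ops++
		if time.Since(lastSend) >= logInterval {
			results <- partial
			partial = PartialResult{}
			lastSend = lastSend.Add(logInterval) // stay on the 1s reporting grid
		}
	}
}
```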

Might be related to: https://github.com/scylladb/scylla-bench/issues/112. This is the likely cause, although I don't have definitive proof. In my one-hour test I observed that memory use looks correlated with the number of items sitting in the channels, and was indeed higher before the fix - but not much higher (by about 100MB), and the number of items fluctuated instead of growing steadily. Perhaps a longer test, like in the original issue, would be needed to reproduce this.

fruch commented 1 year ago

@yarongilor

I've created a Docker image with this one: https://github.com/scylladb/scylla-cluster-tests/pull/5616

Can you run it with the case that showed the leak?

yarongilor commented 1 year ago

> @yarongilor
>
> I've created a Docker image with this one: scylladb/scylla-cluster-tests#5616
>
> Can you run it with the case that showed the leak?

Running a test in: https://jenkins.scylladb.com/job/scylla-staging/job/Longevity_yaron/job/longevity-large-partition-4days-test-rq2/22/

It looks like the memory leak was not reproduced after ~20 hours (screenshot attached).