scylladb / scylla-ccm

Cassandra Cluster Manager, modified for Scylla
Apache License 2.0

scylla_cluster,scylla_node: increase debug timeout for `watch_rest_for_alive` #480

Closed: fruch closed this 1 year ago

fruch commented 1 year ago

Seems like the 360s debug-mode default isn't enough for `watch_rest_for_alive`, even though it was for `watch_log_for_alive`; raising it to 600s (10 min).

Fix: #477
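
For illustration, the kind of change this PR makes can be sketched roughly as below; a minimal sketch only, where the helper name and the non-debug default are assumptions, not the actual scylla-ccm code:

```python
# Hypothetical sketch of the timeout bump described above; the helper name and
# the non-debug default are illustrative assumptions, not the real scylla-ccm code.
def rest_alive_timeout(scylla_mode: str, default: int = 120) -> int:
    """Pick the timeout (in seconds) used when polling the REST API for node liveness."""
    if scylla_mode == 'debug':
        # Debug builds start much more slowly: the previous 360s debug value was not
        # always enough for watch_rest_for_alive, so allow 600s (10 minutes).
        return 600
    return default
```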

bhalevy commented 1 year ago

@fruch can you please test this change with update_cluster_layout_tests.py::TestLargeScaleCluster::test_add_many_nodes_under_load in debug mode?

fruch commented 1 year ago

> @fruch can you please test this change with update_cluster_layout_tests.py::TestLargeScaleCluster::test_add_many_nodes_under_load in debug mode?

I've sent it on: https://jenkins.scylladb.com/view/staging/job/scylla-staging/job/fruch/job/new-dtest-pytest-parallel/327/

We'll need to check it out in a few hours.

fruch commented 1 year ago

@bhalevy

It passed: https://jenkins.scylladb.com/view/staging/job/scylla-staging/job/fruch/job/new-dtest-pytest-parallel/327/artifact/logs-long.debug.000/dtest-gw0.log/*view*/

Though I was a bit surprised it took only 11m (and started only 5 nodes).

bhalevy commented 1 year ago

> @bhalevy
>
> It passed: https://jenkins.scylladb.com/view/staging/job/scylla-staging/job/fruch/job/new-dtest-pytest-parallel/327/artifact/logs-long.debug.000/dtest-gw0.log/*view*/
>
> Though I was a bit surprised it took only 11m (and started only 5 nodes).

It was changed in https://github.com/scylladb/scylla-dtest/pull/3268 to stop adding nodes once the stress workload completes, since the point of running it in debug mode is to verify that everything is stable and doesn't trigger any sanitizer errors.
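
The behaviour described above can be pictured roughly as below; a minimal sketch only, assuming the test drives stress in a background thread and stops growing the cluster once it finishes (the function and parameter names are illustrative, not the actual scylla-dtest code):

```python
import threading

# Illustrative sketch, not the real test: run_stress and add_node stand in for
# whatever the dtest uses to drive the workload and bootstrap a new node.
def add_nodes_under_load(run_stress, add_node, max_nodes=10):
    """Keep adding nodes while the stress workload runs; stop once it completes."""
    stress = threading.Thread(target=run_stress)
    stress.start()
    added = 0
    while stress.is_alive() and added < max_nodes:
        add_node()  # each node must bootstrap under load without sanitizer errors
        added += 1
    stress.join()
    return added
```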