Closed bhalevy closed 1 year ago
This PR replaces #437
$ git diff c4a5215 bef253b
diff --git a/ccmlib/scylla_node.py b/ccmlib/scylla_node.py
index 9fd540d..bf38ff1 100644
--- a/ccmlib/scylla_node.py
+++ b/ccmlib/scylla_node.py
@@ -590,7 +590,8 @@ class ScyllaNode(Node):
self._smp = int(v)
elif k != '--memory':
args.append(k)
- args.append(v)
+ if v:
+ args.append(v)
args.extend(translated_args)
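The diff above guards against appending an empty value for valueless flags. A standalone sketch of that parsing pattern (the function name and input shape are hypothetical, not ccmlib's actual API):

```python
def filter_ext_opts(opts):
    """Rebuild an args list from (flag, value) pairs:
    drop --memory, capture --smp separately, and skip
    empty values so valueless flags stay valueless."""
    args = []
    smp = None
    for k, v in opts:
        if k == '--smp':
            smp = int(v)
        elif k != '--memory':
            args.append(k)
            if v:  # only append a value if the flag actually has one
                args.append(v)
    return args, smp
```

Without the `if v:` guard, a flag such as `--overprovisioned` would be followed by an empty string in the argument list.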
Taking it for a ride with a bigger bunch of tests: https://jenkins.scylladb.com/view/staging/job/scylla-staging/job/fruch/job/new-dtest-pytest-parallel/281/
All tests ran OK (some did fail, but not related to this as far as I can see). I think we should wait for the scylla-pkg fix, so it would get a proper next run.
yes. thanks!
@bhalevy seems like this one needs a rebase
rebased
We'd like to use the values provided in the SCYLLA_EXT_OPTS environment variable as defaults, but also derive self._mem_mb_per_cpu from them.
Then, if a test passes --smp, without --memory in jvm_args we should calculate --memory from self._mem_mb_per_cpu * _smp.
Otherwise, the default --memory parameter given in SCYLLA_EXT_OPTS could be too small if the test uses more shards than the default.
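The derivation described above can be sketched as follows (a minimal sketch; the function name, defaults, and signature are hypothetical and only illustrate the intent, not ccmlib's actual code):

```python
def derive_memory_mb(ext_opts_smp, ext_opts_memory_mb,
                     jvm_smp=None, jvm_memory_mb=None):
    """Derive the --memory value (in MB) for a node.

    ext_opts_smp / ext_opts_memory_mb: defaults from SCYLLA_EXT_OPTS.
    jvm_smp / jvm_memory_mb: per-test overrides passed via jvm_args.
    """
    # Memory-per-shard ratio implied by the SCYLLA_EXT_OPTS defaults.
    mem_mb_per_cpu = ext_opts_memory_mb // ext_opts_smp
    if jvm_memory_mb is not None:
        return jvm_memory_mb  # an explicit --memory always wins
    smp = jvm_smp if jvm_smp is not None else ext_opts_smp
    # Scale memory with the shard count so a test that asks for more
    # shards doesn't run all of them inside the small default budget.
    return mem_mb_per_cpu * smp

# e.g. SCYLLA_EXT_OPTS gives --smp 2 --memory 1024M; a test asks for --smp 8:
# derive_memory_mb(2, 1024, jvm_smp=8) -> 4096
```

With this scaling, a test passing `--smp 8` would get 4096M rather than being squeezed into the 1024M default.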
See for example https://jenkins.scylladb.com/view/master/job/scylla-master/job/dtest-daily-release/216/artifact/logs-full.release.018/dtest-gw2.log that times out when bootstrapping new nodes with smp=8 and memory=1024M takes increasingly longer due to the immense memory pressure.
https://jenkins.scylladb.com/view/master/job/scylla-master/job/dtest-daily-release/216/artifact/logs-full.release.018/1678691413194_lwt_schema_modification_test.py%3A%3ATestLWTSchemaModification%3A%3Atest_lwt_load/node1.log shows lots and lots of memory pressure indications.
This apparently started happening after scylladb/scylladb@020483aa594c0978bd3696c0d2b316fb77db5b2e, which increased the overall memory reservation for wasm, bringing Scylla to its knees with 1024M total for 8 shards.