gecube opened this issue 2 years ago (status: Open)
yeah, I guess there is some filesystem overhead, and we should raise the default
The issue is still very much live
The patch has not been merged yet, but you may be able to provide feedback: does it solve the issue for you?
The Scylla Operator project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After a period of inactivity, lifecycle/stale is applied
- After a further period of inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After a further period of inactivity once lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

/lifecycle stale
/remove-lifecycle stale
Hi, any update? How can I solve this issue?
@Ret2Me if you want to take this, you should probably first try to reproduce the issue, then fix it by raising the storage capacity requirements in the Helm charts' default values (and setting a correspondingly high fs.aio-max-nr in sysctls; see e.g. https://github.com/scylladb/scylla-operator/pull/1013), and then updating the generated files. Feel free to ask if you run into any obstacles.
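In case it helps whoever picks this up, here is a minimal sketch of what the raised defaults could look like, assuming the key layout of helm/scylla/values.yaml; the rack name, member count, and the concrete fs.aio-max-nr value are illustrative, not taken from the repo:

```yaml
# Sketch of raised defaults for helm/scylla/values.yaml (fragment only).
# Key layout assumed from the chart; verify against the actual file.
racks:
  - name: us-east-1a           # illustrative rack name
    members: 3
    storage:
      capacity: 15Gi           # raised from the 10Gi default discussed in this issue
sysctls:
  - "fs.aio-max-nr=30000000"   # illustrative "correspondingly high" value, per the PR linked above
```

After changing the chart defaults, the generated manifests would also need to be regenerated, as mentioned above.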
The Scylla Operator project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After a period of inactivity, lifecycle/stale is applied
- After a further period of inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After a further period of inactivity once lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

/lifecycle rotten
/remove-lifecycle rotten
Hello!
I ran into an issue: following the instructions at https://operator.docs.scylladb.com/stable/helm.html, I couldn't get a running Scylla cluster. It looks like the default PV size is 10GB, and with that the pod fails to start with an error.
I think we need to make the defaults more reasonable and raise the default capacity to at least 15GiB: https://github.com/scylladb/scylla-operator/blob/6e9424fa2c4206c1e3e6fd74b9398e5a36d91f26/helm/scylla/values.yaml#L58
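Until the defaults change, here is a possible workaround sketch for anyone hitting this, assuming the standard chart layout (the rack name and values are illustrative): pass an override file that raises the capacity.

```yaml
# my-values.yaml: hypothetical override file, used e.g. as
#   helm install scylla scylla/scylla -f my-values.yaml
# Note: helm replaces lists wholesale, so repeat the full rack entry you use.
racks:
  - name: us-east-1a   # illustrative; match the rack in your values
    members: 3
    storage:
      capacity: 15Gi   # at least 15GiB instead of the 10GB default
```

Overriding a single value inline with helm's list-index syntax, e.g. `--set 'racks[0].storage.capacity=15Gi'`, should also work.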