Open shankar-vng opened 1 year ago
Thanks for submitting. Will expose a setting to control the retention period.
@shankar-vng, can you try the latest version with the following Helm flag?
helm upgrade --atomic -i -n kubevious \
--version 1.2.1 \
--set collector.historyRetentionDays=5 \
kubevious kubevious/kubevious
Make sure that MySQL has enough free space for the services to come up initially.
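If it helps to confirm the override took effect, the user-supplied values of the release can be inspected (assuming the release is named kubevious in the kubevious namespace, as in the command above):
helm get values kubevious -n kubevious
# Expected to include:
# collector:
#   historyRetentionDays: 5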
Describe the bug
When too many events are generated, even a 35GB volume fills up to 100% capacity. The volume held far too many snapshots (less than 5-7 days old, ~1GB), but cleanup only happens after the default 14-day retention period, which I guess is the expected behaviour. A storage cleanup action triggered at a configurable usage percentage, settable via the Helm chart, would be of great help 🙏🙏
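For reference, one way to check the current volume usage is to run df inside the MySQL pod. The pod name and data path below are assumptions based on a typical StatefulSet install and may differ in your cluster:
# Find the MySQL StatefulSet pod name
kubectl get pods -n kubevious
# Check volume usage inside the pod (pod name and mount path are illustrative)
kubectl exec -n kubevious kubevious-mysql-0 -- df -h /var/lib/mysql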
To Reproduce
Create a scenario where a lot of events are generated, ideally across more than one namespace. This could include crashing pods/deployments, frequent jobs, etc., that cause state changes in multiple API resources in the cluster that are watched by Kubevious; see the sketch below.
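As a rough sketch, a crash-looping deployment plus a frequent CronJob will both generate a steady stream of state changes. The names and namespaces are arbitrary:
# Two throwaway namespaces
kubectl create namespace test-a
kubectl create namespace test-b
# A deployment whose pods exit immediately, producing constant restarts and events
kubectl create deployment crasher -n test-a --image=busybox -- sh -c "exit 1"
# A CronJob that runs every minute, creating and completing pods repeatedly
kubectl create cronjob churner -n test-b --image=busybox --schedule="*/1 * * * *" -- sh -c "echo churn"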
Expected behavior
Kubevious should be able to clean up the MySQL data based on a configurable usage percentage supported via the Helm chart.
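Until such a feature exists, one possible external workaround is a watchdog script that checks the volume's usage percentage and re-applies the release with a shorter retention window when a threshold is crossed. This is only a sketch: the pod name, data path, threshold, and the reduced retention value are all assumptions, not actual Kubevious behaviour, and df --output requires GNU coreutils inside the container:
#!/bin/sh
# Hypothetical watchdog: if the MySQL volume exceeds THRESHOLD percent,
# re-apply the release with a shorter retention period.
THRESHOLD=80
USAGE=$(kubectl exec -n kubevious kubevious-mysql-0 -- \
  df --output=pcent /var/lib/mysql | tail -1 | tr -d ' %')
if [ "$USAGE" -gt "$THRESHOLD" ]; then
  helm upgrade --atomic -i -n kubevious \
    --set collector.historyRetentionDays=2 \
    kubevious kubevious/kubevious
fi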
Screenshots
Screenshot from the MySQL STS pod