Open KieranP opened 2 months ago
Generally, updating the retention period is a very resource-intensive operation. We recommend increasing your allocated resource buffer based on how much data you currently have in SigNoz. So if you have 10 TB of data in SigNoz, you need a larger resource buffer for this operation than if you have 1 TB stored in your SigNoz setup.
@pranay01 We have about 30 GB of data so far, most of it logs, and our server has 4 CPUs and 16 GB of RAM. But I'm not able to change the retention for traces without CPU maxing out at 100% for a while before the operation eventually fails. Is that expected?
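If the retention update is being applied as a ClickHouse TTL change (which is how SigNoz stores retention settings), the long CPU spike is likely a TTL mutation rewriting existing data parts. A hedged way to check whether such a mutation is still in flight is to run the following against the SigNoz ClickHouse instance (a sketch, assuming direct `clickhouse-client` access to that instance):

```sql
-- List mutations that have not finished yet; a retention (TTL) change
-- that is still rewriting parts should show up here with is_done = 0.
SELECT table, command, parts_to_do, is_done, latest_fail_reason
FROM system.mutations
WHERE is_done = 0;
```

If a row for the trace tables lingers here with a non-empty `latest_fail_reason`, that would explain the operation erroring out after pegging the CPU.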
Bug description
I set up SigNoz earlier this week (5 days ago). The initial setting was for traces to be retained for 7 days.
After setting up S3 cold storage, I am updating the settings to retain traces for 1 month and send them to S3 after 7 days instead.
However, when I try to change the settings, CPU pegs at 100% for a long time and SigNoz eventually errors out.
Why would it be doing this when the data is new enough that nothing needs to be sent to S3 or removed?
What would be causing this maxed-out, sustained CPU usage?
Expected behavior
It would simply update the config, recognising that all traces are within the retention window.
How to reproduce
Install Signoz, collect 5 days worth of traces, change settings to retain for 1 month and send to S3 after 7 days.
Version information