What's wrong?
I just deployed Alloy via the k8s-monitoring Helm chart and noticed that memory usage was fairly high for a single-node cluster running only a handful of applications: around 700M.

While investigating, I discovered that the default API-server metrics have extremely high cardinality; I had about 50k active time series. By dropping a few metrics I got this down to around 10k, but memory usage only came down to about 500M. I then got more aggressive and simply disabled the apiserver integration, which left about 2k active time series and brought memory down to 350M. As a last step I removed all relabels (I had one per job to change the job name), which got me to about 300M steady state.
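For illustration, dropping the highest-cardinality apiserver series can be done with a `prometheus.relabel` rule along these lines (the component labels and the metric regex here are examples, not the exact rules I used):

```alloy
// Drop some of the highest-cardinality apiserver histogram series
// before they reach the remote_write WAL.
prometheus.relabel "drop_apiserver_buckets" {
  forward_to = [prometheus.remote_write.victoria.receiver]

  rule {
    source_labels = ["__name__"]
    regex         = "apiserver_request_duration_seconds_bucket|apiserver_request_slo_duration_seconds_bucket|apiserver_response_sizes_bucket"
    action        = "drop"
  }
}
```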
This seems quite high as a baseline for a small single-node cluster. I understand there will be some flat minimum cost for running Alloy, but I wanted to check: is this expected for such a small workload?

For comparison, the backend is VictoriaMetrics, and its memory usage is consistently lower than Alloy's steady state.

Here's a visual. You can see the step function as I restart with each successively smaller active-time-series count:
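For anyone wanting to reproduce the chart, it plots roughly these two queries (this assumes Alloy's self-reported remote_write WAL metrics and cAdvisor container metrics are being scraped; exact metric names may differ by version):

```promql
# Active series held in Alloy's remote_write WAL
sum(prometheus_remote_write_wal_storage_active_series)

# Alloy container memory (working set)
sum(container_memory_working_set_bytes{pod=~"alloy.*"})
```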
Configuration is very straightforward (see below).
Steps to reproduce
Installed via Helm
System information
Kubernetes v1.30.1+k3s1
Software version
v1.2
Configuration
Note - this is being deployed as a subchart from ArgoCD, so the values are under `k8s-monitoring`.

Helm values:
Alloy config:
Logs
No response