Open yrunts opened 4 years ago
Got the same "surprise" on the bill. Probably a bug; plenty of metrics are reported, but this rabbitmq/erlang_vm_allocators
one is far above the others (around 16 MB/day for 3 instances).
:arrow_up: the blue line
I am noticing this too. Oddly enough, my erlang_vm_allocators
"Metric bytes ingested"
graph looks like a strong heartbeat, which resembles what mine did when my bill went up dramatically after enabling metrics.
This one is sitting at 49.07 B/s of metrics, down from 73 B/s after removing one rabbitmq instance from the cluster. That's a lot of metric traffic for what I assume is a small allocator.
Is there any way to selectively disable metrics ingestion for this service? I want to use Stackdriver, but not this much of it.
At first I checked whether the rabbitmq_prometheus plugin supports filtering. Unfortunately it does not: https://github.com/rabbitmq/rabbitmq-server/discussions/2739. Next I tried to use the prometheus-to-sd whitelist feature, but it seems to require the exact name of each metric, so for example whitelisted=rabbitmq or whitelisted=rabbitmq_* doesn't work. The only way I see is to use some external tool that sits between the rabbitmq exporter and prometheus-to-sd and filters metrics by mask.
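For anyone else stuck here, a minimal sketch of such an in-between tool: a tiny HTTP proxy that scrapes the exporter's text-format output, drops any metric family matching a mask, and re-serves the rest for prometheus-to-sd to scrape. The upstream URL, listen port, and drop pattern below are illustrative assumptions, not values confirmed in this thread.

```python
# Sketch of a filtering proxy between the RabbitMQ Prometheus exporter and
# prometheus-to-sd. Assumptions: exporter on localhost:15692 (the
# rabbitmq_prometheus default), proxy on port 9642, and we drop the
# erlang_vm_allocators* families -- adjust all three to taste.
import re
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "http://localhost:15692/metrics"  # assumed exporter endpoint
DROP = re.compile(r"^erlang_vm_allocators")  # mask of families to discard

def filter_metrics(text: str, drop: re.Pattern = DROP) -> str:
    """Remove samples (and their # HELP / # TYPE lines) whose family matches."""
    kept = []
    for line in text.splitlines():
        if line.startswith(("# HELP ", "# TYPE ")):
            family = line.split()[2]          # third token is the metric name
        else:
            # sample line: name{labels} value  -> name ends at '{' or space
            family = line.split("{", 1)[0].split(" ", 1)[0]
        if not drop.match(family):
            kept.append(line)
    return "\n".join(kept) + "\n"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        with urllib.request.urlopen(UPSTREAM) as resp:
            body = filter_metrics(resp.read().decode())
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.end_headers()
        self.wfile.write(body.encode())

# To run the proxy (blocks forever):
#   HTTPServer(("", 9642), Handler).serve_forever()
# then point prometheus-to-sd at http://localhost:9642 instead of the exporter.
```

This is the crude "filter by mask" workaround, not a supported feature; a sidecar Prometheus with metric_relabel_configs would be a heavier but more standard alternative.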
Category:
Kubernetes apps https://console.cloud.google.com/marketplace/details/google/rabbitmq?q=rabbitmq
Type:
After enabling 'Export metrics to Stackdriver', the ingestion rate for rabbitmq/erlang_vm_allocators is approximately 75 B/s. In my case, a rabbitmq cluster with 3 replicas ingested almost 1 GB per month, at a cost of around 300 EUR.
Maybe it is worth notifying users about the additional costs when they enable 'Enable Stackdriver Metrics Exporter' in the Marketplace?