wangjinxiang0522 opened this issue 3 months ago
Pinging code owners:
receiver/prometheus: @Aneurysm9 @dashpole
See Adding Labels via Comments if you do not have permissions to add labels yourself.
Does it stick around for about 5 minutes? If so, this sounds like we are missing staleness markers when a pod is evicted.
@dashpole Yes, thanks for your reply. How should I modify the parameters to solve this issue?
Does this happen only when a pod is evicted? Or also when a pod is deleted?
Yes, it happens when a pod is evicted or deleted.
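For context, here is a minimal sketch of the kind of setup under which this behaviour shows up, assuming the prometheus receiver discovers pods via kubernetes_sd_configs and forwards metrics to the prometheusremotewrite exporter (the reporter's actual configuration is not shown in this thread; the job name, role, and endpoint below are placeholders):

```yaml
# Hypothetical illustration only, based on the components named in this issue
# (receiver/prometheus and exporter/prometheusremotewrite). Not the reporter's config.
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: kubernetes-pods
          kubernetes_sd_configs:
            - role: pod   # targets come and go as pods are created, evicted, or deleted

exporters:
  prometheusremotewrite:
    endpoint: http://example.invalid/api/v1/write   # placeholder remote-write endpoint

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [prometheusremotewrite]
```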
My best guess is that when we apply the new config to the service manager and discovery manager, it removes the targets without generating a staleness marker. But fixing it will probably require a change in the Prometheus server (prometheus/prometheus). We need to reproduce this with the Prometheus server (updating the config file to remove a static target) and see whether the series is marked stale (i.e. the line correctly stops) in the graph.
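For that reproduction, a hypothetical minimal prometheus.yml could look like the following (job name and ports are placeholders): start Prometheus with both static targets, delete the second target from the file, reload the config (SIGHUP, or POST /-/reload if Prometheus was started with --web.enable-lifecycle), and check whether the removed target's series stop immediately in the graph instead of lingering for the ~5 minute lookback window:

```yaml
# Hypothetical prometheus.yml for reproducing the staleness-marker behaviour
# described above when a scrape target is removed from the config.
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: repro
    static_configs:
      - targets:
          - localhost:9100   # target that stays in the config
          - localhost:9101   # target to delete before reloading
```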
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.
Component(s)
receiver/prometheus, exporter/prometheusremotewrite
Describe the issue you're reporting
Actual Result