kubevirt / kubevirt

Kubernetes Virtualization API and runtime in order to define and manage virtual machines.
https://kubevirt.io
Apache License 2.0

Add a KubeVirt setting that allows adding custom labels to the ServiceMonitor #10219

Closed. elchenberg closed this issue 3 months ago.

elchenberg commented 1 year ago

Is your feature request related to a problem? Please describe:

I want to collect metrics from the kubevirt-prometheus-metrics service. I have configured KubeVirt to create a ServiceMonitor:

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
  [...]
spec:
  monitorAccount: kube-prometheus-stack-prometheus
  monitorNamespace: monitoring
  serviceMonitorNamespace: kubevirt
  [...]
```

This part seems to work: there is now a ServiceMonitor in my kubevirt namespace. But the metrics are still not collected, because my Prometheus is configured with a serviceMonitorSelector:

```yaml
  serviceMonitorSelector:
    matchLabels:
      release: kube-prometheus-stack
```

I could not find a way to tell KubeVirt to add this label to the ServiceMonitor.
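
For context, here is a rough sketch of how that selector typically appears on the Prometheus custom resource deployed by kube-prometheus-stack (resource name and namespace are illustrative, based on the values above). Because only ServiceMonitors carrying the matching label are scraped, the unlabeled one created by virt-operator is silently ignored:

```yaml
# Sketch of a Prometheus custom resource (monitoring.coreos.com/v1) as deployed
# by kube-prometheus-stack; names are illustrative, not copied from a real cluster.
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: kube-prometheus-stack-prometheus
  namespace: monitoring
spec:
  # Empty namespace selector: consider ServiceMonitors from all namespaces,
  # including the kubevirt namespace.
  serviceMonitorNamespaceSelector: {}
  # Only ServiceMonitors with this label are scraped; the one created by
  # virt-operator does not carry it and is therefore skipped.
  serviceMonitorSelector:
    matchLabels:
      release: kube-prometheus-stack
```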

Describe the solution you'd like:

The KubeVirt CRD gets a serviceMonitorSelector (or serviceMonitorLabel, serviceMonitorAdditionalLabels, ...) setting that can be used to configure a label that will be added to the ServiceMonitor, for example:

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
  [...]
spec:
  monitorAccount: kube-prometheus-stack-prometheus
  monitorNamespace: monitoring
  serviceMonitorNamespace: kubevirt
  serviceMonitorSelector:
    release: kube-prometheus-stack
  [...]
```
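
To make the intent concrete, here is a hypothetical rendering of the ServiceMonitor that virt-operator would then create. The object name, service selector, and endpoint are assumptions for illustration only and are not taken from the operator source; the point is that the configured label ends up in metadata.labels:

```yaml
# Hypothetical result: the operator-managed ServiceMonitor with the label from
# spec.serviceMonitorSelector applied (name, selector, and endpoint are illustrative).
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kubevirt
  namespace: kubevirt
  labels:
    release: kube-prometheus-stack   # propagated from the KubeVirt CR
spec:
  selector:
    matchLabels:
      prometheus.kubevirt.io: "true"   # illustrative selector for kubevirt-prometheus-metrics
  endpoints:
    - port: metrics
```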

Describe alternatives you've considered:

I added the label manually and Prometheus started to scrape the metrics. But since the ServiceMonitor is managed by the virt-operator I would like to avoid manual changes.
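
For reference, the one-off workaround amounts to something like `kubectl label servicemonitor <name> -n kubevirt release=kube-prometheus-stack` (the ServiceMonitor name is whatever virt-operator created; `kubectl get servicemonitors -n kubevirt` shows it), which leaves the object looking roughly like this until the operator possibly reconciles the change away:

```yaml
# Manual workaround (sketch): label added by hand to the operator-managed
# ServiceMonitor. The object name is an assumption, and virt-operator may
# revert the change on its next reconciliation.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kubevirt
  namespace: kubevirt
  labels:
    release: kube-prometheus-stack   # added manually so Prometheus selects it
```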

Additional context:

KubeVirt v1.0.0

kubevirt-bot commented 9 months ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

elchenberg commented 9 months ago

/remove-lifecycle stale

matletix commented 8 months ago

+1

kubevirt-bot commented 5 months ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kubevirt-bot commented 4 months ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

kubevirt-bot commented 3 months ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

/close

kubevirt-bot commented 3 months ago

@kubevirt-bot: Closing this issue.

In response to [this](https://github.com/kubevirt/kubevirt/issues/10219#issuecomment-2091248298):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.