lensapp / lens

Lens - The way the world runs Kubernetes
https://k8slens.dev/
MIT License

Add VictoriaMetrics support #5837

Open sergeyshaykhullin opened 2 years ago

sergeyshaykhullin commented 2 years ago

What would you like to be added:

Add the Prometheus endpoint as a URL: https://github.com/lensapp/lens/issues/337 It has to be /select/0/prometheus/api/v1/query_range?... to work with VictoriaMetrics.

Why is this needed: VictoriaMetrics uses a custom path for the Prometheus API.

Environment you are running Lens application on: More details: https://github.com/lensapp/lens/issues/337#issuecomment-1173153850

valyala commented 2 years ago

A temporary workaround is to run VictoriaMetrics behind an HTTP proxy such as nginx or vmauth, which would route incoming requests from /api/v1/* to /select/0/prometheus/api/v1/* according to the URL path format in VictoriaMetrics cluster.
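For illustration, a minimal nginx location along these lines could perform that routing; the `vmselect` hostname, port 8481, and tenant 0 are assumptions to adapt to your cluster:

```nginx
# Sketch: route Lens' Prometheus queries to tenant 0 of a VictoriaMetrics cluster.
# "vmselect" and port 8481 are placeholders for your actual vmselect endpoint.
location /api/v1/ {
    # With a URI in proxy_pass, nginx replaces the matched location prefix,
    # so /api/v1/query_range becomes /select/0/prometheus/api/v1/query_range.
    proxy_pass http://vmselect:8481/select/0/prometheus/api/v1/;
}
```

akaillidan's full nginx config later in this thread is a more complete variant of the same idea.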

b-a-t commented 2 years ago

I was trying to look into the code to make OpenLens work with my VM cluster, but all of a sudden it works out of the box O_O

@sergeyshaykhullin can you give the recent version a try? I have OpenLens 6.0.0-latest.1659305606594, to be exact, and:

(screenshots showing working metrics)
sergeyshaykhullin commented 2 years ago

I will try; I am using a Helm/Helm 14 setup.

akaillidan commented 1 year ago

A little hack for connecting to an external VictoriaMetrics cluster: I use vmauth on my cluster, so I need nginx basic auth.

So I created a simple nginx Deployment and Service.

default.conf

server {
  listen 80;

  set_real_ip_from 172.10.0.0/16;

  location / {
        proxy_ssl_server_name   on;
        proxy_ssl_name          $proxy_host;
        proxy_set_header        Host  vmselect.domain.com;
        proxy_set_header        X-Real-IP $remote_addr;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header        X-Forwarded-Proto $scheme;
        proxy_pass              https://vmselect.domain.com/select/0/prometheus/;
        proxy_read_timeout      90;
        proxy_set_header        Authorization "Basic bG9naW46UGFzc3dvcmQK";
       # proxy_redirect          https://vmagent.domain.com/ https://vmselect.domain.com/select/0/prometheus/;
  }

  location /healthz {
        access_log off;
        return 200;
  }

}

deploy.yaml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: CI_PROJECT_NAME
  labels:
    app: CI_PROJECT_NAME
spec:
  selector:
    matchLabels:
      app: CI_PROJECT_NAME
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: CI_PROJECT_NAME
    spec:
      containers:
      - name: CI_PROJECT_NAME
        image: CI_IMAGE_REPOSITORY
        imagePullPolicy: IfNotPresent
        readinessProbe:
          httpGet:
            path: /healthz
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 10
          timeoutSeconds: 2
          successThreshold: 2
      imagePullSecrets:
      - name: CI_GITHUB_REGISTRY
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: CI_PROJECT_NAME
  name: CI_PROJECT_NAME
spec:
  ports:
  - name: CI_PROJECT_NAME
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: CI_PROJECT_NAME
  sessionAffinity: None
  type: ClusterIP

bG9naW46UGFzc3dvcmQK is the base64-encoded login:Password credential pair. Note that this example value decodes with a trailing newline; `echo -n "login:Password" | base64` produces bG9naW46UGFzc3dvcmQ= without one, which is the form a Basic Authorization header expects.
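A quick sanity check of the encoding, using the placeholder credentials from the config above:

```shell
# Encode the Basic auth credentials without a trailing newline;
# printf avoids the newline that a plain `echo` would append.
printf 'login:Password' | base64
# → bG9naW46UGFzc3dvcmQ=

# Decode to verify the round trip.
printf 'bG9naW46UGFzc3dvcmQ=' | base64 -d
# → login:Password
```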

Then we can connect with the Helm option and, for example, prometheus/vmselect-nginx-proxy:80.

partymaker-py commented 1 year ago

@akaillidan you saved me a few hours. Thanks a lot!

perfectra1n commented 9 months ago

Just adding my 2c here: when using VMsingle, you just need to specify the following:

monitoring/vmsingle-victoria-metrics-single-server:8428
<vmsingle_namespace>/<vmsingle_service>:8428
vainkop commented 7 months ago

None of this works with VM cluster. @valyala, do you by any chance have a workaround for VM cluster?

valyala commented 7 months ago

@vainkop , you can use the following vmauth config for proxying /api/v1/* requests to tenant 0 at vmselect according to url format in VictoriaMetrics cluster:

unauthorized_user:
  url_map:
  - src_paths: ["/api/v1/.+"]
    url_prefix:
    - http://vmselect-1:8481/select/0/prometheus/
    - http://vmselect-2:8481/select/0/prometheus/

See more details here.
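The src_paths entry above is a regular expression (vmauth matches it against the full request path), so it covers any /api/v1/ endpoint Lens queries. A quick illustration with grep, using query_range and labels as example endpoints:

```shell
# Check example request paths against the vmauth src_paths regexp.
# vmauth anchors src_paths regexps, hence the ^ and $ here.
printf '/api/v1/query_range\n/api/v1/labels\n/metrics\n' \
  | grep -E '^/api/v1/.+$'
# → /api/v1/query_range
# → /api/v1/labels
```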

roberto-iannone-riatlas commented 7 months ago

Try using vm/vm-select:8481//select/0/prometheus/; it works for me.

vainkop commented 7 months ago

Try using vm/vm-select:8481//select/0/prometheus/; it works for me.

Is the // in your vm/vm-select:8481//select/0/prometheus/ a typo? Are you using VM cluster, not VM single?

vainkop commented 7 months ago

@vainkop , you can use the following vmauth config for proxying /api/v1/* requests to tenant 0 at vmselect according to url format in VictoriaMetrics cluster:

unauthorized_user:
  url_map:
  - src_paths: ["/api/v1/.+"]
    url_prefix:
    - http://vmselect-1:8481/select/0/prometheus/
    - http://vmselect-2:8481/select/0/prometheus/

See more details here.

I will try that. It's weird that something like that is needed, while it works for VM single without any proxy, and I have only 1 tenant anyway.

Also, why do I need to specify 2 vmselect instances if I have a single k8s Service with 2 endpoints? Shouldn't I specify just the k8s Service?

Btw I'm using the following helm chart (but for v1.97): https://github.com/VictoriaMetrics/helm-charts/blob/victoria-metrics-k8s-stack-0.18.12/charts/victoria-metrics-k8s-stack/values.yaml

vmcluster:
  enabled: true
  annotations: {}

  spec:
    retentionPeriod: "14"
    replicationFactor: 2

    vmstorage:
      image:
        tag: v1.97.1-cluster
      replicaCount: 2
      storageDataPath: "/vm-data"

      extraArgs:
        dedup.minScrapeInterval: "10s"

      storage:
        volumeClaimTemplate:
          spec:
            resources:
              requests:
                storage: 200Gi

      resources:
        requests:
          cpu: "2"
          memory: "2Gi"
        limits:
          cpu: "4"
          memory: "4Gi"

    vmselect:
      image:
        tag: v1.97.1-cluster

      replicaCount: 2
      cacheMountPath: "/select-cache"

      extraArgs:
        dedup.minScrapeInterval: "10s"

      storage:
        volumeClaimTemplate:
          spec:
            resources:
              requests:
                storage: 10Gi

      resources:
        requests:
          cpu: "1"
          memory: "500Mi"
        limits:
          cpu: "1"
          memory: "1Gi"

    vminsert:
      image:
        tag: v1.97.1-cluster
      replicaCount: 2

      extraArgs:
        maxLabelsPerTimeseries: "50"

      resources:
        requests:
          cpu: "1"
          memory: "1Gi"
        limits:
          cpu: "2"
          memory: "2Gi"

It seems that I need to use a separate chart to install vmauth, though, while it would be logical to be able to enable it in this one...

vainkop commented 7 months ago

@vainkop , you can use the following vmauth config for proxying /api/v1/* requests to tenant 0 at vmselect according to url format in VictoriaMetrics cluster:

unauthorized_user:
  url_map:
  - src_paths: ["/api/v1/.+"]
    url_prefix:
    - http://vmselect-1:8481/select/0/prometheus/
    - http://vmselect-2:8481/select/0/prometheus/

See more details here.

That helped, thank you @valyala !

I've installed vmauth using this chart (0.4.7) plus:

config:
  unauthorized_user:
    url_map:
    - src_paths:
      - "/api/v1/.+"
      url_prefix:
      - "http://vmselect-vm:8481/select/0/prometheus/"

And in Lens: Prometheus Operator and monitoring/vm-auth:8427.

Also, I've upgraded the VictoriaMetrics cluster to 1.99.

roberto-iannone-riatlas commented 7 months ago

Try using vm/vm-select:8481//select/0/prometheus/; it works for me.

Is the // in your vm/vm-select:8481//select/0/prometheus/ a typo? Are you using VM cluster, not VM single?

No, it is not a typo, it is a trick ;) I'm using VM cluster and it works fine. The double '/' acts as an escape, allowing Lens to make the correct query.

genki commented 3 months ago

I had tried using vmsingle, but no request comes from Lens even if I set the service address to default/vmsingle-victoria-metrics-single-server:8428/prometheus. I am using Lens 2024.7.161041-latest. Has the latest version omitted this feature?

fatsolko commented 3 weeks ago

Try using vm/vm-select:8481//select/0/prometheus/; it works for me.

Is the // in your vm/vm-select:8481//select/0/prometheus/ a typo? Are you using VM cluster, not VM single?

No, it is not a typo, it is a trick ;) I'm using VM cluster and it works fine. The double '/' acts as an escape, allowing Lens to make the correct query.

Thank you very much! It helped.