lensapp / lens

Lens - The way the world runs Kubernetes
https://k8slens.dev/
MIT License
22.51k stars 1.47k forks

Possibility to use an external prometheus #909

Open ipnextgen opened 4 years ago

ipnextgen commented 4 years ago

What would you like to be added:

Possibility to configure the target Prometheus endpoint.

Why is this needed:

We don't run Prometheus inside the EKS cluster.

Environment you are running the Lens application on:

nevalla commented 4 years ago

Related to #894

ipnextgen commented 4 years ago

Isn't it just a matter of pointing Lens at the external Prometheus IP and port?

Nokel81 commented 3 years ago

Related to #1865

marcelobartsch-jt commented 3 years ago

On this same topic: we use New Relic to store our data, and they provide a Prometheus-compatible endpoint, which also needs some headers to be set in order to use it. It would be nice to have this supported in Lens. This documentation explains how to use it with Grafana, but gives the information that would be needed in Lens too: https://docs.newrelic.com/docs/integrations/grafana-integrations/set-configure/configure-new-relic-prometheus-data-source-grafana/
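In the meantime, the header requirement could be worked around the same way as the proxy shown later in this thread: a small in-cluster nginx that injects the headers and forwards to the New Relic endpoint. A minimal sketch; the endpoint hostname, header name, and key placeholder below are assumptions, so check the linked New Relic documentation for the exact values your account requires:

```nginx
# Illustrative nginx config: inject the auth header New Relic expects and
# proxy Prometheus API calls to its Prometheus-compatible endpoint.
# NEW_RELIC_API_KEY and the hostname are placeholders, not verified values.
server {
    listen 80;
    location / {
        proxy_set_header Authorization "Bearer NEW_RELIC_API_KEY";
        proxy_pass https://prometheus-api.newrelic.com/;
    }
}
```

Lens would then be pointed at the in-cluster service fronting this proxy rather than at New Relic directly.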

repudi8or commented 2 years ago

We run our own Prometheus in each EKS cluster. Is it possible to repoint Lens at a Prometheus running in a namespace other than "lens-metrics", so that the disk and memory metrics appear in the "cluster" view?
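For what it's worth, Lens already allows overriding the Prometheus service address per cluster in the cluster settings, using a namespace/service:port format (the same format used in a workaround later in this thread). The service name here is just an example:

```
monitoring/prometheus-server:80
```

So a Prometheus in a different namespace can be selected without moving it into "lens-metrics".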

renatomjr commented 2 years ago

Our use case: a Prometheus in agent mode that writes metrics to Cortex (could also be Grafana Mimir) in an external cluster.

maxpain commented 2 years ago

The same problem

HYmian commented 2 years ago

The same problem

Lp-Francois commented 1 year ago

To enrich the topic: we use Grafana Agent to ship logs to Grafana Cloud.

It would be great to be able to connect to a remote prometheus instance, in order to see metrics without having to install a local prometheus.

Thanks for the amazing work with Lens!

ktibi commented 1 year ago

Any news about this feature?

renatomjr commented 1 year ago

I solved my own use case (a remote Grafana Mimir cluster) with a simple local proxy. Something like this (in Terraform format):

locals {
  mimir_tenant_id = "my-tenant-id"
  mimir_username  = "my-username"
  mimir_password  = "my-password"
  mimir_host      = "mimir.example.com"
}

resource "kubernetes_secret" "mimir_proxy_config_file" {
  metadata {
    name      = "mimir-proxy-config-file"
    namespace = kubernetes_namespace.monitoring.id
  }
  data = {
    "default.conf" = <<EOT
server {
    listen       80;
    server_name  localhost;
    location / {
        proxy_set_header X-Scope-OrgID ${local.mimir_tenant_id};
        proxy_set_header Authorization "Basic ${base64encode("${local.mimir_username}:${local.mimir_password}")}";
        proxy_pass http://${local.mimir_host}/prometheus/;
    }
} 
EOT
  }
}

resource "kubernetes_deployment" "mimir_proxy" {
  metadata {
    name      = "mimir-proxy"
    namespace = kubernetes_namespace.monitoring.id
  }
  spec {
    selector {
      match_labels = {
        "app.kubernetes.io/name"     = "mimir-proxy"
        "app.kubernetes.io/instance" = "mimir-proxy"
      }
    }
    template {
      metadata {
        labels = {
          "app.kubernetes.io/name"     = "mimir-proxy"
          "app.kubernetes.io/instance" = "mimir-proxy"
        }
      }
      spec {
        container {
          image = "nginx:1.23.0-alpine"
          name  = "proxy"
          volume_mount {
            name       = "config"
            mount_path = "/etc/nginx/conf.d/"
          }
          port {
            name           = "http"
            container_port = 80
          }
        }
        volume {
          name = "config"
          secret {
            secret_name = kubernetes_secret.mimir_proxy_config_file.metadata[0].name
            items {
              key  = "default.conf"
              path = "default.conf"
            }
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "mimir_proxy" {
  metadata {
    name      = "mimir-proxy"
    namespace = kubernetes_namespace.monitoring.id
  }
  spec {
    type = "ClusterIP"
    selector = {
      "app.kubernetes.io/name"     = "mimir-proxy"
      "app.kubernetes.io/instance" = "mimir-proxy"
    }
    port {
      name        = "http"
      port        = 80
      target_port = "http"
    }
  }
}

Then in Lens I configured the "Prometheus Service Address" as monitoring/mimir-proxy:80.

This is also useful with tools like the Prometheus Adapter that need to reach a remote Prometheus.
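As a sketch of that combination: the prometheus-adapter Helm chart exposes `prometheus.url` and `prometheus.port` values, which could be pointed at the in-cluster proxy defined above instead of a local Prometheus. This values fragment is illustrative and untested; the service name is taken from the resources above:

```yaml
# prometheus-adapter Helm values (illustrative): query metrics through the
# mimir-proxy service defined above rather than a local Prometheus.
prometheus:
  url: http://mimir-proxy.monitoring.svc
  port: 80
```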

elmariofredo commented 1 year ago

I have created a simple Helm chart for Prometheus remote_read, with extensive documentation on how to get all the necessary details from Grafana Cloud. Enjoy 😉

https://github.com/Container-Driven-Development/Grafana-Cloud-Proxy

ztzxt commented 2 months ago

I solved my own use case (a remote Grafana Mimir cluster) with a simple local proxy.

Thanks @renatomjr for the workaround. In our k8s setup, we used the mimir-querier service instead of mimir-query-frontend. For some reason, mimir-query-frontend was throwing 400 Bad Request for Lens's POST requests.

nordby commented 4 weeks ago

I solved my own use case (a remote Grafana Mimir cluster) with a simple local proxy.

Thanks @renatomjr for the workaround. In our k8s setup, we used the mimir-querier service instead of mimir-query-frontend. For some reason, mimir-query-frontend was throwing 400 Bad Request for Lens's POST requests.

Can you tell us more? We also get err="invalid parameter \"start\": cannot parse \"\" to a valid timestamp"

ohayak commented 3 weeks ago

mimir-querier

I confirm this behaviour. I'm using K8sStudio and it works with mimir-query-frontend, while all requests sent by Lens to mimir-query-frontend return this error: err="invalid parameter \"start\": cannot parse \"\" to a valid timestamp". I think the POST query from Lens should use the now() value instead of sending an empty start parameter.

NB: for those who use Mimir (installed via the mimir-distributed Helm chart), add the following nginx rule to your Helm values in order to handle requests sent to mimir-nginx without the Prometheus API prefix '/prometheus':

nginx:
  nginxConfig:
    serverSnippet: |
      # Handle query frontend calls without the prometheus api prefix
      location /api/v1/query {
        proxy_pass http://mimir-query-frontend.mimir.svc.cluster.local.:8080/prometheus$request_uri;
      }

ztzxt commented 1 week ago

Can you tell us more? We also get "err="invalid parameter \"start\": cannot parse \"\" to a valid timestamp""

@nordby we ended up using the prometheus Helm chart with the following values instead of an nginx proxy:

alertmanager:
  enabled: false
kube-state-metrics:
  enabled: false
prometheus-node-exporter:
  enabled: false
prometheus-pushgateway:
  enabled: false
server:
  ingress:
    enabled: true
    hosts:
      - <redacted>
  persistentVolume:
    enabled: false
  remoteRead:
    - filter_external_labels: false
      headers:
        X-Scope-OrgID: anonymous
      read_recent: true
      url: http://mimir-query-frontend:8080/prometheus/api/v1/read
serverFiles:
  prometheus.yml:
    rule_files: []
    scrape_configs: []

Lens queries Prometheus, and Prometheus uses the remote-read feature to query Mimir.