Open ipnextgen opened 4 years ago
Related to #894
Isn't it just a matter of being able to point Lens at the external Prometheus IP and port?
Related to #1865
On this same topic: we use New Relic to store our data, and they provide a Prometheus-compatible endpoint which also needs some headers to be set in order to use it. It would be nice to have this supported in Lens. This documentation explains how to use it with Grafana, and gives the information that would be needed in Lens too: https://docs.newrelic.com/docs/integrations/grafana-integrations/set-configure/configure-new-relic-prometheus-data-source-grafana/
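Until Lens can set headers itself, one workaround is a tiny in-cluster proxy that injects them. A minimal nginx sketch — the `X-Query-Key` header name and the endpoint host are assumptions taken from the linked New Relic doc, so verify both there before using:

```nginx
server {
    listen 80;
    location / {
        # Inject the auth header New Relic's Prometheus endpoint expects.
        # Header name and upstream host are assumptions from the linked doc.
        proxy_set_header X-Query-Key "<your-query-key>";
        proxy_pass https://prometheus-api.newrelic.com/;
    }
}
```

Lens would then be pointed at this proxy's Service instead of the external endpoint.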
We run our own Prometheus in each EKS cluster. Is it possible to point Lens at a Prometheus running in a namespace other than "lens-metrics", so that the disk and memory metrics appear in the "cluster" view?
Our use case: a Prometheus in agent mode that writes metrics to Cortex (could also be Grafana Mimir) in an external cluster.
The same problem
To add another use case to the list: we use Grafana Agent to ship logs to Grafana Cloud.
It would be great to be able to connect to a remote Prometheus instance, in order to see metrics without having to install a local Prometheus.
Thanks for the amazing work with Lens!
Any news about this feature?
I solved my own use case (a remote Grafana Mimir cluster) with a simple local proxy. Something like this (in Terraform format):
locals {
  mimir_tenant_id = "my-tenant-id"
  mimir_username  = "my-username"
  mimir_password  = "my-password"
  mimir_host      = "mimir.example.com"
}

resource "kubernetes_secret" "mimir_proxy_config_file" {
  metadata {
    name      = "mimir-proxy-config-file"
    namespace = kubernetes_namespace.monitoring.id
  }
  data = {
    "default.conf" = <<EOT
server {
  listen 80;
  server_name localhost;
  location / {
    proxy_set_header X-Scope-OrgID ${local.mimir_tenant_id};
    proxy_set_header Authorization "Basic ${base64encode("${local.mimir_username}:${local.mimir_password}")}";
    proxy_pass http://${local.mimir_host}/prometheus/;
  }
}
EOT
  }
}
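As a sanity check on the Authorization value that Terraform's base64encode() interpolates into default.conf, the same header can be built and round-tripped in a few lines of Python (using the placeholder credentials from the locals block):

```python
import base64

# Placeholder credentials from the locals block (replace with real values)
creds = "my-username:my-password"

# Same value Terraform's base64encode() interpolates into default.conf
header = "Basic " + base64.b64encode(creds.encode()).decode()

# Round-trip check: decoding must recover the original user:pass pair
assert base64.b64decode(header[len("Basic "):]).decode() == creds
print(header)
```

Handy for comparing against what the proxy actually sends if Mimir rejects the credentials.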
resource "kubernetes_deployment" "mimir_proxy" {
  metadata {
    name      = "mimir-proxy"
    namespace = kubernetes_namespace.monitoring.id
  }
  spec {
    selector {
      match_labels = {
        "app.kubernetes.io/name"     = "mimir-proxy"
        "app.kubernetes.io/instance" = "mimir-proxy"
      }
    }
    template {
      metadata {
        labels = {
          "app.kubernetes.io/name"     = "mimir-proxy"
          "app.kubernetes.io/instance" = "mimir-proxy"
        }
      }
      spec {
        container {
          image = "nginx:1.23.0-alpine"
          name  = "proxy"
          volume_mount {
            name       = "config"
            mount_path = "/etc/nginx/conf.d/"
          }
          port {
            name           = "http"
            container_port = 80
          }
        }
        volume {
          name = "config"
          secret {
            secret_name = kubernetes_secret.mimir_proxy_config_file.metadata[0].name
            items {
              key  = "default.conf"
              path = "default.conf"
            }
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "mimir_proxy" {
  metadata {
    name      = "mimir-proxy"
    namespace = kubernetes_namespace.monitoring.id
  }
  spec {
    type = "ClusterIP"
    selector = {
      "app.kubernetes.io/name"     = "mimir-proxy"
      "app.kubernetes.io/instance" = "mimir-proxy"
    }
    port {
      name        = "http"
      port        = 80
      target_port = "http"
    }
  }
}
Then in Lens I configured the "Prometheus Service Address" as monitoring/mimir-proxy:80.
This is also useful with tools like the Prometheus Adapter that need to talk to a remote Prometheus.
I have created a simple Helm chart for Prometheus remote_read, with extensive documentation on how to get all the necessary details from Grafana Cloud. Enjoy 😉
https://github.com/Container-Driven-Development/Grafana-Cloud-Proxy
> I solved my own use case (a remote Grafana Mimir cluster) with a simple local proxy.

Thanks @renatomjr for the workaround. In our k8s setup, we used the mimir-querier service instead of mimir-query-frontend; for some reason, mimir-query-frontend was throwing 400 Bad Request to Lens's POST requests.
Can you tell us more about mimir-querier? We also get err="invalid parameter \"start\": cannot parse \"\" to a valid timestamp".
I confirm this behaviour. I'm using K8sStudio and it works with mimir-query-frontend, but all requests sent by Lens to mimir-query-frontend return this error: err="invalid parameter \"start\": cannot parse \"\" to a valid timestamp". I think the POST query from Lens should use the now() value instead of sending an empty start parameter.
NB: for those who use Mimir (installed via the mimir-distributed Helm chart), add the following nginx rule to your Helm values in order to handle requests sent to mimir-nginx without the Prometheus API prefix '/prometheus':
nginx:
  nginxConfig:
    serverSnippet: |
      # Handle query frontend calls without the prometheus api prefix
      location /api/v1/query {
        proxy_pass http://mimir-query-frontend.mimir.svc.cluster.local.:8080/prometheus$request_uri;
      }
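The "cannot parse \"\"" error above comes from Lens sending an empty start parameter. For illustration only, a hypothetical sketch of a well-formed range-query body (standard library only; the parameter choices here are mine, not what Lens actually sends):

```python
import time
from urllib.parse import urlencode

# Hypothetical query_range parameters. Mimir rejects an empty "start",
# so every timestamp must be an explicit Unix time.
end = int(time.time())
body = urlencode({
    "query": "up",
    "start": str(end - 3600),  # one hour ago
    "end": str(end),
    "step": "60",
})
print(body)
```

A body shaped like this is accepted where the empty-start one is rejected.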
@nordby we ended up using the prometheus Helm chart with the following values instead of an nginx proxy:
alertmanager:
  enabled: false
kube-state-metrics:
  enabled: false
prometheus-node-exporter:
  enabled: false
prometheus-pushgateway:
  enabled: false
server:
  ingress:
    enabled: true
    hosts:
      - <redacted>
  persistentVolume:
    enabled: false
  remoteRead:
    - filter_external_labels: false
      headers:
        X-Scope-OrgID: anonymous
      read_recent: true
      url: http://mimir-query-frontend:8080/prometheus/api/v1/read
serverFiles:
  prometheus.yml:
    rule_files: []
    scrape_configs: []
Lens queries Prometheus, and Prometheus uses the remote-read feature to query Mimir.
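For reference, the server.remoteRead values above render to roughly this remote_read block in the resulting prometheus.yml (a sketch, assuming the chart passes the list through unchanged):

```yaml
remote_read:
  - url: http://mimir-query-frontend:8080/prometheus/api/v1/read
    headers:
      X-Scope-OrgID: anonymous
    # Serve recent samples from Mimir too, since this Prometheus scrapes nothing
    read_recent: true
    filter_external_labels: false
```

With scrape_configs empty, read_recent: true is what makes this stand-in Prometheus answer Lens's queries at all.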
What would you like to be added:
Possibility to configure the values of the target Prometheus.
Why is this needed:
We don't run Prometheus inside the EKS cluster.
Environment you are running Lens application on: