Open: starkers opened this issue 4 years ago
@starkers Lens should already auto-detect a Prometheus installation created by prometheus-operator, see: https://github.com/lensapp/lens/blob/master/src/main/prometheus/operator.ts#L10 .
interesting.. it doesn't detect mine.. I'll try to find out why and report back
Lens: 3.6.4
Electron: 9.1.2
Chrome: 83.0.4103.122
Copyright 2020 Mirantis, Inc.
Can't query metrics from cluster with prometheus-operator.
UPDATE:
I set the address manually to monitoring/prometheus-kube-prometheus-prometheus:9090, but Lens still showed errors with the OLD settings. Working fine after restarting Lens.
Auto-detection worked fine for me until and including version 3.6.9. After upgrading to 4.0.0, Lens can't connect to our Prometheus anymore. It doesn't matter whether it's set to auto-detect or configured manually, even after restarting Lens.
Update: We have 4 vanilla K8s clusters and 4 Rancher clusters. The vanilla ones all have the same Prometheus-Operator installation; in one of those clusters Prometheus cannot be detected. Regarding the Rancher clusters, not a single Prometheus is detected. All have an up-to-date installation of Prometheus via the Helm chart. Before the update to 4.0.0 all Prometheus instances were detected.
Same here. Can't get metrics working.
Same. According to the debug console, Lens doesn't actually honor the configuration, and is still looking in the lens-metrics namespace, even when the settings say otherwise.
Guys! Just restart Lens after setting the configuration.
Same issue here (Lens v4.0.8) on a new K8s cluster with the kube-prometheus stack installed. Changed the cluster settings in Lens to use "Prometheus Operator" and set the service name accordingly. Restarted both the app and my machine (just in case) and, looking at the dev console, it's still trying to hit the lens-metrics endpoints...
The URLs it's looking for in my console are:
GET http://localhost:41371/api-kube/apis/apps/v1/namespaces/lens-metrics/statefulsets/prometheus 404 (Not Found)
POST http://[REDACTED].localhost:41371/api/metrics?start=1611689640&end=1611693240&step=60&kubernetes_namespace= net::ERR_EMPTY_RESPONSE
GET http://localhost:41371/api-kube/apis/apps/v1/namespaces/lens-metrics/statefulsets/prometheus 404 (Not Found)
As per @terrafying, I don't think it's honoring the configuration.
Same issue. Can't see node- and pod-level metrics. Lens: 4.0.7
Same issue. Lens: 4.0.8
Maybe #3653 will resolve this?
I lost all hope a long time ago.
@korjavin Do you have list services --all-namespaces permissions? If not, then I think there is room for us to improve the discovery of Prometheus. If you look at your logs (by running Lens from the console, for instance), do you see something like "Helm: failed to list services:"?
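A quick way to check is kubectl auth can-i list services --all-namespaces. As a rough illustration (the name below is made up, this is not something Lens ships), the kubeconfig user needs a cluster-wide rule along these lines, bound to it with a ClusterRoleBinding:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: service-discovery-reader   # illustrative name
rules:
  # Listing Services across all namespaces is what the Prometheus auto-detection needs here.
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["list"]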
I see only these lines when I start it from the console (I use the AppImage distribution):
/opt/Lens --no-sandbox
info: ▪ 📟 Setting Lens as protocol client for lens:// +0ms
info: ▪ 📟 Protocol client register failed ❗ +16ms
Since you are on Linux, your log files should be under ~/.config/Lens/logs/
I don't see any errors in those files.
info: [KUBE-AUTH]: out-channel "kube-auth:bb3e7201bf564982939f4afd506acc06" {"data":"Authentication proxy started\n","meta":{"id":"bb3e7201bf564982939f4afd506acc06","name":"data","ready":false,"online":false,"accessible":false,"disconnected":true}}
info: [CLUSTER]: refresh {"id":"bb3e7201bf564982939f4afd506acc06","name":"data","ready":true,"online":true,"accessible":true,"disconnected":false}
If you open up the devtools do you see any failures in the console?
Only this, seems unrelated
instrument.js:109 [IPC]: failed to send IPC message "renderer:cluster-id-of-active-view" to view "unknown=14"
{error: "Error: Could not call remote method 'send'. Check …ectron/dist/main/integrations/electron.js:63:25)↵"}
instrument.js:109 [IPC]: failed to send IPC message "history:can-go-back" to view "unknown=14"
{error: "Error: Could not call remote method 'send'. Check …ectron/dist/main/integrations/electron.js:63:25)↵"}
instrument.js:109 [IPC]: failed to send IPC message "history:can-go-forward" to view "unknown=14"
{error: "Error: Could not call remote method 'send'. Check …ectron/dist/main/integrations/electron.js:63:25)↵"}
Yeah that is unrelated
This one seems related, but it's not an error
[JSON-API] request POST http://127.0.0.1:42917/api/metrics?start=1634132400&end=1634136000&step=60&kubernetes_namespace=
{reqInit: {…}, data: {…}}
data: {memoryUsage: {…}, workloadMemoryUsage: {…}, memoryRequests: {…}, memoryLimits: {…}, memoryCapacity: {…}, …}
reqInit: {headers: {…}, method: "post", body: "{"memoryUsage":{"category":"cluster","nodes":"gke-…9305d-vss8|gke-data-proxy-server-8d129389-3g1r"}}"}
__proto__: Object
array(0) there, perhaps that's the problem?
Yes, that would be. So you will probably like the fix from #3653, which is currently available in the 5.3.0-alpha.2 release.
Yes, mine is still 5.2; trying to update.
It is not currently possible to upgrade from a latest release to an alpha release within Lens. You have to do it manually. You can grab the alpha release binaries from the community Slack: https://k8slens.slack.com/archives/CDR6AHSCC/p1634133637169700?thread_ts=1634133515.169600&cid=CDR6AHSCC
no luck
You need to change your prometheus setting from "Helm" to "Operator"
I tried
I did reconnect, and even restarted Lens (killing all the background processes that Lens likes to leave behind).
Do you have the kubernetes_node label in use, as mentioned in https://github.com/lensapp/lens/blob/master/troubleshooting/custom-prometheus.md#kube-prometheus ?
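For the kube-prometheus-stack Helm chart, that section boils down to relabeling the node-exporter targets so their metrics carry a kubernetes_node label, roughly along these lines (a sketch of Helm values assuming the prometheus-node-exporter subchart; double-check the exact keys against the linked doc):

prometheus-node-exporter:
  prometheus:
    monitor:
      relabelings:
        # Copy the node name already known to service discovery into the
        # kubernetes_node label that Lens queries by.
        - action: replace
          regex: (.*)
          replacement: $1
          sourceLabels:
            - __meta_kubernetes_pod_node_name
          targetLabel: kubernetes_node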
Seems that I missed that part. I will be back with an update.
We have https://github.com/lensapp/lens/issues/3955 which tracks a more visible source of that
Do you know an example metric for a node and for a pod, so I could check manually whether the relabelling works?
I added the kubernetes_node mapping, but still don't see any difference.
For example, I queried the metric node_network_receive_drop_total, and I can see kubernetes_node labels.
A kind attempt to attract attention to this by posting again ;)
Do you know an example metric for a node and for a pod, so I could check manually whether the relabelling works?
To enable automatic service discovery for a prometheus-operator-like service (I'm also using VictoriaMetrics), you'll need to add two labels to your service. We should use more standardized labels to support automatic discovery than the ones listed below.
kind: Service
apiVersion: v1
metadata:
  name: prometheus
  namespace: monitoring
  labels:
    app: prometheus
    # Labels used for k8s lens
+   operated-prometheus: "true"
+   self-monitor: "true"
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 9090
      protocol: TCP
  selector:
    app: prometheus
Having the same issue. Pod metrics are available though. Node and cluster metrics are missing
What would you like to be added:
It would be really nice to improve the way we auto-discover Prometheus. Right now it's a very manual process, which makes me sad.
My example use case has a Prometheus object named k8s, inside the monitoring namespace, configured to serve traffic under the URI of /prom/.
So when I add a cluster.. I need to set:
monitoring/prometheus-k8s:9090/prom
Firstly, I would suggest that the hint here should probably be <namespace>/<service>:<port>/<routePrefix>.
Secondly, we can extend Lens to look at the CoreOS operator's actual object to discover the service(s) available to the user.
See: Prometheus Spec
From that object we can extract all the required fields automatically:
- metadata.namespace
- metadata.name
- spec.portName (might need an extra query to check the number)
- spec.routePrefix (eg: /prom)
Additionally, we can also check the status via these objects to be more efficient generally.
This would also be great for multi-tenancy situations where users may just have a single namespace and their own Prometheus... anyway, just an idea.
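For illustration, here is a sketch of such a Prometheus object with the fields above marked; the names simply match the example use case, not any particular installation:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s                # metadata.name (the prometheus-k8s service in the example above)
  namespace: monitoring    # metadata.namespace
spec:
  portName: web            # spec.portName (the port number may need an extra query)
  routePrefix: /prom       # spec.routePrefix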