mossroy closed this issue 8 months ago
For product-related issues, you have to open a ticket in the Lens repo
Maybe I was not clear in my description, but this issue does not occur in the latest Lens app distributed by Mirantis, while it does occur in the latest OpenLens distributed here. So I don't see the point of reporting it in the Lens repo, where it would logically be closed as "could not reproduce".
Maybe it comes from the fact that the latest OpenLens package was released here almost 6 months ago: this issue might have existed in Lens too, and been fixed there since then?
Try, for example, this command:
time kubectl version
Run it 10 times in a row in 2 different contexts: inside your client OS terminal (with the appropriate KUBECONFIG env var set), and inside the OpenLens terminal (connected to the corresponding cluster).

I tested with the latest Lens (2023.11.131420-latest), and the response time is stable with it, around 70 milliseconds.
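The comparison can be scripted with a small helper like the one below. This is my own sketch, not part of kubectl or Lens, and it assumes GNU date for millisecond timestamps (Linux; macOS needs coreutils):

```shell
# Run a command N times and print each wall-clock duration in milliseconds.
run_timed() {
  cmd="$1"; runs="${2:-10}"; i=1
  while [ "$i" -le "$runs" ]; do
    start=$(date +%s%3N)            # GNU date: epoch time with milliseconds
    sh -c "$cmd" > /dev/null 2>&1   # discard output, we only care about timing
    end=$(date +%s%3N)
    printf 'run %d: %d ms\n' "$i" "$((end - start))"
    i=$((i + 1))
  done
}

# Run this once in the OS terminal and once in the OpenLens terminal,
# then compare the distributions:
run_timed "kubectl version" 10
```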
To investigate further, I tried the corresponding kube API HTTP call, which you can find by running:
kubectl version -v=6
In the OS terminal, it points directly to the kube control plane IP, like https://x.x.x.x:6443/version?timeout=32s. In OpenLens & Lens, it points to something like https://127.0.0.1:35047/4bc5aa7ab8cdb1ee83ef85010323c356/version?timeout=32s
(because it injects a specific KUBECONFIG that seems to proxy all HTTP requests through a local reverse-proxy).

So I took the bearer token from the secret default-token-xxxxx of the kube cluster, exported it in a $BEARER variable of my client OS terminal, and executed the two calls above (outside of OpenLens):

- direct to the control plane URL -> stable time around ~70ms
- through the OpenLens proxy URL -> time varying between 70ms and 5s
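For reference, the two calls can be reproduced with something like the following. The use of curl and its flags is my assumption; the URLs are the placeholder examples from above and must be adapted to your cluster:

```shell
# Placeholder URLs taken from the examples above; adjust to your own cluster.
API_URL="https://x.x.x.x:6443/version?timeout=32s"
PROXY_URL="https://127.0.0.1:35047/4bc5aa7ab8cdb1ee83ef85010323c356/version?timeout=32s"

# The commands below need a live cluster and a valid $BEARER token,
# so they are left commented out here.

# Direct call to the control plane (stable, ~70ms in my tests):
# time curl -sk -H "Authorization: Bearer $BEARER" "$API_URL"

# Same request through the OpenLens local reverse-proxy (70ms to 5s in my tests):
# time curl -sk -H "Authorization: Bearer $BEARER" "$PROXY_URL"
```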
So it seems to me that the delays are introduced by the local reverse-proxy hosted by OpenLens.
I tested with different Kubernetes servers: a k3s 1.27.7 running locally, a k3s 1.27.7 running remotely, a remote k8s 1.21.7, and a remote GKE 1.27.5.