JanDeDobbeleer / oh-my-posh

The most customisable and low-latency cross-platform/shell prompt renderer
https://ohmyposh.dev
MIT License

omp emits a Go dial error if k8s is not available #2583

Closed jlabonski closed 2 years ago

jlabonski commented 2 years ago


What happened?

With OMP configured like this:

{
    "type": "kubectl",
    "powerline_symbol": "\uE0B0",
    "style": "diamond",
    "trailing_diamond": "\ue0b4",
    "foreground": "#012B36",
    "background": "#D33682",
    "template": " \uFD31 {{.Context}}::{{if .Namespace}}{{.Namespace}}{{else}}default{{end}}",
    "properties": {
        "parse_kubeconfig": true
    }
}

and connected to a k8s instance installed on my laptop, all is well. If I turn k8s off to save memory, I get this error every time I start zsh:

I0727 16:09:50.024605   38855 versioner.go:58] Get https://127.0.0.1:6443/version?timeout=5s: dial tcp 127.0.0.1:6443: connect: connection refused
I0727 16:09:50.310854   38959 versioner.go:58] Get https://127.0.0.1:6443/version?timeout=5s: dial tcp 127.0.0.1:6443: connect: connection refused

Anticipated behavior:

Simple solution: redirect the dial error from kubectl/kubelet to /dev/null.
Ideal: place an error marker in the template: {{if .error}}❌{{end}}
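The "ideal" variant might look something like this in the segment config (a sketch only: the .Error template property is an assumption here, not a documented field of the kubectl segment):

```json
{
    "type": "kubectl",
    "style": "diamond",
    "foreground": "#012B36",
    "background": "#D33682",
    "template": " \uFD31 {{ if .Error }}\u274C{{ else }}{{ .Context }}::{{ if .Namespace }}{{ .Namespace }}{{ else }}default{{ end }}{{ end }}",
    "properties": {
        "parse_kubeconfig": true
    }
}
```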

Theme

Handrolled

What OS are you seeing the problem on?

macOS

Which shell are you using?

zsh

Log output

No errors noted; I can't paste entire logs here due to sensitive config files being read.
JanDeDobbeleer commented 2 years ago

@jlabonski but wait, versioner.go isn't something that oh-my-posh outputs; it's not part of our codebase. We call kubectl, and any output stays inside oh-my-posh unless it somehow spawns a second process that starts printing things. I'll have a look at what could possibly emit that.

JanDeDobbeleer commented 2 years ago

@jlabonski had a quick glance, we don't print anything. I'll see if routing to /dev/null fixes something, but if that has no effect this is a bug for kubectl.

JanDeDobbeleer commented 2 years ago

@jlabonski we already redirect stderr to a buffer, which we do not print. I only have kubectl installed, not k8s, and I get the same error from the command, BUT it does not print to the shell, only internally in oh-my-posh. There have been some reports about this on the kubectl repo, and there was an issue in the past. Can you update kubectl and validate whether the error persists? I will need the (redacted) logs and a way to really reproduce this if the problem persists.

Additionally, with "parse_kubeconfig": true oh-my-posh should not even call kubectl, so please make sure oh-my-posh is also up-to-date.

2022/07/28 07:37:06 CommandPath duration: 36.313µs, args: kubectl
2022/07/28 07:37:06 HasCommand duration: 38.582µs, args: kubectl
2022/07/28 07:37:06 debug: RunCommand
apiVersion: v1
clusters: null
contexts:
- context:
    cluster: ""
    user: ""
  name: jan
current-context: jan
kind: Config
preferences: {}
users: null
2022/07/28 07:37:06 RunCommand duration: 35.050083ms, args: kubectl config view --output yaml --minify
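For reference, the parse_kubeconfig path only needs a couple of fields from output like the above. A naive sketch of extracting them (real code would use a YAML parser; this line scan and the helper name are illustrative assumptions):

```go
package main

import (
	"fmt"
	"strings"
)

// extractField returns the value of a "key: value" line in minified
// kubeconfig YAML, or "" if the key is absent. Naive on purpose: it
// only handles the flat lines kubectl config view --minify emits.
func extractField(yaml, key string) string {
	for _, line := range strings.Split(yaml, "\n") {
		trimmed := strings.TrimSpace(line)
		if strings.HasPrefix(trimmed, key+":") {
			return strings.TrimSpace(strings.TrimPrefix(trimmed, key+":"))
		}
	}
	return ""
}

func main() {
	// Sample shaped like the debug log above.
	config := `apiVersion: v1
contexts:
- context:
    cluster: ""
    user: ""
  name: jan
current-context: jan
kind: Config`

	ctx := extractField(config, "current-context")
	ns := extractField(config, "namespace")
	if ns == "" {
		ns = "default" // mirrors the {{else}}default{{end}} template branch
	}
	fmt.Printf("%s::%s\n", ctx, ns) // prints "jan::default"
}
```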
jlabonski commented 2 years ago

Thanks for the speedy replies! I spent some more time digging around the guts of zsh and found the culprit. Rancher Desktop installs kubectl, but it's actually kuberlr acting as a wrapper. Inside the Rancher Desktop project this is a known bug:

https://github.com/rancher-sandbox/rancher-desktop/issues/1260

With the actual culprit here:

https://github.com/flavio/kuberlr/blob/77846a03df781baef610ed7f24783e17f34eaef4/internal/finder/versioner.go#L58

I had apparently been running this merrily without issue for a long while, and my install of omp coincided with turning off k8s for a memory-intensive job I was working on. I feel like an idiot; I should have double-checked this more deeply before raising the issue. At least I know what the root cause was.
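One quick way to check for this wrapper situation is to resolve what kubectl actually points at. A hypothetical helper, assuming GNU readlink -f is available (not something omp ships):

```shell
# is_kuberlr PATH: prints "shim" if the resolved binary looks like the
# kuberlr wrapper, "real" otherwise. Rancher Desktop installs kubectl
# as a symlink to kuberlr.
is_kuberlr() {
  target=$(readlink -f "$1" 2>/dev/null || printf '%s' "$1")
  case "$target" in
    *kuberlr*) echo "shim" ;;
    *)         echo "real" ;;
  esac
}

# Usage: is_kuberlr "$(command -v kubectl)"
```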

JanDeDobbeleer commented 2 years ago

@jlabonski no need to apologize, good we sorted it out!

github-actions[bot] commented 9 months ago

This issue has been automatically locked since there has not been any recent activity (i.e. last half year) after it was closed. It helps our maintainers focus on the active issues. If you have found a problem that seems similar, please open a discussion first, complete the body with all the details necessary to reproduce, and mention this issue as reference.