Open simonjcarr opened 1 year ago
You need to be logged in to your cluster and able to run commands like `kubectl get pods` before running this command. How you do that varies by Kubernetes vendor and/or hyperscaler. You might need to switch contexts with `kubectl config use-context` (to one of the names you see in `kubectl config get-contexts`). These commands read the credentials in your `~/.kube/config` file.
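The default lookup described above can be sketched as a small shell function (the function name is mine, and this is a simplification: the real client also supports colon-separated lists in `KUBECONFIG` and merges multiple files):

```shell
# Resolve the kubeconfig path the way kubectl does by default:
# honour $KUBECONFIG when set, else fall back to ~/.kube/config.
resolve_kubeconfig() {
    if [ -n "$KUBECONFIG" ]; then
        printf '%s\n' "$KUBECONFIG"
    else
        printf '%s\n' "$HOME/.kube/config"
    fi
}
```

If `resolve_kubeconfig` points at a file that does not exist or does not describe your cluster, commands like `kubectl get pods` (and `operator-sdk olm install`) will fail.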
Hi, is there any update on this issue?
I have the same problem with k3s v1.28.7:
operator-sdk version: "v1.33.0", kubernetes version: "1.27.0", go version: "go1.21.5"
I'd appreciate any hints ;)
Hi, I actually managed to figure it out: k3s stores its Kubernetes config in `/etc/rancher/k3s/k3s.yaml`, so the easiest fix is to run

```
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
```

and rerun `operator-sdk olm install`.
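That fix can be wrapped in a small pre-flight check before retrying the install (a sketch; `set_kubeconfig` is a name I made up, and the k3s path is the one from the comment above):

```shell
# Point KUBECONFIG at a kubeconfig file, but only if it is actually
# readable; k3s writes its server config to /etc/rancher/k3s/k3s.yaml
# by default, and that file is often readable by root only.
set_kubeconfig() {
    candidate="$1"
    if [ -r "$candidate" ]; then
        export KUBECONFIG="$candidate"
        echo "KUBECONFIG set to $KUBECONFIG"
    else
        # If this fires on k3s, rerun as root or copy the file
        # somewhere your user can read.
        echo "cannot read $candidate" >&2
        return 1
    fi
}
```

Once `set_kubeconfig /etc/rancher/k3s/k3s.yaml` succeeds, rerun `operator-sdk olm install` in the same shell.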
What did you do? Installed the Operator SDK on Ubuntu as per the instructions at https://sdk.operatorframework.io/docs/installation/#install-from-github-release, then ran the command

```
operator-sdk olm install
```

as per the instructions at https://olm.operatorframework.io/docs/getting-started/

What did you expect to see? OLM installed on my single-node k3s cluster.
What did you see instead? Under which circumstances?
```
FATA[0000] Failed to install OLM version "latest": failed to get Kubernetes config: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
```
Environment
operator-sdk version: "v1.27.0", commit: "5cbdad9209332043b7c730856b6302edc8996faf", kubernetes version: "1.25.0", go version: "go1.19.5", GOOS: "linux", GOARCH: "amd64"
k3s version: v1.25.6+k3s1
What do I have to set the `KUBERNETES_MASTER` environment variable to?