Closed: DaemonDude23 closed this issue 2 years ago.
As another data point: we've got the Starboard Operator working successfully on EKS 1.21, using default settings, but with slightly later versions of Starboard and the Helm chart.
This is a known working combination:
- Kubernetes version: 1.21, platform version eks.4
- AMI release version: 1.21.5-20220309
- Helm chart version: 0.10.4
- Starboard version: 0.15.4
@DaemonDude23 have you tried it with more recent versions?
I've tested it across 5-10 releases since filing this issue, always with the same result, including today after seeing your message. I updated to the latest chart and Starboard version, but the same error persists. I diffed the latest values side-by-side against mine; the only differences I have are in `resources` and `podAnnotations`, so nothing that should cause this kind of error.
I had the same error on a bare-metal pure-k8s homelab as well as on a k3s cluster. Surely others would have run into this problem by now, but it seems not, and I'm the only outlier.
- Kubernetes version: 1.21, platform version eks.4
- AMI: ami-02f29c095430282d4 (I recently switched to Bottlerocket; same error as when I was using the standard EKS AMIs)
- Helm chart version: 0.10.4
- Starboard version: 0.15.4
Digging a bit further based on your error message:

```
getting kube client config: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
```
This comes from `pkg/operator/operator.go`:

```go
kubeConfig, err := ctrl.GetConfig()
if err != nil {
	return fmt.Errorf("getting kube client config: %w", err)
}
```
So it's using controller-runtime to find a kubeconfig it can use to connect to the K8s API. Usually, when running inside the cluster, you don't have to set anything; it will use the in-cluster config and the service account token.
You can see the order of precedence documented in controller-runtime's `config.go`:

```go
// GetConfig creates a *rest.Config for talking to a Kubernetes API server.
// If --kubeconfig is set, will use the kubeconfig file at that location. Otherwise will assume running
// in cluster and use the cluster provided kubeconfig.
//
// It also applies saner defaults for QPS and burst based on the Kubernetes
// controller manager defaults (20 QPS, 30 burst)
//
// Config precedence
//
// * --kubeconfig flag pointing at a file
//
// * KUBECONFIG environment variable pointing at a file
//
// * In-cluster config if running in cluster
//
// * $HOME/.kube/config if exists.
```
So a couple of possible thoughts: the in-cluster config is built from the service account token that kubelet mounts into the pod, so it's worth checking that the token file actually exists at the expected path.

A few more references down this line of thinking:

`/var/run/secrets/kubernetes.io/serviceaccount/token`
Thanks for all the info.
I found my problem (100% user error). In a manifest, I was using kustomize to patch `automountServiceAccountToken: false` onto the deployment. I thought that would only disable auto-mounting of the *default* service account's token (which wouldn't be applicable, as we're not using the default one anyway), not the token for the service account explicitly assigned to the pod(s).

I'll do some testing with it tomorrow and will likely close this issue then.

Thanks for jogging my brain to look at service account token mounting! I completely forgot I was using a patch.
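For anyone who hits the same thing, here's a hypothetical sketch of the kind of patch that causes this. The field names are real Kubernetes API fields, but the deployment and service account names are illustrative:

```yaml
# Illustrative kustomize patch target; names are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: starboard-operator
spec:
  template:
    spec:
      serviceAccountName: starboard-operator
      # This disables token mounting for the service account *assigned*
      # to the pod (not just "default"), so the in-cluster client config
      # can no longer be built and GetConfig falls through to an error.
      automountServiceAccountToken: false
```

Removing the patch (or setting the field to `true`) restores the token mount at `/var/run/secrets/kubernetes.io/serviceaccount/token`.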
What steps did you take and what happened:

Deployed Helm chart version 0.8.1 with default values, with Starboard operator versions 0.13.0 and 0.13.1, among a few previous versions. On any AWS EKS cluster I try to run the operator on that is version 1.21, the operator CrashLoops and throws this error:

```
getting kube client config: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
```

What did you expect to happen:

The operator to start, become healthy, and begin scans.

Anything else you would like to add:

The operator runs on 1.18, but I have never been able to get it to run on 1.21. The version of Kubernetes is the only variable I could find that determines whether or not the operator will start. I could be misremembering, but I think this same error is thrown on my 1.21 bare-metal homelab as well.

Environment:

- Starboard Helm chart version: 0.8.1 - latest
- Starboard version (use `starboard version`): 0.13.0 and 0.13.1
- Kubernetes version (use `kubectl version`): 1.21
- OS: Ubuntu 20.04