kubernetes-sigs / kind

Kubernetes IN Docker - local clusters for testing Kubernetes
https://kind.sigs.k8s.io/
Apache License 2.0

ERRO[0000] failed to config kubernetes client error="unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined" #3790

Open fly78lv opened 4 days ago

fly78lv commented 4 days ago

What happened: I am attempting to run sudo -E make run in my Ubuntu VM (running on VMware), where I have created two local Kubernetes clusters using kind. However, I encounter an error when the program configures its Kubernetes client.

The error message indicates that the in-cluster configuration cannot be loaded because the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT environment variables are not defined.

st@st-virtual-machine:~/Desktop/step$ sudo -E make run
test -s /home/st/Desktop/step/bin/controller-gen && /home/st/Desktop/step/bin/controller-gen --version | grep -q v0.11.1 || \
GOBIN=/home/st/Desktop/step/bin go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.11.1
/home/st/Desktop/step/bin/controller-gen rbac:roleName=step-operator-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
/home/st/Desktop/step/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
go run ./main.go
INFO[0000] trigger client auth inside cluster           
ERRO[0000] failed to config kubernetes client            error="unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined"
FATA[0000] failed to create task puppet trigger k8s client  error="unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined"
exit status 1
make: *** [Makefile:120: run] Error 1
main.go:
/*
Copyright 2023.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package main

import (
    "flag"
    "os"
    "step/global/config"

    // Import all Kubernetes client auth plugins (e.g. Azure, GCP, OIDC, etc.)
    // to ensure that exec-entrypoint and run can make use of them.
    _ "k8s.io/client-go/plugin/pkg/client/auth"

    "k8s.io/apimachinery/pkg/runtime"
    utilruntime "k8s.io/apimachinery/pkg/util/runtime"
    clientgoscheme "k8s.io/client-go/kubernetes/scheme"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/healthz"
    "sigs.k8s.io/controller-runtime/pkg/log/zap"

    functionv1alpha1 "step/apis/function/v1alpha1"
    projectv1alpha1 "step/apis/project/v1alpha1"
    functioncontrollers "step/controllers/function"
    projectcontrollers "step/controllers/project"
    //+kubebuilder:scaffold:imports
)

var (
    scheme   = runtime.NewScheme()
    setupLog = ctrl.Log.WithName("setup")
)

func init() {
    utilruntime.Must(clientgoscheme.AddToScheme(scheme))

    utilruntime.Must(functionv1alpha1.AddToScheme(scheme))
    utilruntime.Must(projectv1alpha1.AddToScheme(scheme))
    //+kubebuilder:scaffold:scheme
}

func main() {
    config.Print()
    var metricsAddr string
    var enableLeaderElection bool
    var probeAddr string
    flag.StringVar(&metricsAddr, "metrics-bind-address", ":8080", "The address the metric endpoint binds to.")
    flag.StringVar(&probeAddr, "health-probe-bind-address", ":8081", "The address the probe endpoint binds to.")
    flag.BoolVar(&enableLeaderElection, "leader-elect", false,
        "Enable leader election for controller manager. "+
            "Enabling this will ensure there is only one active controller manager.")
    opts := zap.Options{
        Development: true,
    }
    opts.BindFlags(flag.CommandLine)
    flag.Parse()

    ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))

    mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
        Scheme:                 scheme,
        MetricsBindAddress:     metricsAddr,
        Port:                   9443,
        HealthProbeBindAddress: probeAddr,
        LeaderElection:         enableLeaderElection,
        LeaderElectionID:       "c7441a0f.step.airanthem.cn",
        // LeaderElectionReleaseOnCancel defines if the leader should step down voluntarily
        // when the Manager ends. This requires the binary to immediately end when the
        // Manager is stopped, otherwise this setting is unsafe. Setting this significantly
        // speeds up voluntary leader transitions as the new leader doesn't have to wait
        // LeaseDuration time first.
        //
        // In the default scaffold provided, the program ends immediately after
        // the manager stops, so would be fine to enable this option. However,
        // if you are doing or is intended to do any operation such as perform cleanups
        // after the manager stops then its usage might be unsafe.
        // LeaderElectionReleaseOnCancel: true,
    })
    if err != nil {
        setupLog.Error(err, "unable to start manager")
        os.Exit(1)
    }

    if err = (&functioncontrollers.FunctionSetReconciler{
        Client: mgr.GetClient(),
        Scheme: mgr.GetScheme(),
    }).SetupWithManager(mgr); err != nil {
        setupLog.Error(err, "unable to create controller", "controller", "FunctionSet")
        os.Exit(1)
    }
    if err = (&projectcontrollers.ProjectReconciler{
        Client: mgr.GetClient(),
        Scheme: mgr.GetScheme(),
    }).SetupWithManager(mgr); err != nil {
        setupLog.Error(err, "unable to create controller", "controller", "Project")
        os.Exit(1)
    }
    if err = (&projectcontrollers.TaskReconciler{
        Client: mgr.GetClient(),
        Scheme: mgr.GetScheme(),
    }).SetupWithManager(mgr); err != nil {
        setupLog.Error(err, "unable to create controller", "controller", "Task")
        os.Exit(1)
    }
    //+kubebuilder:scaffold:builder

    if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
        setupLog.Error(err, "unable to set up health check")
        os.Exit(1)
    }
    if err := mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil {
        setupLog.Error(err, "unable to set up ready check")
        os.Exit(1)
    }

    setupLog.Info("starting manager")
    if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
        setupLog.Error(err, "problem running manager")
        os.Exit(1)
    }
}

What you expected to happen: The program starts successfully.

How to reproduce it (as minimally and precisely as possible):

1. Create two local clusters using kind.
2. Run make install to create the custom CRDs. This step works without issues.
3. Run make run.

My local cluster information is as follows; both API servers appear to have started successfully. Additionally, when I curl 127.0.0.1:32907, it responds with "Client sent an HTTP request to an HTTPS server."

st@st-virtual-machine:~/Desktop/step$ sudo -E kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:32907
  name: kind-cluster-1
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:38139
  name: kind-cluster-2
contexts:
- context:
    cluster: kind-cluster-1
    user: kind-cluster-1
  name: kind-cluster-1
- context:
    cluster: kind-cluster-2
    user: kind-cluster-2
  name: kind-cluster-2
current-context: kind-cluster-2
kind: Config
preferences: {}
users:
- name: kind-cluster-1
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED
- name: kind-cluster-2
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED

Anything else we need to know?: I think I've identified the problem: it seems to be related to in-cluster deployment. The client needs to be deployed within the local cluster to be used. How should I solve this issue?

Environment:

BenTheElder commented 4 days ago

"sudo -E make run" is not part of this project; this is not reproducible.

In-cluster pod credential mounting is not different in kind from upstream kubernetes.

BenTheElder commented 4 days ago

/remove-kind bug
/kind support
/triage needs-information