Closed: byteshiva closed this issue 7 months ago
Hi @byteshiva thanks for opening this issue! Can you please provide some logs of the node containers? Specifically of the server container and the first agent container?
```
k3d cluster create sample --trace --verbose
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] Runtime Info:
&{Name:docker Endpoint:/var/run/docker.sock Version:20.10.25 OSType:linux OS:NixOS 23.05 (Stoat) Arch:x86_64 CgroupVersion:2 CgroupDriver:systemd Filesystem:extfs InfoName:nixos}
DEBU[0000] Additional CLI Configuration:
cli:
  api-port: ""
  env: []
  k3s-node-labels: []
  k3sargs: []
  ports: []
  registries:
    create: ""
  runtime-labels: []
  runtime-ulimits: []
  volumes: []
hostaliases: []
DEBU[0000] Configuration:
agents: 0
image: docker.io/rancher/k3s:v1.21.7-k3s1
network: ""
options:
  k3d:
    disableimagevolume: false
    disableloadbalancer: false
    disablerollback: false
    loadbalancer:
      configoverrides: []
    timeout: 0s
    wait: true
  kubeconfig:
    switchcurrentcontext: true
    updatedefaultkubeconfig: true
  runtime:
    agentsmemory: ""
    gpurequest: ""
    hostpidmode: false
    serversmemory: ""
registries:
  config: ""
  use: []
servers: 1
subnet: ""
token: ""
TRAC[0000] Trying to read config apiVersion='k3d.io/v1alpha5', kind='simple'
DEBU[0000] ========== Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha5} ObjectMeta:{Name:} Servers:1 Agents:0 ExposeAPI:{Host: HostIP: HostPort:} Image:docker.io/rancher/k3s:v1.21.7-k3s1 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: HostPidMode:false Labels:[] Ulimits:[]}} Env:[] Registries:{Use:[] Create:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3cd62d8068f8 rancher/k3s:v1.21.7-k3s1 "/bin/k3d-entrypoint…" About a minute ago Up About a minute k3d-sample-server-0
```

```
docker logs 3cd62d8068f8
time="2024-04-02T11:46:53.250125259Z" level=info msg="Starting k3s v1.21.7+k3s1 (ac705709)"
time="2024-04-02T11:46:53.253900045Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2024-04-02T11:46:53.253924145Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2024-04-02T11:46:53.255558150Z" level=info msg="Database tables and indexes are up to date"
time="2024-04-02T11:46:53.256688498Z" level=info msg="Kine listening on unix://kine.sock"
The connection to the server localhost:8080 was refused - did you specify the right host or port?
time="2024-04-02T11:46:53.264174013Z" level=info msg="certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1712058413: notBefore=2024-04-02 11:46:53 +0000 UTC notAfter=2025-04-02 11:46:53 +0000 UTC"
time="2024-04-02T11:46:53.264479924Z" level=info msg="certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1712058413: notBefore=2024-04-02 11:46:53 +0000 UTC notAfter=2025-04-02 11:46:53 +0000 UTC"
time="2024-04-02T11:46:53.264770166Z" level=info msg="certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1712058413: notBefore=2024-04-02 11:46:53 +0000 UTC notAfter=2025-04-02 11:46:53 +0000 UTC"
time="2024-04-02T11:46:53.265063088Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-client-ca@1712058413: notBefore=2024-04-02 11:46:53 +0000 UTC notAfter=2025-04-02 11:46:53 +0000 UTC"
time="2024-04-02T11:46:53.265352520Z" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1712058413: notBefore=2024-04-02 11:46:53 +0000 UTC notAfter=2025-04-02 11:46:53 +0000 UTC"
time="2024-04-02T11:46:53.265613003Z" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1712058413: notBefore=2024-04-02 11:46:53 +0000 UTC notAfter=2025-04-02 11:46:53 +0000 UTC"
time="2024-04-02T11:46:53.265907285Z" level=info msg="certificate CN=k3s-cloud-controller-manager signed by CN=k3s-client-ca@1712058413: notBefore=2024-04-02 11:46:53 +0000 UTC notAfter=2025-04-02 11:46:53 +0000 UTC"
time="2024-04-02T11:46:53.266424011Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1712058413: notBefore=2024-04-02 11:46:53 +0000 UTC notAfter=2025-04-02 11:46:53 +0000 UTC"
time="2024-04-02T11:46:53.266917067Z" level=info msg="certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1712058413: notBefore=2024-04-02 11:46:53 +0000 UTC notAfter=2025-04-02 11:46:53 +0000 UTC"
time="2024-04-02T11:46:53.267510871Z" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1712058413: notBefore=2024-04-02 11:46:53 +0000 UTC notAfter=2025-04-02 11:46:53 +0000 UTC"
time="2024-04-02T11:46:53.267778203Z" level=info msg="certificate CN=etcd-client signed by CN=etcd-server-ca@1712058413: notBefore=2024-04-02 11:46:53 +0000 UTC notAfter=2025-04-02 11:46:53 +0000 UTC"
time="2024-04-02T11:46:53.268244810Z" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1712058413: notBefore=2024-04-02 11:46:53 +0000 UTC notAfter=2025-04-02 11:46:53 +0000 UTC"
time="2024-04-02T11:46:53.323266356Z" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1712058413: notBefore=2024-04-02 11:46:53 +0000 UTC notAfter=2025-04-02 11:46:53 +0000 UTC"
time="2024-04-02T11:46:53.323463981Z" level=info msg="Active TLS secret  (ver=) (count 11): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.23.0.2:172.23.0.2 listener.cattle.io/cn-k3d-sample-server-0:k3d-sample-server-0 listener.cattle.io/cn-k3d-sample-serverlb:k3d-sample-serverlb listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=2EB54B637735D75364D47C54A1661D45ECC9DD8D]"
time="2024-04-02T11:46:53.326516247Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
I0402 11:46:53.327562      23 server.go:656] external host was not specified, using 172.23.0.2
I0402 11:46:53.327676      23 server.go:195] Version: v1.21.7+k3s1
time="2024-04-02T11:46:53.328635248Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"
time="2024-04-02T11:46:53.328699297Z" level=info msg="Waiting for API server to become available"
time="2024-04-02T11:46:53.328894831Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
time="2024-04-02T11:46:53.329155104Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m0s --port=0 --profiling=false"
time="2024-04-02T11:46:53.329498625Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
time="2024-04-02T11:46:53.329532594Z" level=info msg="To join node to cluster: k3s agent -s https://172.23.0.2:6443 -t ${NODE_TOKEN}"
time="2024-04-02T11:46:53.330009301Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"
time="2024-04-02T11:46:53.330103808Z" level=info msg="Run: k3s kubectl"
time="2024-04-02T11:46:53.330391070Z" level=fatal msg="failed to find cpuset cgroup (v2)"
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?
(this line repeats ~25 times)
```
Reference:
1. [nixos.wiki/wiki/K3s](https://nixos.wiki/wiki/K3s)
Since you reference the nixos Wiki - did you try https://nixos.wiki/wiki/K3s#Raspberry_Pi_not_working which corresponds to the fatal log of the server container?
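A quick sanity check for anyone hitting the same fatal (a sketch of my own, not from the wiki): on a cgroup v2 host, k3s needs the `cpuset` controller to be enabled, and you can read the enabled controllers from `/sys/fs/cgroup/cgroup.controllers`. The helper name below is hypothetical.

```shell
# Sketch (assumes the standard cgroup2 mount at /sys/fs/cgroup): report whether
# the cpuset controller is enabled, which the fatal log says k3s cannot find.
check_cpuset() {
  controllers=$(cat "${1:-/sys/fs/cgroup/cgroup.controllers}" 2>/dev/null || true)
  case " $controllers " in
    *" cpuset "*) echo "available" ;;
    *)            echo "missing" ;;
  esac
}
check_cpuset   # prints "available" or "missing" for this host
```

If this prints "missing", the fatal above is expected regardless of any k3d flags.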
Despite configuring the system with the appropriate kernel parameters and setting up the Nix shell as per the provided script, I'm consistently encountering connection refusal errors when attempting to connect to the server.
Steps to Reproduce:
1. Created a new Nix shell using the provided script `run.sh`:

   ```bash
   cat run.sh
   export NIXPKGS_ALLOW_UNFREE=1
   nix-shell -E '
   let
     nixpkgs = import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/nixos-unstable.tar.gz") {};
   in nixpkgs.mkShell {
     buildInputs = with nixpkgs; [ k3d k3s docker containerd runc ];
     shellHook = "export KUBECONFIG=kubeconfig";
   }'
   ```

2. Applied the necessary kernel parameters in the NixOS configuration:

   ```nix
   boot.kernelParams = [ "cgroup_enable=cpuset" "cgroup_memory=1" "cgroup_enable=memory" ];
   ```

3. Despite the above configuration, attempting to connect to the K3s server results in the following error:

   ```
   time="2024-04-02T12:11:02.735710245Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
   time="2024-04-02T12:11:02.735733706Z" level=info msg="To join node to cluster: k3s agent -s https://172.23.0.2:6443 -t ${NODE_TOKEN}"
   time="2024-04-02T12:11:02.736267310Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"
   time="2024-04-02T12:11:02.736346042Z" level=info msg="Run: k3s kubectl"
   time="2024-04-02T12:11:02.736397044Z" level=fatal msg="failed to find cpuset cgroup (v2)"
   The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?
   ```

   ```
   cat /etc/nixos/configuration.nix
   networking.firewall.allowedTCPPorts = [6443];
   ```

   ```
   cat /etc/nixos/hardware-configuration.nix
   boot.kernelParams = [ "cgroup_enable=cpuset" "cgroup_memory=1" "cgroup_enable=memory" ];
   ```
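One detail worth noting: `cgroup_enable=cpuset` and `cgroup_memory=1` are legacy cgroup v1 kernel parameters, while the Runtime Info earlier in this thread reports `CgroupVersion:2`, so on this host they would have no effect. A small sketch (the helper name is my own) to confirm which hierarchy is actually mounted:

```shell
# Sketch: classify the cgroup hierarchy by the filesystem type mounted at
# /sys/fs/cgroup ("cgroup2fs" on a unified/v2 host, "tmpfs" on a legacy v1 host).
cgroup_version() {
  case "$(stat -fc %T "${1:-/sys/fs/cgroup}" 2>/dev/null)" in
    cgroup2fs) echo "v2" ;;
    tmpfs)     echo "v1" ;;
    *)         echo "unknown" ;;
  esac
}
cgroup_version   # prints "v2" on a host matching the Runtime Info above
```

If this prints "v2", the kernel parameters from step 2 are not the knob that changes anything here.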
Description: Creating a Kubernetes cluster with k3d on NixOS fails during the server node startup, leaving the cluster creation incomplete.
Steps to Reproduce:
Step 1: set up the Nix shell with **sample.sh**:

```
export NIXPKGS_ALLOW_UNFREE=1
nix-shell -E '
let
  nixpkgs = import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/nixos-unstable.tar.gz") {};
in nixpkgs.mkShell {
  buildInputs = with nixpkgs; [ k3d kubectl kubernetes-helm docker ];
  shellHook = "export KUBECONFIG=kubeconfig";
}'
```
Step 4: run `k3d cluster create` on a NixOS environment.
**Error**

**k3d cluster create --api-port 6550 -p "8081:80@loadbalancer" --agents 2**
```
INFO[0000] portmapping '8081:80' targets the loadbalancer: defaulting to [servers:*:proxy agents:*:proxy]
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-k3s-default'
INFO[0000] Created image volume k3d-k3s-default-images
INFO[0000] Starting new tools node...
INFO[0000] Starting Node 'k3d-k3s-default-tools'
INFO[0001] Creating node 'k3d-k3s-default-server-0'
INFO[0004] Pulling image 'docker.io/rancher/k3s:v1.21.7-k3s1'
INFO[0019] Creating node 'k3d-k3s-default-agent-0'
INFO[0019] Creating node 'k3d-k3s-default-agent-1'
INFO[0019] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0019] Using the k3d-tools node to gather environment information
INFO[0019] HostIP: using network gateway 172.26.0.1 address
INFO[0019] Starting cluster 'k3s-default'
INFO[0019] Starting servers...
INFO[0019] Starting Node 'k3d-k3s-default-server-0'
```

**Extra:**

**k3d cluster list --verbose --trace**
```
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] Runtime Info:
&{Name:docker Endpoint:/var/run/docker.sock Version:20.10.25 OSType:linux OS:NixOS 23.05 (Stoat) Arch:x86_64 CgroupVersion:2 CgroupDriver:systemd Filesystem:extfs InfoName:nixos}
TRAC[0000] Listing Clusters...
TRAC[0000] TranslateContainerDetailsToNode: Checking for default object label app=k3d on container /k3d-k3s-default-serverlb
TRAC[0000] TranslateContainerDetailsToNode: Checking for default object label app=k3d on container /k3d-k3s-default-agent-1
TRAC[0000] TranslateContainerDetailsToNode: Checking for default object label app=k3d on container /k3d-k3s-default-agent-0
TRAC[0000] TranslateContainerDetailsToNode: Checking for default object label app=k3d on container /k3d-k3s-default-server-0
DEBU[0000] Found 4 nodes
TRAC[0000] Found node k3d-k3s-default-serverlb of role loadbalancer
TRAC[0000] Found node k3d-k3s-default-agent-1 of role agent
TRAC[0000] Found node k3d-k3s-default-agent-0 of role agent
TRAC[0000] Found node k3d-k3s-default-server-0 of role server
TRAC[0000] Filteres 4 nodes by roles (in: [server agent loadbalancer] | ex: [registry]), got 4 left
TRAC[0000] Found 4 cluster-internal nodes
TRAC[0000] Found cluster-internal node k3d-k3s-default-serverlb of role loadbalancer belonging to cluster k3s-default
TRAC[0000] Found cluster-internal node k3d-k3s-default-agent-1 of role agent belonging to cluster k3s-default
TRAC[0000] Found cluster-internal node k3d-k3s-default-agent-0 of role agent belonging to cluster k3s-default
TRAC[0000] Found cluster-internal node k3d-k3s-default-server-0 of role server belonging to cluster k3s-default
DEBU[0000] Found 1 clusters
NAME          SERVERS   AGENTS   LOADBALANCER
k3s-default   1/1       0/2      true
```
Environment: NixOS 23.05 (Stoat), x86_64, Docker 20.10.25 (cgroup v2, systemd cgroup driver), image docker.io/rancher/k3s:v1.21.7-k3s1
Error Message: `level=fatal msg="failed to find cpuset cgroup (v2)"`
Workaround: No known workaround exists currently. Users are unable to create Kubernetes clusters using k3d on NixOS until this issue is resolved.
Logs: creating a sample cluster on NixOS
```
k3d cluster create sample
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-sample'
INFO[0000] Created image volume k3d-sample-images
INFO[0000] Starting new tools node...
INFO[0000] Starting Node 'k3d-sample-tools'
INFO[0001] Creating node 'k3d-sample-server-0'
INFO[0001] Creating LoadBalancer 'k3d-sample-serverlb'
INFO[0001] Using the k3d-tools node to gather environment information
INFO[0001] HostIP: using network gateway 172.28.0.1 address
INFO[0001] Starting cluster 'sample'
INFO[0001] Starting servers...
INFO[0001] Starting Node 'k3d-sample-server-0'
^C
```

```
[nix-shell:~/app]$ k3d cluster delete sample --verbose --trace
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] Runtime Info:
&{Name:docker Endpoint:/var/run/docker.sock Version:20.10.25 OSType:linux OS:NixOS 23.05 (Stoat) Arch:x86_64 CgroupVersion:2 CgroupDriver:systemd Filesystem:extfs InfoName:nixos}
DEBU[0000] Configuration: {}
TRAC[0000] TranslateContainerDetailsToNode: Checking for default object label app=k3d on container /k3d-sample-serverlb
TRAC[0000] TranslateContainerDetailsToNode: Checking for default object label app=k3d on container /k3d-sample-server-0
TRAC[0000] Reading path /etc/confd/values.yaml from node k3d-sample-serverlb...
ERRO[0000] error getting loadbalancer config from k3d-sample-serverlb: runtime failed to read loadbalancer config '/etc/confd/values.yaml' from node 'k3d-sample-serverlb': Error response from daemon: Could not find the file /etc/confd/values.yaml in container 54f40f1d37d4f8e818979ab075f9ffe0007abc955019a8e157a73a2ba3aeba85: file not found
INFO[0000] Deleting cluster 'sample'
TRAC[0000] TranslateContainerDetailsToNode: Checking for default object label app=k3d on container /k3d-sample-serverlb
TRAC[0000] TranslateContainerDetailsToNode: Checking for default object label app=k3d on container /k3d-sample-server-0
DEBU[0000] Cluster Details: &{Name:sample Network:{Name:k3d-sample ID: External:false IPAM:{IPPrefix:invalid Prefix IPsUsed:[] Managed:false} Members:[]} Token:DXLYiwAZRCPUYDfwtOdy Nodes:[0xc00051fba0 0xc000017040] InitNode: ExternalDatastore: KubeAPI: ServerLoadBalancer:0xc000241920 ImageVolume:k3d-sample-images Volumes:[k3d-sample-images]}
DEBU[0000] Deleting node k3d-sample-serverlb ...
TRAC[0000] [Docker] Deleted Container k3d-sample-serverlb
DEBU[0000] Deleting node k3d-sample-server-0 ...
TRAC[0000] [Docker] Deleted Container k3d-sample-server-0
INFO[0000] Deleting cluster network 'k3d-sample'
INFO[0000] Deleting 1 attached volumes...
DEBU[0000] Deleting volume k3d-sample-images...
INFO[0000] Removing cluster details from default kubeconfig...
DEBU[0000] Using default kubeconfig 'kubeconfig'
DEBU[0000] Wrote kubeconfig to 'kubeconfig'
INFO[0000] Removing standalone kubeconfig file (if there is one)...
INFO[0000] Successfully deleted cluster sample!
```
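For future readers triaging similar reports, the relevant line is easy to lose in the wall of server logs above. A small sketch (my own helper, hedged: the container name assumes the default naming for the "sample" cluster created above) to pull only the fatal entries:

```shell
# Hypothetical helper: filter a k3s log stream down to fatal entries only.
fatal_lines() {
  grep -i 'level=fatal' || true   # exit 0 even when nothing matches
}

# Only attempt the docker call where a docker CLI is present.
if command -v docker >/dev/null 2>&1; then
  docker logs k3d-sample-server-0 2>&1 | fatal_lines
fi
```

On the logs in this thread, that would surface the single line `level=fatal msg="failed to find cpuset cgroup (v2)"`.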