k0sproject / k0s


kubectl contexts not recognized from .kube/config after machine reboot #2397

Closed shoce closed 1 year ago

shoce commented 1 year ago


Platform

Linux 5.4.0-131-generic #147-Ubuntu SMP Fri Oct 14 17:07:22 UTC 2022 x86_64 GNU/Linux
NAME="Ubuntu"
VERSION="20.04.5 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.5 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

Version

v1.25.3+k0s.0

Sysinfo

`k0s sysinfo`

```
Machine ID: "1a18b04e88097681cb5a54922f9e3a7bdeceabd41a6dc3a6d09836bc86da4b18" (from machine) (pass)
Total memory: 15.3 GiB (pass)
Disk space available for /var/lib/k0s: 89.1 GiB (pass)
Operating system: Linux (pass)
Linux kernel release: 5.4.0-131-generic (pass)
Max. file descriptors per process: current: 1048576 / max: 1048576 (pass)
Executable in path: modprobe: /usr/sbin/modprobe (pass)
/proc file system: mounted (0x9fa0) (pass)
Control Groups: version 1 (pass)
cgroup controller "cpu": available (pass)
cgroup controller "cpuacct": available (pass)
cgroup controller "cpuset": available (pass)
cgroup controller "memory": available (pass)
cgroup controller "devices": available (pass)
cgroup controller "freezer": available (pass)
cgroup controller "pids": available (pass)
cgroup controller "hugetlb": available (pass)
cgroup controller "blkio": available (pass)
CONFIG_CGROUPS: Control Group support: built-in (pass)
CONFIG_CGROUP_FREEZER: Freezer cgroup subsystem: built-in (pass)
CONFIG_CGROUP_PIDS: PIDs cgroup subsystem: built-in (pass)
CONFIG_CGROUP_DEVICE: Device controller for cgroups: built-in (pass)
CONFIG_CPUSETS: Cpuset support: built-in (pass)
CONFIG_CGROUP_CPUACCT: Simple CPU accounting cgroup subsystem: built-in (pass)
CONFIG_MEMCG: Memory Resource Controller for Control Groups: built-in (pass)
CONFIG_CGROUP_HUGETLB: HugeTLB Resource Controller for Control Groups: built-in (pass)
CONFIG_CGROUP_SCHED: Group CPU scheduler: built-in (pass)
CONFIG_FAIR_GROUP_SCHED: Group scheduling for SCHED_OTHER: built-in (pass)
CONFIG_CFS_BANDWIDTH: CPU bandwidth provisioning for FAIR_GROUP_SCHED: built-in (pass)
CONFIG_BLK_CGROUP: Block IO controller: built-in (pass)
CONFIG_NAMESPACES: Namespaces support: built-in (pass)
CONFIG_UTS_NS: UTS namespace: built-in (pass)
CONFIG_IPC_NS: IPC namespace: built-in (pass)
CONFIG_PID_NS: PID namespace: built-in (pass)
CONFIG_NET_NS: Network namespace: built-in (pass)
CONFIG_NET: Networking support: built-in (pass)
CONFIG_INET: TCP/IP networking: built-in (pass)
CONFIG_IPV6: The IPv6 protocol: built-in (pass)
CONFIG_NETFILTER: Network packet filtering framework (Netfilter): built-in (pass)
CONFIG_NETFILTER_ADVANCED: Advanced netfilter configuration: built-in (pass)
CONFIG_NETFILTER_XTABLES: Netfilter Xtables support: module (pass)
CONFIG_NETFILTER_XT_TARGET_REDIRECT: REDIRECT target support: module (pass)
CONFIG_NETFILTER_XT_MATCH_COMMENT: "comment" match support: module (pass)
CONFIG_NETFILTER_XT_MARK: nfmark target and match support: module (pass)
CONFIG_NETFILTER_XT_SET: set target and match support: module (pass)
CONFIG_NETFILTER_XT_TARGET_MASQUERADE: MASQUERADE target support: module (pass)
CONFIG_NETFILTER_XT_NAT: "SNAT and DNAT" targets support: module (pass)
CONFIG_NETFILTER_XT_MATCH_ADDRTYPE: "addrtype" address type match support: module (pass)
CONFIG_NETFILTER_XT_MATCH_CONNTRACK: "conntrack" connection tracking match support: module (pass)
CONFIG_NETFILTER_XT_MATCH_MULTIPORT: "multiport" Multiple port match support: module (pass)
CONFIG_NETFILTER_XT_MATCH_RECENT: "recent" match support: module (pass)
CONFIG_NETFILTER_XT_MATCH_STATISTIC: "statistic" match support: module (pass)
CONFIG_NETFILTER_NETLINK: module (pass)
CONFIG_NF_CONNTRACK: Netfilter connection tracking support: module (pass)
CONFIG_NF_NAT: module (pass)
CONFIG_IP_SET: IP set support: module (pass)
CONFIG_IP_SET_HASH_IP: hash:ip set support: module (pass)
CONFIG_IP_SET_HASH_NET: hash:net set support: module (pass)
CONFIG_IP_VS: IP virtual server support: module (pass)
CONFIG_IP_VS_NFCT: Netfilter connection tracking: built-in (pass)
CONFIG_NF_CONNTRACK_IPV4: IPv4 connetion tracking support (required for NAT): unknown (warning)
CONFIG_NF_REJECT_IPV4: IPv4 packet rejection: module (pass)
CONFIG_NF_NAT_IPV4: IPv4 NAT: unknown (warning)
CONFIG_IP_NF_IPTABLES: IP tables support: module (pass)
CONFIG_IP_NF_FILTER: Packet filtering: module (pass)
CONFIG_IP_NF_TARGET_REJECT: REJECT target support: module (pass)
CONFIG_IP_NF_NAT: iptables NAT support: module (pass)
CONFIG_IP_NF_MANGLE: Packet mangling: module (pass)
CONFIG_NF_DEFRAG_IPV4: module (pass)
CONFIG_NF_CONNTRACK_IPV6: IPv6 connetion tracking support (required for NAT): unknown (warning)
CONFIG_NF_NAT_IPV6: IPv6 NAT: unknown (warning)
CONFIG_IP6_NF_IPTABLES: IP6 tables support: module (pass)
CONFIG_IP6_NF_FILTER: Packet filtering: module (pass)
CONFIG_IP6_NF_MANGLE: Packet mangling: module (pass)
CONFIG_IP6_NF_NAT: ip6tables NAT support: module (pass)
CONFIG_NF_DEFRAG_IPV6: module (pass)
CONFIG_BRIDGE: 802.1d Ethernet Bridging: module (pass)
CONFIG_LLC: module (pass)
CONFIG_STP: module (pass)
CONFIG_EXT4_FS: The Extended 4 (ext4) filesystem: built-in (pass)
CONFIG_PROC_FS: /proc file system support: built-in (pass)
```

What happened?

I keep the KUBECONFIG env var empty, so the /root/.kube/config file should be used per the docs. I set a few contexts using k0s kubectl config set-context and use them with no problem; I can see them in the /root/.kube/config YAML file. After a machine reboot, the KUBECONFIG env var is still empty and the /root/.kube/config YAML file still has the list of contexts set before the reboot. But k0s kubectl config get-contexts returns only the "Default" context, and trying to switch to a context that was set before the reboot gives the message error: no context exists with the name: "dev-1".

Steps to reproduce

  1. keep KUBECONFIG env var empty
  2. k0s kubectl config set-context dev-1 --cluster=local --user=user --namespace=dev-1
  3. check the context is present in the $HOME/.kube/config yaml file
  4. reboot the machine
  5. check the dev-1 context is present in the $HOME/.kube/config yaml file
  6. k0s kubectl config get-contexts will show only "Default" context
  7. k0s kubectl config use-context dev-1 will show the message error: no context exists with the name: "dev-1" (see the consolidated shell session below)
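
For reference, the same reproduction as a single shell session (a sketch; the cluster, user, and namespace names are the ones used in the steps above):

```
# keep KUBECONFIG empty so ~/.kube/config should be used
unset KUBECONFIG

# add a context and confirm it is written to ~/.kube/config
k0s kubectl config set-context dev-1 --cluster=local --user=user --namespace=dev-1
grep dev-1 $HOME/.kube/config         # context is present in the file

reboot

# after the reboot the file still contains dev-1 ...
grep dev-1 $HOME/.kube/config
# ... but the embedded kubectl no longer sees it
k0s kubectl config get-contexts       # only "Default"
k0s kubectl config use-context dev-1  # error: no context exists with the name: "dev-1"
```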

Expected behavior

Expected to see the contexts set before the reboot in the output of k0s kubectl config get-contexts and to be able to use them with k0s kubectl config use-context.

Actual behavior

k0s kubectl config get-contexts lists only the "Default" context. k0s kubectl config use-context gives the error message: error: no context exists with the name: "dev-1"

Screenshots and logs

No response

Additional context

No response

shoce commented 1 year ago

Now I am thinking it might be related to the existence of the /var/lib/k0s/pki/admin.conf file; still investigating.

jnummelin commented 1 year ago

k0s kubectl ... will always try to use the admin kubeconfig from /var/lib/k0s/pki/admin.conf. And as you found out, it does not have any other context than the default one. It does this by setting the KUBECONFIG env var internally before invoking the kubectl commands. So the issue is that k0s kubectl ... is not really using the config from the home dir at all!
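
A quick way to see the difference in practice (a sketch; assumes a standalone kubectl is also installed for comparison):

```
# the embedded kubectl reads /var/lib/k0s/pki/admin.conf, not ~/.kube/config
k0s kubectl config get-contexts   # shows only "Default"

# a standalone kubectl (or any client honouring $KUBECONFIG / ~/.kube/config)
# still sees the contexts written to the home-dir file
kubectl config get-contexts       # shows dev-1 etc.
```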

Why does the context then disappear on reboot? k0s itself re-generates some of the certs on every restart of k0s, which in this case is of course triggered by the reboot. So the reboot effectively resets admin.conf to a pristine state.

The admin.conf is really meant as a "break-the-glass" access config. So rather than copying it over to users' home dirs, I'd propose creating a more suitable access config for that purpose. So, say you want to create an access config for someone with the "admin" role:

k0s kubeconfig create some_user --groups system:masters
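
For example, the generated config can be written to a file and used directly (a sketch; some_user and the target path are just placeholders):

```
# generate an admin-equivalent kubeconfig and store it for the user
k0s kubeconfig create some_user --groups system:masters > /root/.kube/config
chmod 600 /root/.kube/config

# regular kubectl / helm will now pick it up via the default location
kubectl get nodes
```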

Just keep in mind that k0s can only create access configs based on client certificates. That means you really have no means to revoke that access until the certs expire.
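
To know when that access actually ends, you can check the expiry of the client certificate embedded in the kubeconfig, e.g. (a sketch; assumes the config was saved as some_user.conf):

```
# extract the base64-encoded client cert from the kubeconfig and print its notAfter date
kubectl config view --kubeconfig some_user.conf --raw \
  -o jsonpath='{.users[0].user.client-certificate-data}' \
  | base64 -d | openssl x509 -noout -enddate
```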

github-actions[bot] commented 1 year ago

The issue is marked as stale since no activity has been recorded in 30 days

jnummelin commented 1 year ago

I don't think there's anything actionable for k0s in this, hence closing. Feel free to re-open if I'm mistaken.

shoce commented 1 year ago

@jnummelin

Thank you for such a great answer. It took me some time to understand how it works, and now I have finally implemented the solution you suggested. I have removed the symlink from $HOME/.kube/config to /var/lib/k0s/pki/admin.conf and created a new config with k0s kubeconfig create.

Helm works fine now.

From what you said I understood that k0s kubectl sets KUBECONFIG internally to admin.conf, so I thought it was impossible to use another config, but when I set KUBECONFIG=$HOME/.kube/config, k0s kubectl works fine as well. Surprise.
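
For completeness, the final setup described above looks roughly like this (a sketch; the user name passed to k0s kubeconfig create is just a placeholder):

```
# the old ~/.kube/config was a symlink to the volatile admin.conf; remove it
rm $HOME/.kube/config

# create a dedicated access config and store it in the usual place
k0s kubeconfig create some_user --groups system:masters > $HOME/.kube/config

# both plain kubectl/helm and `k0s kubectl` (with KUBECONFIG set) now use it
export KUBECONFIG=$HOME/.kube/config
k0s kubectl config get-contexts
```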

So now everything seems to be working perfectly for me, thank you.