rtsp closed this issue 2 years ago.
@rtsp hello!
What do you want to achieve by using tunneled networking mode? If I recall correctly the argument is not described in the documentation.
The thing is that with that mode enabled, the konnectivity-agent runs in the host network and starts to proxy all calls to kube-api through localhost:6443.
But in the controller+worker case it is, first of all, useless (the controller is already running on the same machine) and, on top of that, it breaks if the default kube-api port is in use.
There is a PR open to improve the user experience and add a validation message for the case when tunneled networking is enabled and the default port is in use: #2153
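For illustration, here is a minimal k0s.yaml sketch of the conflicting configuration described above. `spec.api.tunneledNetworkingMode` is the field named in this thread; treat the exact defaults and semantics as assumptions:

```yaml
# Hypothetical k0s.yaml illustrating the conflict: with tunneled networking
# enabled, konnectivity-agent runs in the host network and tries to proxy
# kube-api via localhost:6443, which collides with a kube-apiserver already
# listening on the default port on a controller+worker node.
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
spec:
  api:
    port: 6443                     # default kube-api port -> collision
    tunneledNetworkingMode: true   # enables the konnectivity tunnel
```

Under the validation proposed in #2153, a configuration like this would be rejected (or at least warned about) instead of failing at runtime.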
@soider Thanks for investigating
I understand that this mode shouldn't be enabled on controller+worker nodes, since the API port is already listening on the controller.
But in my case, my setup is like this
I expect that when I enable tunneled networking, it should be enabled only on those 3x worker nodes and not on the controller+worker nodes.
I just realized I missed the "expected behavior" part from my issue report, so I added:
tunneledNetworkingMode, when enabled, should exclude controller+worker nodes automatically and include only worker-only nodes
@rtsp I am not sure it would work like that; it modifies global cluster state, so it is either enabled or disabled.
Meanwhile, what are you trying to achieve?
> @rtsp I am not sure it would work like that; it modifies global cluster state, so it is either enabled or disabled.
Yeah, you're right actually. It couldn't be solved directly like that.
> Meanwhile, what are you trying to achieve?
This may sound ridiculous, but I'm trying to solve a network policy problem on this network by forcing all API requests to originate from a single process, so the policy appliers can tap into that pod and scan™ for malicious API requests.
I'm not even sure this is going to work, but I ran into the CrashLoop problem described in this report, so I just disabled tunneled mode and will try to solve the policies problem later.
Hm, but how would tunneled networking help you?
The purpose of that setting is to make the kubelet use a tunnel to kube-apiserver instead of relying on a direct connection.
> Hm, but how would tunneled networking help you?
I think (from the code) that this setting should make kube-api available from the worker nodes at *:6443, so I can point other services (outside the cluster) that need to call kube-api at the worker nodes instead of connecting directly to the controller's virtual IP.
> The purpose of that setting is to make the kubelet use a tunnel to kube-apiserver instead of relying on a direct connection.
Seems like it's not documented yet. I'm also curious about the purpose of using a proxied tunnel instead of a direct connection. I guess it may be an attempt to get rid of the load balancer in front of HA controllers?
Just realized that in controller+worker mode (in most cases), we could disable the konnectivity stack, because all controller nodes also run the CNI and can talk to all worker nodes directly.
Platform
Version
v1.24.4+k0s.0
Sysinfo
`k0s sysinfo`
```text
Machine ID: "37da42615beebef759aae954a019c510a0cca7a53254792205c64bd94168634c" (from machine) (pass)
Total memory: 3.8 GiB (pass)
Disk space available for /var/lib/k0s: 50.4 GiB (pass)
Operating system: Linux (pass)
Linux kernel release: 5.10.0-17-amd64 (pass)
Max. file descriptors per process: current: 1024 / max: 1048576 (warning: < 65536)
Executable in path: modprobe: /usr/sbin/modprobe (pass)
/proc file system: mounted (0x9fa0) (pass)
Control Groups: version 2 (pass)
cgroup controller "cpu": available (pass)
cgroup controller "cpuacct": available (via cpu in version 2) (pass)
cgroup controller "cpuset": available (pass)
cgroup controller "memory": available (pass)
cgroup controller "devices": available (assumed) (pass)
cgroup controller "freezer": available (assumed) (pass)
cgroup controller "pids": available (pass)
cgroup controller "hugetlb": available (pass)
cgroup controller "blkio": available (via io in version 2) (pass)
CONFIG_CGROUPS: Control Group support: built-in (pass)
CONFIG_CGROUP_FREEZER: Freezer cgroup subsystem: built-in (pass)
CONFIG_CGROUP_PIDS: PIDs cgroup subsystem: built-in (pass)
CONFIG_CGROUP_DEVICE: Device controller for cgroups: built-in (pass)
CONFIG_CPUSETS: Cpuset support: built-in (pass)
CONFIG_CGROUP_CPUACCT: Simple CPU accounting cgroup subsystem: built-in (pass)
CONFIG_MEMCG: Memory Resource Controller for Control Groups: built-in (pass)
CONFIG_CGROUP_HUGETLB: HugeTLB Resource Controller for Control Groups: built-in (pass)
CONFIG_CGROUP_SCHED: Group CPU scheduler: built-in (pass)
CONFIG_FAIR_GROUP_SCHED: Group scheduling for SCHED_OTHER: built-in (pass)
CONFIG_CFS_BANDWIDTH: CPU bandwidth provisioning for FAIR_GROUP_SCHED: built-in (pass)
CONFIG_BLK_CGROUP: Block IO controller: built-in (pass)
CONFIG_NAMESPACES: Namespaces support: built-in (pass)
CONFIG_UTS_NS: UTS namespace: built-in (pass)
CONFIG_IPC_NS: IPC namespace: built-in (pass)
CONFIG_PID_NS: PID namespace: built-in (pass)
CONFIG_NET_NS: Network namespace: built-in (pass)
CONFIG_NET: Networking support: built-in (pass)
CONFIG_INET: TCP/IP networking: built-in (pass)
CONFIG_IPV6: The IPv6 protocol: built-in (pass)
CONFIG_NETFILTER: Network packet filtering framework (Netfilter): built-in (pass)
CONFIG_NETFILTER_ADVANCED: Advanced netfilter configuration: built-in (pass)
CONFIG_NETFILTER_XTABLES: Netfilter Xtables support: module (pass)
CONFIG_NETFILTER_XT_TARGET_REDIRECT: REDIRECT target support: module (pass)
CONFIG_NETFILTER_XT_MATCH_COMMENT: "comment" match support: module (pass)
CONFIG_NETFILTER_XT_MARK: nfmark target and match support: module (pass)
CONFIG_NETFILTER_XT_SET: set target and match support: module (pass)
CONFIG_NETFILTER_XT_TARGET_MASQUERADE: MASQUERADE target support: module (pass)
CONFIG_NETFILTER_XT_NAT: "SNAT and DNAT" targets support: module (pass)
CONFIG_NETFILTER_XT_MATCH_ADDRTYPE: "addrtype" address type match support: module (pass)
CONFIG_NETFILTER_XT_MATCH_CONNTRACK: "conntrack" connection tracking match support: module (pass)
CONFIG_NETFILTER_XT_MATCH_MULTIPORT: "multiport" Multiple port match support: module (pass)
CONFIG_NETFILTER_XT_MATCH_RECENT: "recent" match support: module (pass)
CONFIG_NETFILTER_XT_MATCH_STATISTIC: "statistic" match support: module (pass)
CONFIG_NETFILTER_NETLINK: module (pass)
CONFIG_NF_CONNTRACK: Netfilter connection tracking support: module (pass)
CONFIG_NF_NAT: module (pass)
CONFIG_IP_SET: IP set support: module (pass)
CONFIG_IP_SET_HASH_IP: hash:ip set support: module (pass)
CONFIG_IP_SET_HASH_NET: hash:net set support: module (pass)
CONFIG_IP_VS: IP virtual server support: module (pass)
CONFIG_IP_VS_NFCT: Netfilter connection tracking: built-in (pass)
CONFIG_NF_CONNTRACK_IPV4: IPv4 connetion tracking support (required for NAT): unknown (warning)
CONFIG_NF_REJECT_IPV4: IPv4 packet rejection: module (pass)
CONFIG_NF_NAT_IPV4: IPv4 NAT: unknown (warning)
CONFIG_IP_NF_IPTABLES: IP tables support: module (pass)
CONFIG_IP_NF_FILTER: Packet filtering: module (pass)
CONFIG_IP_NF_TARGET_REJECT: REJECT target support: module (pass)
CONFIG_IP_NF_NAT: iptables NAT support: module (pass)
CONFIG_IP_NF_MANGLE: Packet mangling: module (pass)
CONFIG_NF_DEFRAG_IPV4: module (pass)
CONFIG_NF_CONNTRACK_IPV6: IPv6 connetion tracking support (required for NAT): unknown (warning)
CONFIG_NF_NAT_IPV6: IPv6 NAT: unknown (warning)
CONFIG_IP6_NF_IPTABLES: IP6 tables support: module (pass)
CONFIG_IP6_NF_FILTER: Packet filtering: module (pass)
CONFIG_IP6_NF_MANGLE: Packet mangling: module (pass)
CONFIG_IP6_NF_NAT: ip6tables NAT support: module (pass)
CONFIG_NF_DEFRAG_IPV6: module (pass)
CONFIG_BRIDGE: 802.1d Ethernet Bridging: module (pass)
CONFIG_LLC: module (pass)
CONFIG_STP: module (pass)
CONFIG_EXT4_FS: The Extended 4 (ext4) filesystem: module (pass)
CONFIG_PROC_FS: /proc file system support: built-in (pass)
```

What happened?
After deploying with these conditions:
- role: controller+worker
- `spec.api.tunneledNetworkingMode: true`

the `konnectivity-agent` pods on all controller nodes get stuck in `CrashLoopBackOff` and show the logs below.

Steps to reproduce
1. Deploy a k0s cluster with controller+worker nodes (`k0s controller --enable-worker=true`) and `spec.api.tunneledNetworkingMode: true`.
2. Wait and observe the konnectivity start error.
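The repro steps above roughly correspond to this setup. This is a sketch: the file path is illustrative, `--enable-worker=true` comes from the steps above, and passing the config via `-c` is an assumption:

```yaml
# /etc/k0s/k0s.yaml (illustrative path); started with something like:
#   k0s controller --enable-worker=true -c /etc/k0s/k0s.yaml
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
spec:
  api:
    tunneledNetworkingMode: true   # the setting that triggers the CrashLoopBackOff
```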
Expected behavior
tunneledNetworkingMode, when enabled, should exclude controller+worker nodes automatically and include only worker-only nodes
Actual behavior
The `konnectivity-agent` pods on all controller nodes are stuck in `CrashLoopBackOff`.
Screenshots and logs

`konnectivity-agent` pod log on controller nodes:
- When `tunneledNetworkingMode: true` (the problem reported in this issue)
- When `tunneledNetworkingMode: false` (working fine)

Additional context

Controller nodes' `konnectivity-agent` pod manifests:
- When `tunneledNetworkingMode: true` (the problem reported in this issue)
- When `tunneledNetworkingMode: false` (working fine)

Note the difference at `--bind-address` and `--apiserver-port-mapping=6443:localhost:6443`.