NiklasRosenstein closed this issue 1 year ago.
Yeah, it is a bug in our code; the fix is coming today and will hopefully ship with one of the next releases.
Hey @mikhail-sakhnov, is there any chance this gets released soon? As far as I can tell, this has not made it into the release from 3 weeks ago (`v1.26.2+k0s.1`) or any other release.
I've been trying to manually replicate this fix to get unblocked. I've added the `--auto-mtu=false` option to the `kube-router` DaemonSet configuration, as per the linked PR. After restarting all nodes, the kube-router logs indicate that it's now setting the MTU to 1280 as requested, and all pods come up healthy.
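For anyone following along, that workaround amounts to something like the following in the kube-router DaemonSet. This is only a rough sketch: the image, tag, and the other arguments shown here are illustrative and the k0s-managed manifest may differ; the only relevant change is `--auto-mtu=false`.

```yaml
# Illustrative sketch, not the exact k0s-generated manifest.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-router
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-router
  template:
    metadata:
      labels:
        k8s-app: kube-router
    spec:
      hostNetwork: true
      containers:
        - name: kube-router
          image: cloudnativelabs/kube-router:v1.5.1   # image/tag assumed for illustration
          args:
            - --run-router=true
            - --run-firewall=true
            - --run-service-proxy=false
            - --auto-mtu=false   # disable MTU auto-detection so the configured MTU (1280) is used
```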
Unfortunately, I'm still facing an issue where I can only read the container logs for pods scheduled on the one controller+worker node that I have (which is `node1`, out of 5 nodes total). The behavior is the same whether I do this from the outside, from `node1` (controller), or from `node2` (worker). That behavior seems indicative of another MTU issue. Am I missing a place to fix the MTU value that would be handled by the linked PR?
```console
$ kubectl logs test-on-node1
foo
$ kubectl logs test-on-node2
Error from server: Get "https://100.122.243.86:10250/containerLogs/default/test2/test2": dial tcp 100.122.243.86:10250: i/o timeout
```
`100.122.243.86` is the IP of `node2`. I can cURL the same URL without issues (except that I get an `Unauthorized` response, which is expected because I'm not giving it any credentials when cURLing).
I'm not sure where this request is being made from; is it the Kubernetes API server proxying the container logs to `kubectl`?
At first glance, it seems to work well after I redeployed the cluster using Calico instead of kube-router:
```yaml
spec:
  k0s:
    version: 1.26.2+k0s.1
    dynamicConfig: false
    config:
      spec:
        network:
          provider: calico
          calico:
            mtu: 1280 # Tailscale
            withWindowsNodes: true
```
Platform
Version
v1.26.1+k0s.0
Sysinfo
```text
Machine ID: "630271812dcb188a8174f534ea1fa1f491f6f0838c4f788928b5f58c6c3f230a" (from machine) (pass)
Total memory: 125.5 GiB (pass)
Disk space available for /var/lib/k0s: 236.7 GiB (pass)
Operating system: Linux (pass)
Linux kernel release: 5.15.0-60-generic (pass)
Max. file descriptors per process: current: 1048576 / max: 1048576 (pass)
Executable in path: modprobe: /usr/sbin/modprobe (pass)
/proc file system: mounted (0x9fa0) (pass)
Control Groups: version 1 (pass)
cgroup controller "cpu": available (pass)
cgroup controller "cpuacct": available (pass)
cgroup controller "cpuset": available (pass)
cgroup controller "memory": available (pass)
cgroup controller "devices": available (pass)
cgroup controller "freezer": available (pass)
cgroup controller "pids": available (pass)
cgroup controller "hugetlb": available (pass)
cgroup controller "blkio": available (pass)
CONFIG_CGROUPS: Control Group support: built-in (pass)
CONFIG_CGROUP_FREEZER: Freezer cgroup subsystem: built-in (pass)
CONFIG_CGROUP_PIDS: PIDs cgroup subsystem: built-in (pass)
CONFIG_CGROUP_DEVICE: Device controller for cgroups: built-in (pass)
CONFIG_CPUSETS: Cpuset support: built-in (pass)
CONFIG_CGROUP_CPUACCT: Simple CPU accounting cgroup subsystem: built-in (pass)
CONFIG_MEMCG: Memory Resource Controller for Control Groups: built-in (pass)
CONFIG_CGROUP_HUGETLB: HugeTLB Resource Controller for Control Groups: built-in (pass)
CONFIG_CGROUP_SCHED: Group CPU scheduler: built-in (pass)
CONFIG_FAIR_GROUP_SCHED: Group scheduling for SCHED_OTHER: built-in (pass)
CONFIG_CFS_BANDWIDTH: CPU bandwidth provisioning for FAIR_GROUP_SCHED: built-in (pass)
CONFIG_BLK_CGROUP: Block IO controller: built-in (pass)
CONFIG_NAMESPACES: Namespaces support: built-in (pass)
CONFIG_UTS_NS: UTS namespace: built-in (pass)
CONFIG_IPC_NS: IPC namespace: built-in (pass)
CONFIG_PID_NS: PID namespace: built-in (pass)
CONFIG_NET_NS: Network namespace: built-in (pass)
CONFIG_NET: Networking support: built-in (pass)
CONFIG_INET: TCP/IP networking: built-in (pass)
CONFIG_IPV6: The IPv6 protocol: built-in (pass)
CONFIG_NETFILTER: Network packet filtering framework (Netfilter): built-in (pass)
CONFIG_NETFILTER_ADVANCED: Advanced netfilter configuration: built-in (pass)
CONFIG_NETFILTER_XTABLES: Netfilter Xtables support: module (pass)
CONFIG_NETFILTER_XT_TARGET_REDIRECT: REDIRECT target support: module (pass)
CONFIG_NETFILTER_XT_MATCH_COMMENT: "comment" match support: module (pass)
CONFIG_NETFILTER_XT_MARK: nfmark target and match support: module (pass)
CONFIG_NETFILTER_XT_SET: set target and match support: module (pass)
CONFIG_NETFILTER_XT_TARGET_MASQUERADE: MASQUERADE target support: module (pass)
CONFIG_NETFILTER_XT_NAT: "SNAT and DNAT" targets support: module (pass)
CONFIG_NETFILTER_XT_MATCH_ADDRTYPE: "addrtype" address type match support: module (pass)
CONFIG_NETFILTER_XT_MATCH_CONNTRACK: "conntrack" connection tracking match support: module (pass)
CONFIG_NETFILTER_XT_MATCH_MULTIPORT: "multiport" Multiple port match support: module (pass)
CONFIG_NETFILTER_XT_MATCH_RECENT: "recent" match support: module (pass)
CONFIG_NETFILTER_XT_MATCH_STATISTIC: "statistic" match support: module (pass)
CONFIG_NETFILTER_NETLINK: module (pass)
CONFIG_NF_CONNTRACK: Netfilter connection tracking support: module (pass)
CONFIG_NF_NAT: module (pass)
CONFIG_IP_SET: IP set support: module (pass)
CONFIG_IP_SET_HASH_IP: hash:ip set support: module (pass)
CONFIG_IP_SET_HASH_NET: hash:net set support: module (pass)
CONFIG_IP_VS: IP virtual server support: module (pass)
CONFIG_IP_VS_NFCT: Netfilter connection tracking: built-in (pass)
CONFIG_NF_CONNTRACK_IPV4: IPv4 connetion tracking support (required for NAT): unknown (warning)
CONFIG_NF_REJECT_IPV4: IPv4 packet rejection: module (pass)
CONFIG_NF_NAT_IPV4: IPv4 NAT: unknown (warning)
CONFIG_IP_NF_IPTABLES: IP tables support: module (pass)
CONFIG_IP_NF_FILTER: Packet filtering: module (pass)
CONFIG_IP_NF_TARGET_REJECT: REJECT target support: module (pass)
CONFIG_IP_NF_NAT: iptables NAT support: module (pass)
CONFIG_IP_NF_MANGLE: Packet mangling: module (pass)
CONFIG_NF_DEFRAG_IPV4: module (pass)
CONFIG_NF_CONNTRACK_IPV6: IPv6 connetion tracking support (required for NAT): unknown (warning)
CONFIG_NF_NAT_IPV6: IPv6 NAT: unknown (warning)
CONFIG_IP6_NF_IPTABLES: IP6 tables support: module (pass)
CONFIG_IP6_NF_FILTER: Packet filtering: module (pass)
CONFIG_IP6_NF_MANGLE: Packet mangling: module (pass)
CONFIG_IP6_NF_NAT: ip6tables NAT support: module (pass)
CONFIG_NF_DEFRAG_IPV6: module (pass)
CONFIG_BRIDGE: 802.1d Ethernet Bridging: module (pass)
CONFIG_LLC: module (pass)
CONFIG_STP: module (pass)
CONFIG_EXT4_FS: The Extended 4 (ext4) filesystem: built-in (pass)
CONFIG_PROC_FS: /proc file system support: built-in (pass)
```
What happened?
I'm trying to deploy an on-prem Kubernetes cluster with k0s. The host is connected to a Tailscale network. Pods on the cluster must be able to communicate with the rest of the Tailscale VPN.
This is currently blocked because I can't find a way to reduce the MTU to 1280 from the 1500 that `kube-router` auto-detects.

I've updated the `kuberouter` settings accordingly in `./k0sctl.yaml` under `spec.k0s.config.spec.network.kuberouter`:
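A minimal sketch of what that `k0sctl.yaml` section can look like (illustrative only, not the exact file from this report; it assumes the `autoMTU` and `mtu` fields of the `kuberouter` spec):

```yaml
# Illustrative sketch of the relevant part of ./k0sctl.yaml.
spec:
  k0s:
    config:
      spec:
        network:
          provider: kuberouter
          kuberouter:
            autoMTU: false   # assumed field; disables kube-router's MTU auto-detection
            mtu: 1280        # Tailscale MTU
```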
After `./k0sctl apply`, the `kube-router-cfg` config contains what I would expect:
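For reference, the expectation here is roughly that the configured MTU shows up on the bridge plugin inside `cni-conf.json` in that ConfigMap. A sketch of that shape, assuming the standard CNI bridge plugin layout (names and versions are illustrative, not the exact k0s-generated content):

```yaml
# Assumed shape of the kube-router-cfg ConfigMap after the change.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-router-cfg
  namespace: kube-system
data:
  cni-conf.json: |
    {
      "cniVersion": "0.3.1",
      "name": "generic-veth",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "kube-bridge",
          "mtu": 1280,
          "isDefaultGateway": true,
          "ipam": { "type": "host-local" }
        }
      ]
    }
```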
Yet, `kube-router` still auto-detects the MTU: (log line source)
I'm thinking that this may be a bug in how `k0s` configures `kube-router`? But I'm a bit out of my depth here.

Steps to reproduce
No response
Expected behavior
`kube-router` is configured to use the MTU set in the k0s config.

Actual behavior
`kube-router` still auto-detects the MTU.

Screenshots and logs
No response
Additional context
No response