I'm not entirely sure if this is a k0sctl or k0s issue.
Could you share the k0sctl YAML you used so we can have a look and test the same config?
Yes, this is the fixed version:
```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
    - openSSH:
        address: some-host
      role: single
      noTaints: true
      dataDir: /var/kubernetes/k0s
  k0s:
    version: v1.29.1+k0s.1
    dynamicConfig: false
    config:
      apiVersion: k0s.k0sproject.io/v1beta1
      kind: ClusterConfig
      metadata:
        name: some-cluster
      spec:
        api:
          k0sApiPort: 9443
          port: 6443
        extensions:
          storage:
            create_default_storage_class: false
            type: external_storage
        installConfig:
          users:
            etcdUser: etcd
            kineUser: kube-apiserver
            konnectivityUser: konnectivity-server
            kubeAPIserverUser: kube-apiserver
            kubeSchedulerUser: kube-scheduler
        konnectivity:
          adminPort: 8133
          agentPort: 8132
        network:
          provider: calico
          calico:
            mode: ipip
            overlay: Always
          clusterDomain: cluster.local
          dualStack: {}
          kubeProxy:
            iptables:
              masqueradeAll: true
              minSyncPeriod: 0s
              syncPeriod: 0s
            ipvs:
              minSyncPeriod: 0s
              syncPeriod: 0s
              tcpFinTimeout: 0s
              tcpTimeout: 0s
              udpTimeout: 0s
            metricsBindAddress: 0.0.0.0:10249
            mode: iptables
          kuberouter:
            autoMTU: true
            hairpin: Enabled
            ipMasq: false
            mtu: 0
          podCIDR: 10.244.0.0/16
          serviceCIDR: 10.96.0.0/12
        scheduler: {}
        storage:
          type: etcd
        telemetry:
          enabled: true
```
And this is the broken version:
```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
    - openSSH:
        address: some-host
      role: single
      noTaints: true
      dataDir: /var/kubernetes/k0s
  k0s:
    version: v1.29.1+k0s.1
    dynamicConfig: false
    config:
      apiVersion: k0s.k0sproject.io/v1beta1
      kind: ClusterConfig
      metadata:
        name: some-cluster
      spec:
        api:
          k0sApiPort: 9443
          port: 6443
        extensions:
          storage:
            create_default_storage_class: false
            type: external_storage
        installConfig:
          users:
            etcdUser: etcd
            kineUser: kube-apiserver
            konnectivityUser: konnectivity-server
            kubeAPIserverUser: kube-apiserver
            kubeSchedulerUser: kube-scheduler
        konnectivity:
          adminPort: 8133
          agentPort: 8132
        network:
          provider: calico
          calico:
            mode: ipip
            overlay: Always
          clusterDomain: cluster.local
          dualStack: {}
          kubeProxy:
            iptables:
              masqueradeAll: true
              minSyncPeriod: 0s
              syncPeriod: 0s
            ipvs:
              minSyncPeriod: 0s
              syncPeriod: 0s
              tcpFinTimeout: 0s
              tcpTimeout: 0s
              udpTimeout: 0s
            metricsBindAddress: 0.0.0.0:10249
            mode: iptables
          kuberouter:
            autoMTU: true
            hairpin: Enabled
            ipMasq: false
            mtu: 0
          nodeLocalLoadBalancing:
            enabled: true
            envoyProxy:
              apiServerBindPort: 7443
              image:
                image: docker.io/envoyproxy/envoy-distroless
              konnectivityServerBindPort: 7132
            type: EnvoyProxy
          podCIDR: 10.244.0.0/16
          serviceCIDR: 10.96.0.0/12
        scheduler: {}
        storage:
          type: etcd
        telemetry:
          enabled: true
```
The only difference is the nodeLocalLoadBalancing section, quoted below.
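For reference, this is the part that exists only in the broken version (copied from the config above):

```yaml
nodeLocalLoadBalancing:
  enabled: true
  envoyProxy:
    apiServerBindPort: 7443
    image:
      image: docker.io/envoyproxy/envoy-distroless
    konnectivityServerBindPort: 7132
  type: EnvoyProxy
```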
Right. This is actually documented. But in contrast to the conflict with an external API address, this case is checked yet not reported as an error. k0s should probably error out in this case.
> I'm not entirely sure if this is a k0sctl or k0s issue.
Despite k0s not being fail-fast here, this is kinda also another instance in which k0sproject/k0sctl#475 would have helped.
Platform
Version
v1.29.1+k0s.1
Sysinfo
`k0s sysinfo`
What happened?
I deployed a single-node k0s cluster using k0sctl, based on a config for a multi-node deployment. I adjusted the config but completely forgot that NLLB doesn't make sense on a single-node deployment.
The k0s deployment went through, but the network plugin (Calico) didn't become ready. After inspecting it and some googling, I found that kube-proxy was trying to access the API server via the NLLB port, even though no NLLB pods had been started. Disabling NLLB and restarting kube-proxy fixed the issue.
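A minimal sketch of the config side of that fix, using the same field names as in the broken config above (NLLB is disabled by default in k0s, so dropping the whole section, as in the fixed config, has the same effect as disabling it explicitly):

```yaml
# Sketch: only the relevant part of the embedded k0s ClusterConfig spec is shown.
spec:
  network:
    nodeLocalLoadBalancing:
      enabled: false   # or remove the section entirely
# kube-proxy kept pointing at the NLLB port until it was restarted,
# so the kube-proxy pods in kube-system had to be restarted as well.
```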
So there is an inconsistency here: kube-proxy gets configured to go through NLLB even though no NLLB pods are deployed.
Steps to reproduce
Expected behavior
Actual behavior
No NLLB pod is deployed, but kube-proxy is still configured to access the API server via Envoy.
Screenshots and logs
No response
Additional context
No response