Closed Kun483 closed 2 months ago
This was resolved by adding port 8472/UDP to the node security group in AWS. Anyone hitting these kinds of errors in the future should check their firewall rules and make sure the necessary ports are open (see https://docs.cilium.io/en/stable/operations/system_requirements/#firewall-rules). With CAPA, the rule can be added declaratively via `cniIngressRules` on the `AWSCluster`:
```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSCluster
metadata:
  name: ${CLUSTER_NAME}
spec:
  network:
    vpc:
      availabilityZoneUsageLimit: 1
    cni:
      cniIngressRules:
        - description: vxlan-overlay
          fromPort: 8472
          protocol: udp
          toPort: 8472
```
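To confirm the rule actually landed on the node security group, you can inspect the output of `aws ec2 describe-security-groups`. A rough sketch below: the JSON is a trimmed, representative example (not taken from this cluster), and the group ID is a placeholder for your node security group.

```shell
# Trimmed, representative (assumed) output of:
#   aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0
# (group ID is a placeholder; substitute your node security group)
sg_json='{"IpPermissions":[{"IpProtocol":"udp","FromPort":8472,"ToPort":8472}]}'

# Crude check that an 8472/udp ingress rule is present.
if echo "$sg_json" | grep -q '"IpProtocol": *"udp"' \
   && echo "$sg_json" | grep -q '"FromPort": *8472'; then
  echo "8472/udp rule present"
else
  echo "8472/udp rule missing"
fi
# prints: 8472/udp rule present
```

In practice you would pipe the real `describe-security-groups` output through the same check instead of the inline sample.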
I used a kind cluster as the bootstrap cluster and clusterctl to launch an AWS cluster with 3 control-plane nodes and 1 worker node. I specified the providers below in `.cluster_api/clusterctl.yaml`, basically following this guide for exporting the variables: https://cluster-api.sigs.k8s.io/user/quick-start.html. Then I applied the attached files: aws_multi-cp_cilium _yamls_sharable.zip. Next, I applied a YAML file in the workload cluster to test Cilium connectivity. I observed that multiple pods were in the `CrashLoopBackOff` state.

Environment:
- CAPA: v1.5.2
- CAPI: v1.3.2
- MicroK8s Bootstrap: v0.6.6
- MicroK8s Control Plane: v0.6.6
- Kernel version: 6.2.0-1009-aws
- Container runtime: containerd://1.6.28
- OS: Ubuntu 22.04.3
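For anyone triaging a similar failure, a quick way to list only the crashing pods from `kubectl get pods -A` output is to filter on the STATUS column. The sample output below is illustrative (the pod names are made up, not from this cluster):

```shell
# Representative (assumed) `kubectl get pods -A` output; pod names are made up.
kubectl_output='NAMESPACE     NAME           READY   STATUS             RESTARTS   AGE
kube-system   cilium-abc12   0/1     CrashLoopBackOff   7          10m
kube-system   coredns-x9k2   1/1     Running            0          10m'

# Print namespace/name for every pod stuck in CrashLoopBackOff.
echo "$kubectl_output" | awk '$4 == "CrashLoopBackOff" {print $1 "/" $2}'
# prints: kube-system/cilium-abc12
```

From there, `kubectl describe pod` and the pod logs usually point at the underlying cause (here, blocked VXLAN traffic on 8472/UDP).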