Using the new eBPF program attach API no longer breaks the Cilium network.
Tested on EKS with eksctl on cluster versions 1.29 and 1.31. On 1.29, eBPF capture doesn't work because of https://github.com/kubeshark/tracer/issues/108, but after the fix the cluster itself continues to work without issues.
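The description doesn't spell out which attach mechanism the tracer now uses. Purely as an illustration of the idea, the sketch below (an assumption, not the tracer's actual code) uses the TCX link attach from the cilium/ebpf Go library, which adds a program alongside any existing TC programs instead of replacing them, so Cilium's datapath programs on the same interface are left intact:

```go
package main

import (
	"log"
	"net"

	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/asm"
	"github.com/cilium/ebpf/link"
)

func main() {
	// Stand-in classifier that just returns TC_ACT_OK (0); a real capture
	// program would be loaded from a compiled BPF object instead.
	prog, err := ebpf.NewProgram(&ebpf.ProgramSpec{
		Type:    ebpf.SchedCLS,
		License: "GPL",
		Instructions: asm.Instructions{
			asm.Mov.Imm(asm.R0, 0),
			asm.Return(),
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer prog.Close()

	iface, err := net.InterfaceByName("eth0") // interface name is an example
	if err != nil {
		log.Fatal(err)
	}

	// TCX links (kernel 6.6+) are additive: they attach next to whatever TC
	// programs are already on the interface instead of overwriting them, so
	// a CNI like Cilium keeps its own datapath programs in place.
	l, err := link.AttachTCX(link.TCXOptions{
		Interface: iface.Index,
		Program:   prog,
		Attach:    ebpf.AttachTCXIngress,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer l.Close()
}
```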
* install Kubeshark and test (without the `-disable-ebpf` option in both the sniffer and the tracer); the full test scenario is described below
Links:
* https://isovalent.com/blog/post/eks-byocni-cilium/
* https://medium.com/@amitmavgupta/cilium-installing-cilium-in-eks-with-no-kube-proxy-86f54a56c360
resolves https://github.com/kubeshark/worker/issues/263
Test scenario:
cluster.yaml:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name:
  region:
  version:
iam:
  withOIDC: true
addonsConfig:
  disableDefaultAddons: true
  addons:
```
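The cluster is then created from this config. The exact command isn't included in the original description, but it presumably mirrors the nodegroup step below:

```sh
eksctl create cluster -f ./cluster.yaml
```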
With the default add-ons disabled and no node group created yet, there is no CNI and nowhere to schedule CoreDNS, so the pods stay Pending:

```
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-844dbb9f6f-hl2w7   0/1     Pending   0          2m34s
kube-system   coredns-844dbb9f6f-wmv7m   0/1     Pending   0          2m34s
```
Then install Cilium:

```sh
helm install cilium cilium/cilium --version 1.16.3 \
  --namespace kube-system \
  --set kubeProxyReplacement=true \
  --set k8sServiceHost=$(aws eks describe-cluster --name --region --query "cluster.endpoint" --output text | sed 's/https:\/\///') \
  --set k8sServicePort=443 \
  --set ipam.mode=cluster-pool \
  --set enableIPv4Masquerade=true \
  --set enableIPv6Masquerade=true \
  --set nodeinit.enabled=true \
  --set bpf.masquerade=true \
  --set enableXDP=true \
  --set enableHubble=true \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true
```
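At this stage the cluster still has no worker nodes, so no Cilium agent pods are scheduled yet. A quick sanity check of the release (commands assumed, not part of the original write-up):

```sh
helm status cilium -n kube-system
kubectl -n kube-system get daemonset cilium   # DESIRED stays 0 until the node group joins
```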
nodegroup.yaml:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name:
  region:
managedNodeGroups:
```
```sh
eksctl create nodegroup -f ./nodegroup.yaml
```
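The original scenario ends here. The remaining steps, verifying the CNI and then installing Kubeshark as noted in the bullet at the top, would look roughly like this (commands assumed, not taken from the PR):

```sh
# Once the nodes join, Cilium and CoreDNS should become Ready
# (the status check requires the cilium CLI).
cilium status --wait
kubectl -n kube-system get pods -o wide

# Install Kubeshark with eBPF capture left enabled,
# i.e. without passing -disable-ebpf to the sniffer or the tracer.
kubeshark tap
```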