Closed: axkng closed this issue 7 months ago
In my opinion, it would be useful to share at least the network policy manifest.
That is correct. I used this manifest to set up a quick test environment:
---
apiVersion: v1
kind: Namespace
metadata:
  name: test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: test
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: test
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-ingress-deny
  namespace: test
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress: []
It does not matter whether I try to connect to the pod directly or via the service; I am always able to connect.
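For reference, this is roughly how I am testing connectivity (the test pod name and curl image are just what I happened to use):

```shell
# Run a throwaway client pod in the default namespace (i.e. NOT matched by the
# policy) and try to reach nginx through the service. With a working ingress
# deny, this should time out instead of returning the nginx welcome page.
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -m 5 http://nginx.test.svc.cluster.local

# Same test against the pod IP directly:
POD_IP=$(kubectl get pod -n test -l app=nginx -o jsonpath='{.items[0].status.podIP}')
kubectl run curl-test2 --rm -it --restart=Never --image=curlimages/curl -- \
  curl -m 5 "http://${POD_IP}"
```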
Your NetworkPolicy seems to be wrong. Please try this one:
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: test
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
  - Ingress
Hi @vladyslav-mahilevskyi, I updated the policy, but I am still able to connect.
Which logs should I look at to see whether Calico even evaluates the config? To me it seems like the policy is simply ignored.
@Furragen I would look at iptables on the host running the nginx pod to see if there are any rules programmed starting with cali-tw.
You can also check the calico/node pod on that host for warning / error level logs.
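A minimal sketch of both checks (the calico-system namespace and k8s-app label assume an operator-based install; adjust to kube-system for a manifest-based install):

```shell
# On the node running the nginx pod: list the per-workload ingress chains
# ("tw" = "to workload"). An empty result means no policy was programmed.
sudo iptables-save | grep -- '-A cali-tw'

# Warning/error level messages from the calico-node pods:
kubectl logs -n calico-system -l k8s-app=calico-node --prefix \
  | grep -iE 'warn|error'
```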
Hi @caseydavenport,
thanks for the suggestion.
I looked at the node and saw a lot of rules starting with cali-tw.
The logs of the calico pod on the node contain no errors, but I spotted this, for example:
2022-03-24 06:54:37.956 [INFO][52] felix/int_dataplane.go 1520: Received *proto.WorkloadEndpointUpdate update from calculation graph msg=id:<orchestrator_id:"k8s" workload_id:"test/nginx-deployment-7cdf68cd58-vkz5z" endpoint_id:"eth0" > endpoint:<state:"active" name:"enifed5b5a59d8" profile_ids:"kns.test" profile_ids:"ksa.test.default" ipv6_nets:"xxxxxxx/128" tiers:<name:"default" ingress_policies:"test/knp.default.default-ingress-egress-deny"
This looks to me like a hint that Calico at least knows about the policy.
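To double-check that, one could also ask Calico directly for its view of the policy (this assumes calicoctl is installed and configured against the cluster; Kubernetes NetworkPolicies appear with a knp.default. name prefix, matching the log line above):

```shell
# List the NetworkPolicies Calico has translated for the test namespace.
calicoctl get networkpolicy -n test -o wide

# Dump the full translated policy for inspection:
calicoctl get networkpolicy -n test -o yaml
```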
Calico version: 3.22.1
Orchestrator version (e.g. kubernetes, mesos, rkt): Kubernetes 1.21.9
Operating System and version: Bottlerocket OS 1.6.2 (on ARM)
I was reminded of this issue, but if you are not seeing the ipset error logs, then that OS is probably using a compatible ipset version.
Are you able to share the relevant cali-tw iptables rules? (With any redactions as needed.)
Hi, sorry for coming back to this issue so late.
Here are the (probably relevant) iptables rules from the node running the pod that should not be reachable:
Chain cali-FORWARD (1 references)
target prot opt source destination
MARK all -- anywhere anywhere /* cali:vjrMJCRpqwy5oRoX */ MARK and 0xfff1ffff
cali-from-hep-forward all -- anywhere anywhere /* cali:A_sPAO0mcxbT9mOV */ mark match 0x0/0x10000
cali-from-wl-dispatch all -- anywhere anywhere /* cali:zSuvpsKZMnLBOFg- */
cali-to-wl-dispatch all -- anywhere anywhere /* cali:ZD6sBvTDTQIcyWe3 */
cali-to-hep-forward all -- anywhere anywhere /* cali:VyilXNpEbr584kvI */
cali-cidr-block all -- anywhere anywhere /* cali:KHV1shAhs1mIlf-Z */
Chain cali-INPUT (1 references)
target prot opt source destination
cali-wl-to-host all -- anywhere anywhere [goto] /* cali:qHhkrYTAgTaLdfV9 */
ACCEPT all -- anywhere anywhere /* cali:hxZdwQ9XtFGqp4eQ */ mark match 0x10000/0x10000
MARK all -- anywhere anywhere /* cali:iBWm42IlrbWEfWiE */ MARK and 0xfff0ffff
cali-from-host-endpoint all -- anywhere anywhere /* cali:QSIaGKBuw3MaQJ0R */
ACCEPT all -- anywhere anywhere /* cali:3sTF5X1tSrD171R_ */ /* Host endpoint policy accepted packet. */ mark match 0x10000/0x10000
Chain cali-OUTPUT (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:Mq1_rAdXXH3YkrzW */ mark match 0x10000/0x10000
RETURN all -- anywhere anywhere /* cali:hq_0Rh96d-5PSih6 */
MARK all -- anywhere anywhere /* cali:evsmyFWMGY7CMH5S */ MARK and 0xfff0ffff
cali-to-host-endpoint all -- anywhere anywhere /* cali:-DC5XMSa36Rd6y6U */ ! ctstate DNAT
ACCEPT all -- anywhere anywhere /* cali:93m1J6huuQgBmflk */ /* Host endpoint policy accepted packet. */ mark match 0x10000/0x10000
Chain cali-cidr-block (1 references)
target prot opt source destination
Chain cali-from-hep-forward (1 references)
target prot opt source destination
Chain cali-from-host-endpoint (1 references)
target prot opt source destination
Chain cali-from-wl-dispatch (2 references)
target prot opt source destination
cali-from-wl-dispatch-0 all -- anywhere anywhere [goto] /* cali:5bfEcqAkIViTouw0 */
cali-from-wl-dispatch-1 all -- anywhere anywhere [goto] /* cali:VEiCWrhOzi59wZ35 */
cali-from-wl-dispatch-2 all -- anywhere anywhere [goto] /* cali:0pjJjdpVZy2MnyvT */
cali-from-wl-dispatch-3 all -- anywhere anywhere [goto] /* cali:1QVVppFtba-U3aYF */
cali-from-wl-dispatch-4 all -- anywhere anywhere [goto] /* cali:olLY3qiqBDb840dn */
cali-from-wl-dispatch-6 all -- anywhere anywhere [goto] /* cali:y90ovFAF9P_B6eMz */
cali-fw-eni738c9293371 all -- anywhere anywhere [goto] /* cali:FScBYQr5SOF7tBTU */
cali-from-wl-dispatch-8 all -- anywhere anywhere [goto] /* cali:NuJ5lk3eLdkVUPGI */
cali-from-wl-dispatch-a all -- anywhere anywhere [goto] /* cali:k2uqyiEJhBRDVS27 */
cali-from-wl-dispatch-b all -- anywhere anywhere [goto] /* cali:OvtnEBzpu0tPAAqZ */
cali-from-wl-dispatch-c all -- anywhere anywhere [goto] /* cali:Ofnrd3BVWs3wskvQ */
cali-from-wl-dispatch-d all -- anywhere anywhere [goto] /* cali:x0ORgHrbl7ny2VFO */
cali-from-wl-dispatch-e all -- anywhere anywhere [goto] /* cali:PegMwWuKOH23FUlp */
DROP all -- anywhere anywhere /* cali:hDMxmBR2skFOXmLZ */ /* Unknown interface */
Chain cali-from-wl-dispatch-0 (1 references)
target prot opt source destination
cali-fw-eni03531b95e5c all -- anywhere anywhere [goto] /* cali:Q8mPC5awYpdb_XQs */
cali-fw-eni0716ba4fdd4 all -- anywhere anywhere [goto] /* cali:aDLQDWP7OCrjGT8y */
cali-fw-eni0c6784024a8 all -- anywhere anywhere [goto] /* cali:k-ND-fcyJLmibq0a */
DROP all -- anywhere anywhere /* cali:qrrmjeOECjB_TcjE */ /* Unknown interface */
Chain cali-from-wl-dispatch-1 (1 references)
target prot opt source destination
cali-fw-eni1c44d3ec47d all -- anywhere anywhere [goto] /* cali:DmbyfXLi5_zIPKjt */
cali-fw-eni1ed40b160d7 all -- anywhere anywhere [goto] /* cali:lwa2wUIsywgGe9n6 */
DROP all -- anywhere anywhere /* cali:wXCsGW5RYbbd8evC */ /* Unknown interface */
Chain cali-from-wl-dispatch-2 (1 references)
target prot opt source destination
cali-fw-eni21a99200642 all -- anywhere anywhere [goto] /* cali:x3PVFqz8ecb4-i4E */
cali-fw-eni22595fc2f5f all -- anywhere anywhere [goto] /* cali:i7w0I9tirDEyK4Tk */
cali-fw-eni2472895fa7d all -- anywhere anywhere [goto] /* cali:viWl20CjWDLBcZko */
DROP all -- anywhere anywhere /* cali:3bDacLygqELkdLXU */ /* Unknown interface */
Chain cali-from-wl-dispatch-3 (1 references)
target prot opt source destination
cali-fw-eni35374971dd7 all -- anywhere anywhere [goto] /* cali:qh8wNgQeMYsNfNf2 */
cali-fw-eni3e26bf888ec all -- anywhere anywhere [goto] /* cali:NNbz9S6ob_XEomIc */
cali-fw-eni3f83a050a8e all -- anywhere anywhere [goto] /* cali:WNjOfUnpvfdLeOb9 */
DROP all -- anywhere anywhere /* cali:wH5a6evqabd18zJz */ /* Unknown interface */
Chain cali-from-wl-dispatch-4 (1 references)
target prot opt source destination
cali-fw-eni47b2183eefd all -- anywhere anywhere [goto] /* cali:hj92MT1MTjvxE-D7 */
cali-fw-eni4a659be2fba all -- anywhere anywhere [goto] /* cali:wis6vY8434mpgFM9 */
DROP all -- anywhere anywhere /* cali:w8RPTQGqucNov5qm */ /* Unknown interface */
Chain cali-from-wl-dispatch-6 (1 references)
target prot opt source destination
cali-fw-eni63df7f208e6 all -- anywhere anywhere [goto] /* cali:0bfyjbBEB_az2t3J */
cali-fw-eni67b96ed3bad all -- anywhere anywhere [goto] /* cali:zYJ1kZJWgPIpBTUS */
DROP all -- anywhere anywhere /* cali:d29kjisIFO9Yqg5b */ /* Unknown interface */
Chain cali-from-wl-dispatch-8 (1 references)
target prot opt source destination
cali-fw-eni81a17f173bc all -- anywhere anywhere [goto] /* cali:K9TacIT8YzVlK_oU */
cali-fw-eni8638d1b7efd all -- anywhere anywhere [goto] /* cali:OxGShVRC882Xi478 */
cali-fw-eni88f62525f10 all -- anywhere anywhere [goto] /* cali:zjKcgJsU_CbHFGvA */
cali-fw-eni8de7ddc7ed1 all -- anywhere anywhere [goto] /* cali:Pnzay4mr2vickXi5 */
DROP all -- anywhere anywhere /* cali:3wVlMie0tUUKrXUu */ /* Unknown interface */
Chain cali-from-wl-dispatch-a (1 references)
target prot opt source destination
cali-fw-enia0ec04a7ddc all -- anywhere anywhere [goto] /* cali:xVp_fydlQxLqH7RD */
cali-fw-enia8383c92c15 all -- anywhere anywhere [goto] /* cali:fsOsHwlxoW6EQIEt */
DROP all -- anywhere anywhere /* cali:14KvsWcxQRO26f9s */ /* Unknown interface */
Chain cali-from-wl-dispatch-b (1 references)
target prot opt source destination
cali-fw-enib07743e04e4 all -- anywhere anywhere [goto] /* cali:78NixuWDG7tXPIrt */
cali-fw-enib8b8e79f27b all -- anywhere anywhere [goto] /* cali:Fu2xTfgLnNeSQ7g0 */
cali-fw-enibae4261efb6 all -- anywhere anywhere [goto] /* cali:kq-noQgKRuPzOjrh */
cali-fw-enibb9994cc117 all -- anywhere anywhere [goto] /* cali:Hu_0WCv3qFKb_TI1 */
DROP all -- anywhere anywhere /* cali:Ps59kBG4evmvmvJx */ /* Unknown interface */
Chain cali-from-wl-dispatch-c (1 references)
target prot opt source destination
cali-fw-enic5541176e77 all -- anywhere anywhere [goto] /* cali:ap5cN2dBabmOdSrO */
cali-fw-enic951d8cb8d3 all -- anywhere anywhere [goto] /* cali:N2R_IWxVGykxddK5 */
cali-fw-enicacbc4bcc26 all -- anywhere anywhere [goto] /* cali:JDOQ5hoCXpMWsdZ- */
DROP all -- anywhere anywhere /* cali:ILb5ejH_Ir81vRJ9 */ /* Unknown interface */
Chain cali-from-wl-dispatch-d (1 references)
target prot opt source destination
cali-fw-enid328ab451ad all -- anywhere anywhere [goto] /* cali:93heylguJk0NCaWF */
cali-fw-enid35349572f4 all -- anywhere anywhere [goto] /* cali:ubFS5EjqKXpKZc4a */
cali-fw-enid42e7a65293 all -- anywhere anywhere [goto] /* cali:G4mNdwm6OZAJN8R0 */
cali-fw-enide1990db225 all -- anywhere anywhere [goto] /* cali:gPzWFHS_A9lO2qPw */
DROP all -- anywhere anywhere /* cali:PV1btbYH5oz04nnm */ /* Unknown interface */
Chain cali-from-wl-dispatch-e (1 references)
target prot opt source destination
cali-fw-enie1a26225264 all -- anywhere anywhere [goto] /* cali:834Q4P6lpl2RSqiW */
cali-fw-enie2ff26c14db all -- anywhere anywhere [goto] /* cali:FP4jWiF0u8kWruuE */
cali-fw-enie4002b0da75 all -- anywhere anywhere [goto] /* cali:nPFi5c8w0DZt7zDZ */
cali-fw-eniea63e57b06b all -- anywhere anywhere [goto] /* cali:wCVpxCHiPghtlvJ2 */
cali-fw-enieb9ddca3403 all -- anywhere anywhere [goto] /* cali:ZN5hDTWsEs1IuDQh */
DROP all -- anywhere anywhere /* cali:CQPoqlJxa-gkAvyI */ /* Unknown interface */
Chain cali-fw-eni0c6784024a8 (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:gwOdTMEgpBNmEIAC */ ctstate RELATED,ESTABLISHED
DROP all -- anywhere anywhere /* cali:5pevQBDNVhe7ULzV */ ctstate INVALID
MARK all -- anywhere anywhere /* cali:lfDrkm99QmTKW4i2 */ MARK and 0xfffeffff
DROP udp -- anywhere anywhere /* cali:E3IWdSCA_K2vV3WF */ /* Drop VXLAN encapped packets originating in workloads */ multiport dports 4789
DROP ipv4 -- anywhere anywhere /* cali:rLepXDGSwAgHM_ML */ /* Drop IPinIP encapped packets originating in workloads */
cali-pro-_kJqfZpgUe7r2t4A-14 all -- anywhere anywhere /* cali:RJoyHhT9nGj3DGRp */
RETURN all -- anywhere anywhere /* cali:M2_S9umcqbuGLCZZ */ /* Return if profile accepted */ mark match 0x10000/0x10000
cali-pro-_4yi5_iSUAwsU8zMHTk all -- anywhere anywhere /* cali:IFfxtu7Rbv36I0h3 */
RETURN all -- anywhere anywhere /* cali:Pi61Yd7-OJbbZpR4 */ /* Return if profile accepted */ mark match 0x10000/0x10000
DROP all -- anywhere anywhere /* cali:ElEAukQHaypVGAKa */ /* Drop if no profiles matched */
Chain cali-fw-eni3f83a050a8e (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:PmhUabcvSZQepvlt */ ctstate RELATED,ESTABLISHED
DROP all -- anywhere anywhere /* cali:KyLBFmc4mOu_O9x3 */ ctstate INVALID
MARK all -- anywhere anywhere /* cali:u6lZ_0MpvjH1G8FA */ MARK and 0xfffeffff
DROP udp -- anywhere anywhere /* cali:bX59bkhYl4zWfvKI */ /* Drop VXLAN encapped packets originating in workloads */ multiport dports 4789
DROP ipv4 -- anywhere anywhere /* cali:V3LydqLqrpEC4c3x */ /* Drop IPinIP encapped packets originating in workloads */
cali-pro-_kJqfZpgUe7r2t4A-14 all -- anywhere anywhere /* cali:TJbhU_7W-JrVKadi */
RETURN all -- anywhere anywhere /* cali:I0-UUHKK6rsvNMEk */ /* Return if profile accepted */ mark match 0x10000/0x10000
cali-pro-_4yi5_iSUAwsU8zMHTk all -- anywhere anywhere /* cali:DxUxSqjUaezsLLv8 */
RETURN all -- anywhere anywhere /* cali:3S494S00LUfaIFt8 */ /* Return if profile accepted */ mark match 0x10000/0x10000
DROP all -- anywhere anywhere /* cali:sOd12Uv5tguuNu1O */ /* Drop if no profiles matched */
Chain cali-fw-eni88f62525f10 (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:ayibvsIJ30fx1lLL */ ctstate RELATED,ESTABLISHED
DROP all -- anywhere anywhere /* cali:TTuA1XifRAJaew01 */ ctstate INVALID
MARK all -- anywhere anywhere /* cali:ZAsfovmWHteI35i1 */ MARK and 0xfffeffff
DROP udp -- anywhere anywhere /* cali:lRF-EpMARa3QhCKZ */ /* Drop VXLAN encapped packets originating in workloads */ multiport dports 4789
DROP ipv4 -- anywhere anywhere /* cali:IuNz7Koa81IiRHtT */ /* Drop IPinIP encapped packets originating in workloads */
cali-pro-_HaddP7gqEbX9JcSnYM all -- anywhere anywhere /* cali:EjTGirbYAFre3oIK */
RETURN all -- anywhere anywhere /* cali:NtInNRXYUAWiTz1Z */ /* Return if profile accepted */ mark match 0x10000/0x10000
cali-pro-_Rnp_ESaIC8ZkKN1abf all -- anywhere anywhere /* cali:Uu7gC9GHP2ogFeW9 */
RETURN all -- anywhere anywhere /* cali:_hsVi_MO8SH5_VJr */ /* Return if profile accepted */ mark match 0x10000/0x10000
DROP all -- anywhere anywhere /* cali:xPVxeY-JVVaBy3c4 */ /* Drop if no profiles matched */
Chain cali-fw-enibb9994cc117 (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:h1FnEEVwvh22Da4P */ ctstate RELATED,ESTABLISHED
DROP all -- anywhere anywhere /* cali:KO09buehCHq-9w6L */ ctstate INVALID
MARK all -- anywhere anywhere /* cali:QelLGL7NJw__L9Xz */ MARK and 0xfffeffff
DROP udp -- anywhere anywhere /* cali:k7d5CAktQnQm-0Ls */ /* Drop VXLAN encapped packets originating in workloads */ multiport dports 4789
DROP ipv4 -- anywhere anywhere /* cali:n55A-G71rFN6Ralk */ /* Drop IPinIP encapped packets originating in workloads */
cali-pro-kns.test all -- anywhere anywhere /* cali:8IEgNCucptZ_Djm3 */
RETURN all -- anywhere anywhere /* cali:wba8lbJM9dWxaw42 */ /* Return if profile accepted */ mark match 0x10000/0x10000
cali-pro-ksa.test.default all -- anywhere anywhere /* cali:8gS-XVRcagFaMf2D */
RETURN all -- anywhere anywhere /* cali:EL252kyqwgUHUXkQ */ /* Return if profile accepted */ mark match 0x10000/0x10000
DROP all -- anywhere anywhere /* cali:U4DrGWbl1wA7MzNu */ /* Drop if no profiles matched */
Chain cali-fw-enicacbc4bcc26 (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:KQCim2nX-RhDrQLQ */ ctstate RELATED,ESTABLISHED
DROP all -- anywhere anywhere /* cali:xluqybXG1OyEVE_i */ ctstate INVALID
MARK all -- anywhere anywhere /* cali:BRG4AAYeRS2jpUYH */ MARK and 0xfffeffff
DROP udp -- anywhere anywhere /* cali:sgLEcNHlKJP69Xoo */ /* Drop VXLAN encapped packets originating in workloads */ multiport dports 4789
DROP ipv4 -- anywhere anywhere /* cali:9SxFgNfzncqnO-_8 */ /* Drop IPinIP encapped packets originating in workloads */
cali-pro-_HaddP7gqEbX9JcSnYM all -- anywhere anywhere /* cali:z5On0m2Jem27vNc_ */
RETURN all -- anywhere anywhere /* cali:vb7ifHvDDJT_5CzW */ /* Return if profile accepted */ mark match 0x10000/0x10000
cali-pro-_Rnp_ESaIC8ZkKN1abf all -- anywhere anywhere /* cali:ECZ1LSe8IDRSnnfI */
RETURN all -- anywhere anywhere /* cali:g9oaFyjxRy4H9qLl */ /* Return if profile accepted */ mark match 0x10000/0x10000
DROP all -- anywhere anywhere /* cali:C-kor17xcVcxOqI0 */ /* Drop if no profiles matched */
Chain cali-fw-eniea63e57b06b (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:HTi6joyPB9iU2CBL */ ctstate RELATED,ESTABLISHED
DROP all -- anywhere anywhere /* cali:5Giy3S6P19_Ye2cj */ ctstate INVALID
MARK all -- anywhere anywhere /* cali:Weu0Xyr7X6P9bvmH */ MARK and 0xfffeffff
DROP udp -- anywhere anywhere /* cali:StLYLK_KbCkQ4qd5 */ /* Drop VXLAN encapped packets originating in workloads */ multiport dports 4789
DROP ipv4 -- anywhere anywhere /* cali:zjBdJL4-98Zzg5jz */ /* Drop IPinIP encapped packets originating in workloads */
cali-pro-_y7B7CuWD8IH_K3cXEw all -- anywhere anywhere /* cali:Q2XNGtGXT-T_otOQ */
RETURN all -- anywhere anywhere /* cali:BPoQJEAnR3x81FYV */ /* Return if profile accepted */ mark match 0x10000/0x10000
cali-pro-_13i7kooWeuaj42zOEt all -- anywhere anywhere /* cali:fDEmKE1sNuYDIHY7 */
RETURN all -- anywhere anywhere /* cali:9RvMIuzlMxppDB4R */ /* Return if profile accepted */ mark match 0x10000/0x10000
DROP all -- anywhere anywhere /* cali:6BbGfy_ujoFxzQLV */ /* Drop if no profiles matched */
Chain cali-pi-_3CJ_GmvE9pcCktVJ2ep (2 references)
target prot opt source destination
MARK tcp -- anywhere anywhere /* cali:I0yo8ky1YADcMXRf */ /* Policy calico-apiserver/knp.default.allow-apiserver ingress */ multiport dports spss MARK or 0x10000
RETURN all -- anywhere anywhere /* cali:vcoXnuptt0fs1M8q */ mark match 0x10000/0x10000
Chain cali-pi-_DYtwCKMZFWyxK2RhACP (1 references)
target prot opt source destination
all -- anywhere anywhere /* cali:j53w2cHoSsmy_etr */ /* Policy test/knp.default.default-ingress-egress-deny ingress */
Chain cali-pri-_4yi5_iSUAwsU8zMHTk (2 references)
target prot opt source destination
all -- anywhere anywhere /* cali:ZYnaZZFwsSjfXO4C */ /* Profile ksa.calico-apiserver.calico-apiserver ingress */
Chain cali-pri-_kJqfZpgUe7r2t4A-14 (2 references)
target prot opt source destination
MARK all -- anywhere anywhere /* cali:IQx0SzlDGn6BPv0A */ /* Profile kns.calico-apiserver ingress */ MARK or 0x10000
RETURN all -- anywhere anywhere /* cali:hnAzdaRiFbG1YPHK */ mark match 0x10000/0x10000
Chain cali-pri-_nzzjLvInId1gPHmQz_ (1 references)
target prot opt source destination
all -- anywhere anywhere /* cali:UQoEf2WCdU0bPTCb */ /* Profile ksa.calico-system.calico-kube-controllers ingress */
Chain cali-pri-kns.test (1 references)
target prot opt source destination
MARK all -- anywhere anywhere /* cali:magKEnf4SgmtwjGo */ /* Profile kns.test ingress */ MARK or 0x10000
RETURN all -- anywhere anywhere /* cali:_EBRkbJB7tipBg3t */ mark match 0x10000/0x10000
Chain cali-pri-ksa.test.default (1 references)
target prot opt source destination
all -- anywhere anywhere /* cali:c8EIozTpA__jeS6E */ /* Profile ksa.test.default ingress */
Chain cali-pro-_kJqfZpgUe7r2t4A-14 (2 references)
target prot opt source destination
MARK all -- anywhere anywhere /* cali:_cFTxC141wwWRzyZ */ /* Profile kns.calico-apiserver egress */ MARK or 0x10000
RETURN all -- anywhere anywhere /* cali:dvDa77rVzpVqK7ZF */ mark match 0x10000/0x10000
Chain cali-pro-kns.calico-system (1 references)
target prot opt source destination
MARK all -- anywhere anywhere /* cali:gWxJzCZXxl31NR0P */ /* Profile kns.calico-system egress */ MARK or 0x10000
RETURN all -- anywhere anywhere /* cali:rHIqpX_kWRu4q0wP */ mark match 0x10000/0x10000
Chain cali-pro-kns.test (1 references)
target prot opt source destination
MARK all -- anywhere anywhere /* cali:-Lxi6YFd9Xn8p1kB */ /* Profile kns.test egress */ MARK or 0x10000
RETURN all -- anywhere anywhere /* cali:WTX1T1CNNwhbQxz7 */ mark match 0x10000/0x10000
Chain cali-pro-ksa.test.default (1 references)
target prot opt source destination
all -- anywhere anywhere /* cali:zYQ4AzCRMhnX9sJV */ /* Profile ksa.test.default egress */
Chain cali-to-hep-forward (1 references)
target prot opt source destination
Chain cali-to-host-endpoint (1 references)
target prot opt source destination
Chain cali-to-wl-dispatch (1 references)
target prot opt source destination
cali-to-wl-dispatch-0 all -- anywhere anywhere [goto] /* cali:pwaSDJDieqzJ1qOF */
cali-to-wl-dispatch-1 all -- anywhere anywhere [goto] /* cali:0p7lBqRJYtof0Vq8 */
cali-to-wl-dispatch-2 all -- anywhere anywhere [goto] /* cali:Z6bdrXck8ac9rKV9 */
cali-to-wl-dispatch-3 all -- anywhere anywhere [goto] /* cali:_ntzZoTfFtifcxId */
cali-to-wl-dispatch-4 all -- anywhere anywhere [goto] /* cali:hNs9b5C_wqy87Uzj */
cali-to-wl-dispatch-6 all -- anywhere anywhere [goto] /* cali:Yd3oJVghiZbbu0h2 */
cali-tw-eni738c9293371 all -- anywhere anywhere [goto] /* cali:ZE9lVdf3buhZnHxr */
cali-to-wl-dispatch-8 all -- anywhere anywhere [goto] /* cali:qUC2nVkkTcUREbPi */
cali-to-wl-dispatch-a all -- anywhere anywhere [goto] /* cali:GhFps2KSLNk0XjiI */
cali-to-wl-dispatch-b all -- anywhere anywhere [goto] /* cali:PvNGdhrOMH43D4Se */
cali-to-wl-dispatch-c all -- anywhere anywhere [goto] /* cali:FBoh2D1h0Ep2FNU_ */
cali-to-wl-dispatch-d all -- anywhere anywhere [goto] /* cali:DfVHpGwbV7ku26xm */
cali-to-wl-dispatch-e all -- anywhere anywhere [goto] /* cali:tjNVVpaW8qw7DYB- */
DROP all -- anywhere anywhere /* cali:JDMg3OGJeQzN5WAv */ /* Unknown interface */
Chain cali-to-wl-dispatch-0 (1 references)
target prot opt source destination
cali-tw-eni03531b95e5c all -- anywhere anywhere [goto] /* cali:Up1MiSXyAheAwZhM */
cali-tw-eni0716ba4fdd4 all -- anywhere anywhere [goto] /* cali:Q3hr40iHRQCgyr60 */
cali-tw-eni0c6784024a8 all -- anywhere anywhere [goto] /* cali:oaALnNk27HOL8DPB */
DROP all -- anywhere anywhere /* cali:kvrrmBEOp0Y1ayAl */ /* Unknown interface */
Chain cali-to-wl-dispatch-1 (1 references)
target prot opt source destination
cali-tw-eni1c44d3ec47d all -- anywhere anywhere [goto] /* cali:6LEe9iUuSoEYENgQ */
cali-tw-eni1ed40b160d7 all -- anywhere anywhere [goto] /* cali:PdrpEtCpccWp1b39 */
DROP all -- anywhere anywhere /* cali:KLv1mX2eAg08tTCL */ /* Unknown interface */
Chain cali-to-wl-dispatch-2 (1 references)
target prot opt source destination
cali-tw-eni21a99200642 all -- anywhere anywhere [goto] /* cali:gbHnUu62lEY-9N-a */
cali-tw-eni22595fc2f5f all -- anywhere anywhere [goto] /* cali:ynloocQn5K-l1Ero */
cali-tw-eni2472895fa7d all -- anywhere anywhere [goto] /* cali:nevoaoPa7OfjHlJX */
DROP all -- anywhere anywhere /* cali:GSw0yj9yjwdTQHiJ */ /* Unknown interface */
Chain cali-to-wl-dispatch-3 (1 references)
target prot opt source destination
cali-tw-eni35374971dd7 all -- anywhere anywhere [goto] /* cali:cEYvVboRFpD9-e63 */
cali-tw-eni3e26bf888ec all -- anywhere anywhere [goto] /* cali:ApOjZswF_-QPQ5fN */
cali-tw-eni3f83a050a8e all -- anywhere anywhere [goto] /* cali:8N4nJmE7GJpgO75q */
DROP all -- anywhere anywhere /* cali:loFgQjcOtrdMU-xQ */ /* Unknown interface */
Chain cali-to-wl-dispatch-4 (1 references)
target prot opt source destination
cali-tw-eni47b2183eefd all -- anywhere anywhere [goto] /* cali:-ACqsPr694OrNpMo */
cali-tw-eni4a659be2fba all -- anywhere anywhere [goto] /* cali:aFo3uQ5f6iOVxMJc */
DROP all -- anywhere anywhere /* cali:GVa80M_y1es05zR8 */ /* Unknown interface */
Chain cali-to-wl-dispatch-6 (1 references)
target prot opt source destination
cali-tw-eni63df7f208e6 all -- anywhere anywhere [goto] /* cali:tvF6AS2yVPDpVKOQ */
cali-tw-eni67b96ed3bad all -- anywhere anywhere [goto] /* cali:LIRzdQ4HTJ0nDKKP */
DROP all -- anywhere anywhere /* cali:j5qcp2QNvMKXP5po */ /* Unknown interface */
Chain cali-to-wl-dispatch-8 (1 references)
target prot opt source destination
cali-tw-eni81a17f173bc all -- anywhere anywhere [goto] /* cali:2fbeFFsSRvy90V-P */
cali-tw-eni8638d1b7efd all -- anywhere anywhere [goto] /* cali:vZ4I09emooys1Dht */
cali-tw-eni88f62525f10 all -- anywhere anywhere [goto] /* cali:owK_uS4HQzDuPPM9 */
cali-tw-eni8de7ddc7ed1 all -- anywhere anywhere [goto] /* cali:iOW1hAIWIttKFMwk */
DROP all -- anywhere anywhere /* cali:dnWjOG5v6OMJWV0P */ /* Unknown interface */
Chain cali-to-wl-dispatch-a (1 references)
target prot opt source destination
cali-tw-enia0ec04a7ddc all -- anywhere anywhere [goto] /* cali:s-VT0J4SAGEbT8tv */
cali-tw-enia8383c92c15 all -- anywhere anywhere [goto] /* cali:mZ5eLgqZvb30JChw */
DROP all -- anywhere anywhere /* cali:pkp_NOMELSMjoicW */ /* Unknown interface */
Chain cali-to-wl-dispatch-b (1 references)
target prot opt source destination
cali-tw-enib07743e04e4 all -- anywhere anywhere [goto] /* cali:oIb5ILiPNaUR6UUX */
cali-tw-enib8b8e79f27b all -- anywhere anywhere [goto] /* cali:yzCdZKOftoJ05gf8 */
cali-tw-enibae4261efb6 all -- anywhere anywhere [goto] /* cali:3EFrOIGchZoNFHZR */
cali-tw-enibb9994cc117 all -- anywhere anywhere [goto] /* cali:O7pmcT5mSlLHlGkf */
DROP all -- anywhere anywhere /* cali:i6Qyp9ggt06N_9hE */ /* Unknown interface */
Chain cali-to-wl-dispatch-c (1 references)
target prot opt source destination
cali-tw-enic5541176e77 all -- anywhere anywhere [goto] /* cali:iw_B0RMrSaMe6BBC */
cali-tw-enic951d8cb8d3 all -- anywhere anywhere [goto] /* cali:BuG-LeLiZjPtTA67 */
cali-tw-enicacbc4bcc26 all -- anywhere anywhere [goto] /* cali:us26nlWUjPTallm3 */
DROP all -- anywhere anywhere /* cali:Ba4bPGsBmUVrvhVM */ /* Unknown interface */
Chain cali-to-wl-dispatch-d (1 references)
target prot opt source destination
cali-tw-enid328ab451ad all -- anywhere anywhere [goto] /* cali:zb1sQSByi-IfAqf_ */
cali-tw-enid35349572f4 all -- anywhere anywhere [goto] /* cali:lgcOONLqNhGMkWWb */
cali-tw-enid42e7a65293 all -- anywhere anywhere [goto] /* cali:C1mebEgcGAMNb26L */
cali-tw-enide1990db225 all -- anywhere anywhere [goto] /* cali:63Pa1WWTgvksJ36o */
DROP all -- anywhere anywhere /* cali:EDuUTfjCvGbsCGlh */ /* Unknown interface */
Chain cali-to-wl-dispatch-e (1 references)
target prot opt source destination
cali-tw-enie1a26225264 all -- anywhere anywhere [goto] /* cali:_5rXJWuzmmyV3JKh */
cali-tw-enie2ff26c14db all -- anywhere anywhere [goto] /* cali:SZ6Cyh8N60mV86S1 */
cali-tw-enie4002b0da75 all -- anywhere anywhere [goto] /* cali:7gmutivwQuxcOOOe */
cali-tw-eniea63e57b06b all -- anywhere anywhere [goto] /* cali:CJYvsBKQIHe6dua2 */
cali-tw-enieb9ddca3403 all -- anywhere anywhere [goto] /* cali:TnDBJz8Y1DJmYVMY */
DROP all -- anywhere anywhere /* cali:nDeLofAAtscmYqE5 */ /* Unknown interface */
Chain cali-tw-eni0c6784024a8 (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:CRZeomfKND1bO19Z */ ctstate RELATED,ESTABLISHED
DROP all -- anywhere anywhere /* cali:6e4sbFj5IKCekXeU */ ctstate INVALID
MARK all -- anywhere anywhere /* cali:P-GDyb7NuGhDMsQ7 */ MARK and 0xfffeffff
MARK all -- anywhere anywhere /* cali:EW-S8--SD8SP8P4y */ /* Start of policies */ MARK and 0xfffdffff
cali-pi-_3CJ_GmvE9pcCktVJ2ep all -- anywhere anywhere /* cali:wTd47W0BzG633rXL */ mark match 0x0/0x20000
RETURN all -- anywhere anywhere /* cali:AEzu2xsR7HccH54g */ /* Return if policy accepted */ mark match 0x10000/0x10000
DROP all -- anywhere anywhere /* cali:LCMGRBhWIpOFqUB4 */ /* Drop if no policies passed packet */ mark match 0x0/0x20000
cali-pri-_kJqfZpgUe7r2t4A-14 all -- anywhere anywhere /* cali:Gn2GbhyympDzT9HN */
RETURN all -- anywhere anywhere /* cali:U0g7j2-DAWhS_-HH */ /* Return if profile accepted */ mark match 0x10000/0x10000
cali-pri-_4yi5_iSUAwsU8zMHTk all -- anywhere anywhere /* cali:OZzG-NaQraiq0iZ_ */
RETURN all -- anywhere anywhere /* cali:9GciTqbKgOPmG6Oc */ /* Return if profile accepted */ mark match 0x10000/0x10000
DROP all -- anywhere anywhere /* cali:6O2OAUl6BbSsT75w */ /* Drop if no profiles matched */
Chain cali-tw-eni3e26bf888ec (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:LD78W2k0rOuSVn2T */ ctstate RELATED,ESTABLISHED
DROP all -- anywhere anywhere /* cali:FlTAE-EOuTUwK85h */ ctstate INVALID
MARK all -- anywhere anywhere /* cali:ewZ-xm1zGPiqlN6L */ MARK and 0xfffeffff
cali-pri-kns.calico-system all -- anywhere anywhere /* cali:a_F1eyFZmFZQDZ0D */
RETURN all -- anywhere anywhere /* cali:roEFRHgTT0wtaeIJ */ /* Return if profile accepted */ mark match 0x10000/0x10000
cali-pri-_nzzjLvInId1gPHmQz_ all -- anywhere anywhere /* cali:13uGnpuZS0ht4fAP */
RETURN all -- anywhere anywhere /* cali:XY0JRNyOJSYtt9Ol */ /* Return if profile accepted */ mark match 0x10000/0x10000
DROP all -- anywhere anywhere /* cali:H5wFa5NqmMGVEwXR */ /* Drop if no profiles matched */
Chain cali-tw-eni3f83a050a8e (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:BHhSwmJmmxvBJsX- */ ctstate RELATED,ESTABLISHED
DROP all -- anywhere anywhere /* cali:-IAif6jT9RfcbBbJ */ ctstate INVALID
MARK all -- anywhere anywhere /* cali:2--rknXnxMsAnqLB */ MARK and 0xfffeffff
MARK all -- anywhere anywhere /* cali:YP7WNhxRSckglm39 */ /* Start of policies */ MARK and 0xfffdffff
cali-pi-_3CJ_GmvE9pcCktVJ2ep all -- anywhere anywhere /* cali:NME87xv3ZPcg0vok */ mark match 0x0/0x20000
RETURN all -- anywhere anywhere /* cali:0dn5IxV8g9hDM5v8 */ /* Return if policy accepted */ mark match 0x10000/0x10000
DROP all -- anywhere anywhere /* cali:WCpnTkmEydFVdMhX */ /* Drop if no policies passed packet */ mark match 0x0/0x20000
cali-pri-_kJqfZpgUe7r2t4A-14 all -- anywhere anywhere /* cali:PYf8dpPwlVpeJc_A */
RETURN all -- anywhere anywhere /* cali:TP_041MDzlzaCEtA */ /* Return if profile accepted */ mark match 0x10000/0x10000
cali-pri-_4yi5_iSUAwsU8zMHTk all -- anywhere anywhere /* cali:NLpe0Fkf6B9JcFFO */
RETURN all -- anywhere anywhere /* cali:TsveHWlsQZ3_0lRG */ /* Return if profile accepted */ mark match 0x10000/0x10000
DROP all -- anywhere anywhere /* cali:JnCHHCBdzXFj-YA0 */ /* Drop if no profiles matched */
Chain cali-tw-eni88f62525f10 (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:jlTkrHu5NtGFMbLh */ ctstate RELATED,ESTABLISHED
DROP all -- anywhere anywhere /* cali:Olpb7dol2ZIGgo41 */ ctstate INVALID
MARK all -- anywhere anywhere /* cali:wsTNpNGPyGtFugZf */ MARK and 0xfffeffff
cali-pri-_HaddP7gqEbX9JcSnYM all -- anywhere anywhere /* cali:ADaF6yYSocVKydDB */
RETURN all -- anywhere anywhere /* cali:aSbCl6OOl9TILHUC */ /* Return if profile accepted */ mark match 0x10000/0x10000
cali-pri-_Rnp_ESaIC8ZkKN1abf all -- anywhere anywhere /* cali:Osi_8yLWfsD6x0KO */
RETURN all -- anywhere anywhere /* cali:NHgS-uF-v1v24jbL */ /* Return if profile accepted */ mark match 0x10000/0x10000
DROP all -- anywhere anywhere /* cali:-MSCitKjVslERtow */ /* Drop if no profiles matched */
Chain cali-tw-enibb9994cc117 (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:cXmgMrqLv18lpD-s */ ctstate RELATED,ESTABLISHED
DROP all -- anywhere anywhere /* cali:n28GlgJYx0VdKkKE */ ctstate INVALID
MARK all -- anywhere anywhere /* cali:xYDtS34R_dZrwGET */ MARK and 0xfffeffff
MARK all -- anywhere anywhere /* cali:QawjNbbRdtW15KG6 */ /* Start of policies */ MARK and 0xfffdffff
cali-pi-_DYtwCKMZFWyxK2RhACP all -- anywhere anywhere /* cali:rYjWtMya5MVONeA_ */ mark match 0x0/0x20000
RETURN all -- anywhere anywhere /* cali:f3Lcp66ksW43orQR */ /* Return if policy accepted */ mark match 0x10000/0x10000
DROP all -- anywhere anywhere /* cali:qQNQs_DjNEXASAoa */ /* Drop if no policies passed packet */ mark match 0x0/0x20000
cali-pri-kns.test all -- anywhere anywhere /* cali:hQt6azeGWDj8_Z64 */
RETURN all -- anywhere anywhere /* cali:MQPir_EKGueL-iIG */ /* Return if profile accepted */ mark match 0x10000/0x10000
cali-pri-ksa.test.default all -- anywhere anywhere /* cali:pZgRKfQR_cGF4cC9 */
RETURN all -- anywhere anywhere /* cali:tt4V5LyF6mUXE6Kd */ /* Return if profile accepted */ mark match 0x10000/0x10000
DROP all -- anywhere anywhere /* cali:OBq915g_G8BoSeHt */ /* Drop if no profiles matched */
Chain cali-tw-enicacbc4bcc26 (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:IGh1Sc8B5vDmy3i- */ ctstate RELATED,ESTABLISHED
DROP all -- anywhere anywhere /* cali:L5WzF_ly98LhMoH2 */ ctstate INVALID
MARK all -- anywhere anywhere /* cali:YzPyLsnyYt73bWgL */ MARK and 0xfffeffff
cali-pri-_HaddP7gqEbX9JcSnYM all -- anywhere anywhere /* cali:pMPrRnc4cZfvzPK8 */
RETURN all -- anywhere anywhere /* cali:0OlvylHcD65pP7aM */ /* Return if profile accepted */ mark match 0x10000/0x10000
cali-pri-_Rnp_ESaIC8ZkKN1abf all -- anywhere anywhere /* cali:70zVKVlUx5mVXqxK */
RETURN all -- anywhere anywhere /* cali:Rh5o3NFI_5PPMDtB */ /* Return if profile accepted */ mark match 0x10000/0x10000
DROP all -- anywhere anywhere /* cali:b4ssTU7X-BkaCTCg */ /* Drop if no profiles matched */
Chain cali-tw-eniea63e57b06b (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cali:ciGjidcREFqPMvNW */ ctstate RELATED,ESTABLISHED
DROP all -- anywhere anywhere /* cali:kivgHDNv9NgujZcJ */ ctstate INVALID
MARK all -- anywhere anywhere /* cali:e88cVDNdUL17l36U */ MARK and 0xfffeffff
cali-pri-_y7B7CuWD8IH_K3cXEw all -- anywhere anywhere /* cali:3x0X0XPQ-O2bLacs */
RETURN all -- anywhere anywhere /* cali:4B2HIa7QOwmq5spe */ /* Return if profile accepted */ mark match 0x10000/0x10000
cali-pri-_13i7kooWeuaj42zOEt all -- anywhere anywhere /* cali:oSblX0WGGKtO5UFY */
RETURN all -- anywhere anywhere /* cali:mVZMD8hxzu1f_TMp */ /* Return if profile accepted */ mark match 0x10000/0x10000
DROP all -- anywhere anywhere /* cali:nEVuxaVQyioS6q2o */ /* Drop if no profiles matched */
Chain cali-wl-to-host (1 references)
target prot opt source destination
cali-from-wl-dispatch all -- anywhere anywhere /* cali:Ee9Sbo10IpVujdIY */
ACCEPT all -- anywhere anywhere /* cali:nSZbcOoG1xPONxb8 */ /* Configured DefaultEndpointToHostAction */
cali-pi-_DYtwCKMZFWyxK2RhACP all -- anywhere anywhere /* cali:rYjWtMya5MVONeA_ */ mark match 0x0/0x20000
RETURN all -- anywhere anywhere /* cali:f3Lcp66ksW43orQR */ /* Return if policy accepted */ mark match 0x10000/0x10000
DROP all -- anywhere anywhere /* cali:qQNQs_DjNEXASAoa */ /* Drop if no policies passed packet */ mark match 0x0/0x20000
^ These look like the relevant chains to me. You can see that the policy exists and has the correct rules within it here:
Chain cali-pi-_DYtwCKMZFWyxK2RhACP (1 references)
target prot opt source destination
all -- anywhere anywhere /* cali:j53w2cHoSsmy_etr */ /* Policy test/knp.default.default-ingress-egress-deny ingress */
One thing that is missing in this output is a measure of which rules are being hit - I usually prefer to look at iptables-save -c
since it includes packet counters.
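For example, a hedged sketch of narrowing that output down (the `cali-tw-*` / `cali-pi-*` chain prefixes are taken from the rules above; adjust to taste):

```shell
# Keep only Calico's per-workload (cali-tw-*) and policy (cali-pi-*) rules,
# with their [packets:bytes] counters still attached.
sudo iptables-save -c | grep -E -e '-A cali-(tw|pi)'

# Narrow further to rules that have matched at least one packet.
sudo iptables-save -c | grep -E -e '-A cali-(tw|pi)' | grep -E -e '^\[[1-9]'
```

If the `cali-pi-*` counters stay at `[0:0]` while you send traffic, packets are never reaching the policy chain at all.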
Hi,
so the only lines I found in the output of iptables-save -c
with the relevant chains in them are these:
:cali-pi-_DYtwCKMZFWyxK2RhACP - [0:0]
[0:0] -A cali-pi-_DYtwCKMZFWyxK2RhACP -m comment --comment "cali:j53w2cHoSsmy_etr" -m comment --comment "Policy test/knp.default.default-ingress-egress-deny ingress"
[0:0] -A cali-tw-enibb9994cc117 -m comment --comment "cali:rYjWtMya5MVONeA_" -m mark --mark 0x0/0x20000 -j cali-pi-_DYtwCKMZFWyxK2RhACP
Is that helpful for you?
It seems to suggest that no packets are reaching the default-deny rule, so are probably being accepted / handled earlier in iptables processing.
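One way to confirm that, sketched with the chain name and service names from earlier in this thread (treat them as placeholders for your own):

```shell
# On the node hosting the nginx pod, watch the policy chain's packet counters:
sudo watch -n1 'iptables-save -c | grep cali-pi-_DYtwCKMZFWyxK2RhACP'

# Meanwhile, send traffic that the default-deny policy should block
# (busybox wget; -T is its connect timeout in seconds):
kubectl run probe --rm -i --restart=Never --image=busybox -- \
  wget -qO- -T 3 http://nginx.test.svc.cluster.local
```

If the wget succeeds while the counters stay at zero, the packet path is bypassing Calico's chains entirely.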
Okay, that's what I was suspecting. Why would that be?
Also, I stumbled upon logs like this while researching something else:
timeout or abort while handling: method=GET URI="/apis/projectcalico.org/v3/networkpolicies?allowWatchBookmarks=true&resourceVersion=14995051&timeout=32s&watch=true"
This is from the apiserver. Is that relevant?
Probably not - that error is just the API server's normal timeout for watches, which triggers Calico to restart the watch. Unless you're seeing error logs in the Calico logs that seem correlated, it's probably a red herring.
I think the next step here is just to collect all of the relevant diags at once so I can try to comb through for what might be wrong. Right now I'm missing a few things:
- iptables-save -c output for packet counts (after sending some traffic that you expect to be blocked)
- kubectl get networkpolicy -A -o yaml
I'll also try to find some time to repro this myself, but our automated EKS tests do verify this setup so I expect this might be something specific to your environment.
Hi, so I collected the data and created a few pastebins. Iptables output: https://pastebin.com/LXM1JLaM
Calico/Node logs (only from today; if you want all of them I can post them, but it's really a lot): https://pastebin.com/ZHUKDUkK
And the output of kubectl get networkpolicy -A -o yaml
:
apiVersion: v1
items:
- apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    creationTimestamp: "2022-04-14T09:18:58Z"
    generation: 1
    name: allow-apiserver
    namespace: calico-apiserver
    ownerReferences:
    - apiVersion: operator.tigera.io/v1
      blockOwnerDeletion: true
      controller: true
      kind: APIServer
      name: default
      uid: 0bf998b4-3696-4afc-aa64-b201e534f75d
    resourceVersion: "3940390"
    uid: 47e61da5-20ac-48d6-963a-f3d91663400d
  spec:
    ingress:
    - ports:
      - port: 5443
        protocol: TCP
    podSelector:
      matchLabels:
        apiserver: "true"
    policyTypes:
    - Ingress
- apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"networking.k8s.io/v1","kind":"NetworkPolicy","metadata":{"annotations":{},"name":"default-ingress-egress-deny","namespace":"test"},"spec":{"podSelector":{"matchLabels":{"app":"nginx"}},"policyTypes":["Ingress"]}}
    creationTimestamp: "2022-05-16T08:32:15Z"
    generation: 1
    name: default-ingress-egress-deny
    namespace: test
    resourceVersion: "23738856"
    uid: 1efb2f34-37c6-4ab7-a22f-919087994bf6
  spec:
    podSelector:
      matchLabels:
        app: nginx
    policyTypes:
    - Ingress
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Thanks a lot for looking into this!
[0:0] -A cali-FORWARD -i eni+ -m comment --comment "cali:zSuvpsKZMnLBOFg-" -j cali-from-wl-dispatch
[0:0] -A cali-FORWARD -o eni+ -m comment --comment "cali:ZD6sBvTDTQIcyWe3" -j cali-to-wl-dispatch
Looks like for whatever reason, the Calico to / from workload chains are being skipped entirely. These rules should be matched by any traffic going through any eni*
interfaces. Could you double check that your hosts:
So, I checked the nodes and they have multiple interfaces prefixed with "eni", as I expected. The pods, however, have multiple interfaces too. I did not check all pods, but the ones I checked had interfaces named like "eth0@if107" and "v4if0@if108".
So I guess the pods having more than one interface is a problem?
Potentially - what does your CNI config look like? Are you using multus or something similar to attach pods to multiple networks?
If the pod is sending traffic down an interface that enters the host on a veth (or other interface type) that Calico doesn't know about, it won't be able to enforce policy.
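A hedged way to check which host interface the pod's traffic enters on, using the test namespace from earlier in the thread (the nginx image may lack iproute2, so this sticks to /sys inside the pod):

```shell
# Inside the pod: list its interfaces, and read eth0's host-side peer index.
kubectl exec -n test deploy/nginx-deployment -- ls /sys/class/net
kubectl exec -n test deploy/nginx-deployment -- cat /sys/class/net/eth0/iflink

# On the node: map that index to a host interface name. Calico only enforces
# policy on interface prefixes it knows about (eni* when paired with vpc-cni).
ip -o link | grep '^107:'   # replace 107 with the iflink value printed above
```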
We use vpc-cni, which is installed as a EKS-addon (v1.10.3-eksbuild.1). The config is just the defaults.
Calico is deployed via helm (v3.23.1). The value that was set there is:
installation:
  kubernetesProvider: EKS
Everything else is just the defaults.
Hi, can I do anything else to maybe get to the root of this problem? Shall I provide you with configs or something?
So I guess the pods having more than one interface is a problem? Yes it is. It suggests that you have multiple CNI plugins installed somehow.
aws-vpc-cni creates pod interfaces starting with eni
. Calico creates them starting with cali
. Not sure which plugin creates them like eth0@if107
or v4if0@if108
.
Can we see your CNI config file please? i.e. the file on your hosts in /etc/cni/net.d/
. Or multiple files if present.
I think we need to fix up your CNI config and disable the rogue plugin.
Might be unrelated, but I've seen reports recently where containerd and CRIO have started shipping with their own CNI plugin (which doesn't work in kubernetes).
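A quick way to check for that on a node (assuming the default config directory; yours may differ):

```shell
# kubelet/containerd pick the lexicographically first valid config file in
# this directory, so a stray second file can mean a second CNI plugin:
ls -l /etc/cni/net.d/
for f in /etc/cni/net.d/*.conf*; do
  echo "== $f =="
  cat "$f"
done
```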
I just wanted to provide you with the configs you asked for, but when I connected to one of the nodes there was no /etc/cni/net.d.
I did find a snapshot directory from containerd that projects this directory into a pod, so I guessed it would be mounted inside an aws-vpc-cni pod.
And sure enough, inside those pods I can find a 10-aws.conflist in the mounted directory:
{
  "cniVersion": "0.4.0",
  "name": "aws-cni",
  "disableCheck": true,
  "plugins": [
    {
      "name": "aws-cni",
      "type": "aws-cni",
      "vethPrefix": "eni",
      "mtu": "9001",
      "pluginLogFile": "/var/log/aws-routed-eni/plugin.log",
      "pluginLogLevel": "DEBUG"
    },
    {
      "name": "egress-v4-cni",
      "type": "egress-v4-cni",
      "mtu": 9001,
      "enabled": "true",
      "nodeIP": "x.x.x.x",
      "ipam": {
        "type": "host-local",
        "ranges": [[{"subnet": "x.x.x.x/xx"}]],
        "routes": [{"dst": "0.0.0.0/0"}],
        "dataDir": "/run/cni/v6pd/egress-v4-ipam"
      },
      "pluginLogFile": "/var/log/aws-routed-eni/egress-v4-plugin.log",
      "pluginLogLevel": "DEBUG"
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true},
      "snat": true
    }
  ]
}
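As a side note, a crude way to enumerate the plugin types in a chained conflist like the one above (jq would be cleaner if it's installed):

```shell
# Each object in .plugins has a "type"; a plugin you didn't expect to see here
# would explain extra pod interfaces. The nested ipam "type" (host-local)
# will also show up in this grep; that's expected.
grep -o '"type": *"[^"]*"' /etc/cni/net.d/10-aws.conflist
```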
I wonder how any rogue CNI might have gotten into the cluster, though, as I only ever installed vpc-cni and Calico, nothing else.
Also, thanks for your feedback :)
That file looks pretty close to the template in the vpc-cni repo: https://github.com/aws/amazon-vpc-cni-k8s/blob/master/misc/10-aws.conflist
but when I connected to one of the nodes there was no /etc/cni/net.d. I just found a snapshot directory from containerd, projecting this directory into a pod.
So we need to figure out where on your nodes the CNI config is - /etc/cni/net.d
is the default, and clearly you're not using the default. The CNI plugin is run on the host directly (either by kubelet or by your container runtime), not inside a container or pod, so we need to find where your host is looking for CNI config and what it's running. Look at the kubelet and CNI configs and see if you can find that info.
So, I made a mistake when I first looked for the config.
On Bottlerocket OS, using the admin container does not mean you see the actual contents of the host's root filesystem.
But you can do that if you use sheltie.
And it turns out that /etc/cni/net.d/ is present and contains the config I posted earlier.
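For anyone else on Bottlerocket, the steps described here look roughly like this (sheltie is Bottlerocket's helper that drops you from the admin container into a root shell in the host's root filesystem):

```shell
# From within the Bottlerocket admin container:
sudo sheltie

# Now inside the host's actual root filesystem:
ls /etc/cni/net.d/
cat /etc/cni/net.d/10-aws.conflist
```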
Hello,
I just found this issue while troubleshooting a very similar behavior on our side and thought sharing might help. We are running EKS 1.21 with vpc-cni and Calico. We migrated this morning from the old Calico YAML files to the Calico Helm chart 3.23.1 without major issues (just a few hiccups with an annotation and node selector being present, and with CPU limits making the readiness probe always fail on the calico-node pods; we removed the limits for now and it is fine).
When we tried out a basic namespace deny-all policy to validate that Calico was still working, we ran into trouble :) We created a basic HTTP deployment + service + ingress answering 200/OK, which worked without issues. Then we added a network policy to deny all traffic in the namespace, which should block the traffic from nginx to the pod. At first nothing happened. We looked at the calico-node logs and could see the policy update being mentioned, and only after a good 2 minutes was the traffic blocked. Then we removed the deny-all policy from the namespace, and now the traffic is still stuck between nginx and the pod.
I checked /etc/cni/net.d/
and there is one file called 10-aws.conflist
with the following content:
{
  "cniVersion": "0.3.1",
  "name": "aws-cni",
  "plugins": [
    {
      "name": "aws-cni",
      "type": "aws-cni",
      "vethPrefix": "eni",
      "mtu": "9001",
      "pluginLogFile": "/var/log/aws-routed-eni/plugin.log",
      "pluginLogLevel": "DEBUG"
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true},
      "snat": true
    }
  ]
}
I checked the pod interfaces as well; there is only one interface attached to the pod, and its host side is an interface called eni<something>, which looks like what you said it should look like.
I also checked the iptables-save -c output and found the rule, but the packet counters are at 0 even though nginx is still unable to talk to the pod.
If there are tests you want me to perform, let me know :) Cheers
Any news on this? Is more info necessary on something?
Hey, is somebody still looking into this?
@Linutux42 did you maybe find a solution or workaround?
Hello, I'm having the same issue using EKS version 1.22 with amazon optimized AMI and VPC CNI + Calico plugin. I'd love to know if anyone has found out how to fix this.
@Linutux42 did you maybe find a solution or workaround?
@Furragen I haven't had time to investigate more for now, sorry :/
Hi, same problem here. I'm following the stars policy demo instructions, and when I get to adding the default-deny policy, it has no effect.
My environment is:
Hello,
I upgraded my EKS clusters to 1.22 a few days ago and found a few minutes to test the migration to the official tigera-operator Helm chart v3.24.1 again. I tried to spawn a pod in a namespace with a deny-all network policy and I couldn't reach anything within that pod. The second I removed the netpol, everything was working as planned. So something fixed the issue between k8s 1.21 and 1.22, or between tigera-operator v3.21.4 and v3.24.1. I can't tell you what fixed it, or which of the two fixed it, and unfortunately I don't have time to investigate further for the moment.
@Furragen Try with v3.24.1 and see if it works ¯\_(ツ)_/¯
Any update on this? I am stuck with the same issue and not able to enforce NetworkPolicy. It works sometimes, but not every time I reapply.
Greetings,
We are having the same issue but only with one cluster out of two:
1) both EKS clusters are 1.21
2) identical in every configuration/services deployed
3) same tigera operator version/helm chart
Closing as stale, please reopen if you are still seeing these issues.
EKS 1.27, VPC-CNI, Calico 3.26.1. Behavior seems to be quite random. Some policies work as expected. Some services just time out without any policies, and others work fine.
I am seeing this with EKS 1.24 and VPC-CNI with Calico 3.26.1... Exactly as @aadamovich describes: sometimes things are blocked, sometimes they are not. Results are very unpredictable.
Official documentation now states:
You can't use IPv6 with the Calico network policy engine add-on.
@tomastigera can you please re-open the issue; at present this is the only one I can find tracking IPv6 netpols with the EKS provider, and it definitely doesn't work, even with ipv6Support=true
in the felixconfiguration.
In my case I am not using IPv6 and followed the Calico/AWS EKS doc to the letter, and nothing works... The two things that I wonder might be affecting it are the use of secondary 100.x.x.x subnets for pods, and these settings:
- name: ENABLE_IPv6
  value: "false"
- name: ENABLE_POD_ENI
  value: "true"
- name: POD_SECURITY_GROUP_ENFORCING_MODE
  value: standard
- name: ANNOTATE_POD_IP
  value: "true"
- name: AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG
  value: "true"
I also see the following errors in calico-node logs:
2023-07-28 17:13:11.977 [INFO][281] felix/int_dataplane.go 1836: Received proto.WorkloadEndpointUpdate update from calculation graph msg=id:<orchestrator_id:"k8s" workload_id:"default/mycurlpod" endpoint_id:"eth0" > endpoint:<state:"active" name:"calie49fb8bfb10" profile_ids:"kns.default" profile_ids:"ksa.default.default" ipv4_nets:"100.64.185.16/32" tiers:<name:"default" ingress_policies:"default.deny-app-policy" ingress_policies:"default/default.default-deny" egress_policies:"default.deny-app-policy" egress_policies:"default/default.default-deny" > >
2023-07-28 17:13:11.977 [INFO][281] felix/endpoint_mgr.go 602: Updating per-endpoint chains. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"default/mycurlpod", EndpointId:"eth0"}
2023-07-28 17:13:11.977 [INFO][281] felix/table.go 508: Queueing update of chain. chainName="cali-tw-calie49fb8bfb10" ipVersion=0x4 table="filter"
2023-07-28 17:13:11.977 [INFO][281] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-pi-_X9zpILksT6bKpI67m0p"
2023-07-28 17:13:11.977 [INFO][281] felix/table.go 508: Queueing update of chain. chainName="cali-fw-calie49fb8bfb10" ipVersion=0x4 table="filter"
2023-07-28 17:13:11.977 [INFO][281] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-po-_X9zpILksT6bKpI67m0p"
2023-07-28 17:13:11.977 [INFO][281] felix/endpoint_mgr.go 648: Updating endpoint routes. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"default/mycurlpod", EndpointId:"eth0"}
2023-07-28 17:13:11.977 [INFO][281] felix/endpoint_mgr.go 1283: Skipping configuration of interface because it is oper down. ifaceName="calie49fb8bfb10"
2023-07-28 17:13:11.977 [INFO][281] felix/endpoint_mgr.go 490: Re-evaluated workload endpoint status adminUp=true failed=false known=true operUp=false status="down" workloadEndpointID=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"default/mycurlpod", EndpointId:"eth0"}
2023-07-28 17:13:11.977 [INFO][281] felix/status_combiner.go 58: Storing endpoint status update ipVersion=0x4 status="down" workload=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"default/mycurlpod", EndpointId:"eth0"}
2023-07-28 17:13:11.978 [INFO][281] felix/route_table.go 1185: Failed to access interface because it doesn't exist. error=Link not found ifaceName="calie49fb8bfb10" ifaceRegex="^cali." ipVersion=0x4 tableIndex=254
2023-07-28 17:13:11.978 [INFO][281] felix/route_table.go 1253: Failed to get interface; it's down/gone. error=Link not found ifaceName="calie49fb8bfb10" ifaceRegex="^cali." ipVersion=0x4 tableIndex=254
2023-07-28 17:13:11.978 [INFO][281] felix/route_table.go 589: Interface missing, will retry if it appears. ifaceName="calie49fb8bfb10" ifaceRegex="^cali." ipVersion=0x4 tableIndex=254
Global Policy:
[root@ip-10-184-9-79 ~]# kubectl get globalnetworkpolicy -o yaml apiVersion: v1 items:
Does anyone know if this is the proper doc to follow to enable Kubernetes NetworkPolicies using Calico with EKS? AWS doesn't seem to reference purging the aws-node daemonset.
https://docs.aws.amazon.com/eks/latest/userguide/calico.html - doesn't reference deleting the "aws-node" daemonset
Whereas the Calico doc says to purge "aws-node": https://docs.tigera.io/calico/latest/getting-started/kubernetes/managed-public-cloud/eks#install-eks-with-amazon-vpc-networking
Once aws-node is purged and the rest of the instructions are followed here NetworkPolicies work...
Since this cluster will use Calico for networking, you must delete the aws-node daemon set to disable AWS VPC networking for pods.
kubectl delete daemonset -n kube-system aws-node
I think there might be some confusion.
https://docs.tigera.io/calico/latest/getting-started/kubernetes/managed-public-cloud/eks
talks about 2 completely different ways to use Calico with an EKS cluster.
One is combining Calico with AWS-VPC-CNI (which is installed by aws-node).
The other is removing AWS-VPC-CNI and replacing it with Calico CNI.
Calico with VPC-CNI installation instructions are here: https://docs.aws.amazon.com/eks/latest/userguide/calico.html If you're using this, you MUST NOT delete the aws-node daemonset.
Calico with Calico-cni installation instructions are here: https://docs.tigera.io/calico/latest/getting-started/kubernetes/managed-public-cloud/eks#install-eks-with-calico-networking
Both methods should "just work". Both methods have pros and cons (see the doc).
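A small sketch for telling the two modes apart on an existing cluster (the helper name is made up; the resource names are the documented defaults):

```shell
# Hypothetical helper: infer the intended mode from whether the aws-node
# DaemonSet (installed by the VPC CNI add-on) exists in kube-system.
cni_mode() {
  if kubectl get daemonset -n kube-system aws-node >/dev/null 2>&1; then
    echo "vpc-cni networking + calico policy-only"
  else
    echo "calico networking (aws-node removed)"
  fi
}
cni_mode
```

Whichever mode the check reports should match the install instructions you followed; a mismatch (e.g. Calico-CNI docs followed but aws-node still present) is exactly the kind of mixed state described above.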
Hi,
We are facing similar issues: sometimes the connection is successful and sometimes it fails. Any response is much appreciated.
I have an EKS cluster in which I deployed Calico for network policy enforcement. To test that everything works, I created a test namespace with a pod in it, and a network policy that denies all ingress traffic. If I now try to reach this pod from another pod in another namespace, it just works.
Expected Behavior
I expected the pod to not be reachable.
Current Behavior
The pod is reachable.
Possible Solution
Steps to Reproduce (for bugs)
Context
I want to isolate workloads in different namespaces from each other, which is not possible now.
Your Environment