Closed · henryzhao95 closed this issue 3 weeks ago
Yes, Calico is a stateful firewall; we track connections in the kernel's connection tracking ("conntrack") table. You can see conntrack entries with `conntrack -L` to list them all, or with `conntrack -E` to watch for changes.
The fact that the denied packets are in the reverse direction suggests that there was a previous connection that was being tracked, but its conntrack entry was cleaned up. This could happen for a few reasons; the relevant TCP conntrack timeouts are:
```
net.netfilter.nf_conntrack_tcp_timeout_close = 10
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
net.netfilter.nf_conntrack_tcp_timeout_established = 432000
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_last_ack = 30
net.netfilter.nf_conntrack_tcp_timeout_max_retrans = 300
net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 60
net.netfilter.nf_conntrack_tcp_timeout_syn_sent = 120
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_unacknowledged = 300
```
The `net.netfilter.nf_conntrack_tcp_timeout_established` timeout applies to connections that were fully established. It is typically very long (days), but connections that are silent for a long time do hit it.
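To put that number in perspective, a quick sketch using the `nf_conntrack_tcp_timeout_established` value shown above:

```python
# Sketch: how long an idle established connection survives in conntrack,
# using the nf_conntrack_tcp_timeout_established value from the sysctl
# dump above.
ESTABLISHED_TIMEOUT_S = 432_000  # seconds

days = ESTABLISHED_TIMEOUT_S / 86_400  # 86 400 seconds per day
print(f"established entries expire after {days:.0f} days of silence")
# -> established entries expire after 5 days of silence
# A connection that sits idle longer than this loses its conntrack entry;
# the next packet in either direction is then evaluated as a *new* flow.
```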
@henryzhao95 Closing this issue. Reach out if you have more information.
Expected Behavior
We have Argo CD running in numerous Kubernetes clusters. This includes:

- argocd-redis-ha-server StatefulSet pods with a redis container listening on port 6379
- argocd-redis-ha-server StatefulSet pods with a sentinel container listening on port 26379
- argocd-redis-ha-haproxy ReplicaSet pods with a redis container listening on ports 6379 and 9101, fronted by a Kubernetes Service

We have Calico NetworkPolicies in place to allow ingress to these ports, for example:
And so we expect Argo to work, with nothing being denied. (We have a log & deny all rule at the end too.)
Current Behavior
From time to time (roughly once a month per cluster), randomly, and not coinciding with new calico-node or Argo pods, we will see a burst of 3 blocked Argo flows spaced roughly 100 seconds apart, e.g. one at 4:57:39 pm, one at 4:59:19 pm, and one at 5:01:00 pm.

These blocked flows report the inverse of the flow we'd normally expect. For example:

Blocked: argocd-redis-ha-server:26379 --> argocd-redis-ha-haproxy:40962
Expected flow: argocd-redis-ha-haproxy:40962 --> argocd-redis-ha-server:26379

Blocked: argocd-redis-ha-server:6379 --> argocd-redis-ha-haproxy:51418
Expected flow: argocd-redis-ha-haproxy:51418 --> argocd-redis-ha-server:6379
I don't see anything out of the ordinary in the Calico pod logs. My understanding of networking is weak, but it feels like Calico, which should be stateful, is potentially losing track of the state of the network flows? Is that possible? Or are there any other theories?
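The "losing track of state" idea can be made concrete with a toy model (this is not Calico's actual code, just a sketch of stateful policy semantics): reply packets are allowed only while the original flow's conntrack entry still exists, so once the entry expires, the reverse-direction packet is evaluated as a new flow and denied.

```python
# Toy model of stateful policy evaluation (NOT Calico's implementation).
# Policy below: allow new ingress flows only to the sentinel port 26379.
allowed_ingress = {("server", 26379)}

def evaluate(conntrack, src, sport, dst, dport):
    """Return 'allow' or 'deny' for one packet, toy conntrack semantics."""
    # Reply direction: allowed while the original flow is still tracked.
    if (dst, dport, src, sport) in conntrack:
        return "allow"
    # New flow: allowed only if policy permits it, and then tracked.
    if (dst, dport) in allowed_ingress:
        conntrack.add((src, sport, dst, dport))
        return "allow"
    return "deny"

ct = set()
print(evaluate(ct, "haproxy", 40962, "server", 26379))  # allow: new flow
print(evaluate(ct, "server", 26379, "haproxy", 40962))  # allow: tracked reply
ct.clear()  # entry expires (e.g. idle past the established timeout)
print(evaluate(ct, "server", 26379, "haproxy", 40962))  # deny: reverse of expected
```

The last call reproduces the pattern in the report: the denied flow is the server-to-client direction of a connection that was originally allowed client-to-server.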
Possible Solution
Steps to Reproduce (for bugs)
1.
2.
3.
4.
Context
Your Environment