
TCP timeout in idle session on Calico eBPF. #9372

Open xander-sh opened 1 month ago

xander-sh commented 1 month ago

Expected Behavior

Packets sent on an established TCP session after a long idle period should still reach the peer, as long as the socket on the sending side remains open.

Current Behavior

We noticed that after a long period of inactivity within a TCP session, packets subsequently sent on that same session no longer reach the recipient. The packets are visible as outgoing on the source side but are absent on the receiving side. This leads to TCP retransmissions and, eventually, the sender tearing down the session once the retransmission timeout (RTO) is exhausted. Meanwhile, the socket through which the session was established remains open.

Additionally, the calico-node debug logs show the following messages.

Updating/creating an entry in the Calico conntrack map:

2024-10-08T16:46:11.803233776+03:00 stdout F 2024-10-08 13:46:11.803 [DEBUG][45] felix/scanner.go 103: Examining conntrack entry entry=Entry{Type:0, Created:42928133306698124, LastSeen:42928133353918870, Flags: <none> Data: {A2B:{Bytes:0 Packets:0 Seqno:2327455211 SynSeen:true AckSeen:true FinSeen:false RstSeen:false Approved:true Opener:true Ifindex:23} B2A:{Bytes:0 Packets:0 Seqno:257842865 SynSeen:true AckSeen:true FinSeen:false RstSeen:false Approved:true Opener:false Ifindex:11} OrigDst:0.0.0.0 OrigSrc:0.0.0.0 OrigPort:0 OrigSPort:0 TunIP:0.0.0.0}} key=ConntrackKey{proto=6 10.222.41.81:37738 <-> 10.222.2.82:50051}

Deleting the entry from the Calico conntrack map:

2024-10-08T17:45:59.306023078+03:00 stdout F 2024-10-08 14:45:59.304 [DEBUG][45] felix/scanner.go 103: Examining conntrack entry entry=Entry{Type:0, Created:42928133306698124, LastSeen:42928133353918870, Flags: <none> Data: {A2B:{Bytes:0 Packets:0 Seqno:2327455211 SynSeen:true AckSeen:true FinSeen:false RstSeen:false Approved:true Opener:true Ifindex:23} B2A:{Bytes:0 Packets:0 Seqno:257842865 SynSeen:true AckSeen:true FinSeen:false RstSeen:false Approved:true Opener:false Ifindex:11} OrigDst:0.0.0.0 OrigSrc:0.0.0.0 OrigPort:0 OrigSPort:0 TunIP:0.0.0.0}} key=ConntrackKey{proto=6 10.222.41.81:37738 <-> 10.222.2.82:50051}
2024-10-08T17:45:59.306105006+03:00 stdout F 2024-10-08 14:45:59.304 [DEBUG][45] felix/cleanup.go 135: Deleting expired normal conntrack entry reason="no traffic on established flow for too long"
2024-10-08T17:45:59.306118295+03:00 stdout F 2024-10-08 14:45:59.305 [DEBUG][45] felix/scanner.go 109: Deleting conntrack entry.
2024-10-08T17:45:59.306127297+03:00 stdout F 2024-10-08 14:45:59.305 [DEBUG][45] felix/syscall.go 159: DeleteMapEntry(30, [6 0 0 0 10 222 41 81 10 222 2 82 106 147 131 195])
2024-10-08T17:45:59.306136025+03:00 stdout F 2024-10-08 14:45:59.305 [DEBUG][45] felix/syscall.go 119: Map metadata fd=0x1e mapInfo=&maps.MapInfo{Type:1, KeySize:16, ValueSize:88, MaxEntries:512000} 

After analyzing the code, we found the timeout values that determine how long an idle TCP session's conntrack entry is kept before being deleted, in particular TCPEstablished: https://github.com/projectcalico/calico/blob/e15aeccf9d81a25a69a2bd8ab604cf8e039e0351/felix/bpf/conntrack/cleanup.go#L135

https://github.com/projectcalico/calico/blob/fbd2c734ddefc99d5dca5540f70e49ca43e22b64/felix/bpf/conntrack/cleanup.go#L48C3-L48C17

func DefaultTimeouts() Timeouts {
    return Timeouts{
        CreationGracePeriod: 10 * time.Second,
        TCPPreEstablished:   20 * time.Second,
        TCPEstablished:      time.Hour, // idle established TCP entries are cleaned up after one hour
        TCPFinsSeen:         30 * time.Second,
        TCPResetSeen:        40 * time.Second,
        UDPLastSeen:         60 * time.Second,
        GenericIPLastSeen:   600 * time.Second,
        ICMPLastSeen:        5 * time.Second,
    }
}
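
For illustration, the cleanup reason in the logs above ("no traffic on established flow for too long") corresponds to an established entry whose last-seen timestamp is older than TCPEstablished. Below is a minimal sketch of that decision, not Felix's actual code; the entryExpired helper and its signature are assumptions made for this example:

package main

import (
    "fmt"
    "time"
)

// Timeouts mirrors the relevant field from felix/bpf/conntrack/cleanup.go.
type Timeouts struct {
    TCPEstablished time.Duration
}

// entryExpired is a hypothetical helper: it reports whether an established
// TCP conntrack entry should be deleted, given how long ago its last packet
// was seen.
func entryExpired(t Timeouts, sinceLastSeen time.Duration) (bool, string) {
    if sinceLastSeen > t.TCPEstablished {
        // Matches the cleanup reason seen in the calico-node debug logs above.
        return true, "no traffic on established flow for too long"
    }
    return false, ""
}

func main() {
    t := Timeouts{TCPEstablished: time.Hour}
    // An established flow that has been idle for 61 minutes, as in the logs above.
    expired, reason := entryExpired(t, 61*time.Minute)
    fmt.Println(expired, reason)
}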

Possible Solution

Steps to Reproduce (for bugs)

Deploy any client/server application, establish a TCP session, and do not send any packets over it for 60 minutes. After that, all new outgoing packets on that session are dropped inside Calico conntrack.
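
A minimal client-side sketch of these steps, assuming a plain TCP service is reachable at serverAddr (the address, the idle duration, and the bare-bones error handling are illustrative choices):

package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    // Placeholder address for any TCP service running in another pod.
    serverAddr := "10.222.2.82:50051"

    // Negative KeepAlive disables TCP keepalive probes, so nothing
    // refreshes the conntrack entry while the session is idle.
    d := net.Dialer{KeepAlive: -1}
    conn, err := d.Dial("tcp", serverAddr)
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    // Stay idle past the one-hour TCPEstablished timeout.
    time.Sleep(61 * time.Minute)

    // This write leaves the source pod but never arrives at the server;
    // the local socket stays open while TCP keeps retransmitting until RTO.
    if _, err := conn.Write([]byte("ping\n")); err != nil {
        fmt.Println("write error:", err)
    }
}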

Context

Unpredictable behavior of applications with long-lived TCP sessions that do not use keepalive mechanisms such as tcp_keepalive, gRPC pings, etc.
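
On the application side, enabling TCP keepalives keeps periodic packets flowing so the conntrack entry's LastSeen is refreshed before the one-hour timeout. A sketch using Go's standard net.Dialer (the 30-second interval is an arbitrary choice; it only needs to be comfortably below TCPEstablished, and the address is again a placeholder):

package main

import (
    "net"
    "time"
)

// dialWithKeepalive opens a TCP connection with kernel keepalive probes
// enabled, so an otherwise idle session still generates traffic periodically.
func dialWithKeepalive(addr string) (net.Conn, error) {
    d := net.Dialer{KeepAlive: 30 * time.Second}
    return d.Dial("tcp", addr)
}

func main() {
    conn, err := dialWithKeepalive("10.222.2.82:50051")
    if err != nil {
        panic(err)
    }
    defer conn.Close()
}

gRPC clients can get the same effect with keepalive.ClientParameters from google.golang.org/grpc/keepalive.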

Your Environment

fasaxc commented 1 month ago

It would be great to have configuration for those; I've also considered defaulting to the values used in the kernel's sysctls so we pick up the timeouts that Linux itself would use.
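
For reference, the kernel's own timeout for established TCP conntrack entries is exposed as a sysctl (net.netfilter.nf_conntrack_tcp_timeout_established, typically 432000 seconds, i.e. 5 days, and only present when the nf_conntrack module is loaded). A sketch of reading it from /proc, just to show which value "defaulting to the sysctls" would pick up:

package main

import (
    "fmt"
    "os"
    "strconv"
    "strings"
)

func main() {
    // Present only when the nf_conntrack module is loaded on the node.
    const path = "/proc/sys/net/netfilter/nf_conntrack_tcp_timeout_established"

    raw, err := os.ReadFile(path)
    if err != nil {
        panic(err)
    }
    secs, err := strconv.Atoi(strings.TrimSpace(string(raw)))
    if err != nil {
        panic(err)
    }
    fmt.Printf("kernel established-TCP conntrack timeout: %d seconds\n", secs)
}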

That said, when running on a platform like k8s where pods are ephemeral, I strongly recommend using keepalives of some kind. It's always possible for traffic to get lost somewhere if a node or network element fails. If you can't detect that end-to-end then eventually you'll hit this problem.

We could try to apply policy to mid-flow packets that have lost their conntrack entry, but that depends on the original sender being the one that sends the next packet (otherwise it'll look like a new flow in the opposite direction, which may be subject to different policy).