defo89 opened 10 months ago
While digging through the issues I stumbled across this comment: https://github.com/projectcalico/calico/issues/2900#issuecomment-537688234

After adding `USE_POD_CIDR=true` to the calico/node DaemonSet and the Typha deployment, I observe the intended behaviour: individual /32 pod IPs are no longer announced, and the /24 podCIDR prefix is.
```
switch# show bgp vrf net-mgmt:net-mgmt ipv4 unicast neighbors 10.10.10.136 routes
Peer 10.10.10.136 routes for address family IPv4 Unicast:
BGP table version is 1973, local router ID is 10.10.10.253
Status: s-suppressed, x-deleted, S-stale, d-dampened, h-history, *-valid, >-best
Path type: i-internal, e-external, c-confed, l-local, a-aggregate, r-redist, I-injected
Origin codes: i - IGP, e - EGP, ? - incomplete, | - multipath, & - backup

   Network            Next Hop            Metric     LocPrf     Weight Path
*>i10.11.11.224/27    10.10.10.136                      100          0 i
*>i100.64.13.0/24     10.10.10.136                      100          0 i
```
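In case it helps anyone else, the change amounts to setting the variable on both workloads. A minimal sketch, assuming a manifest-based install with the stock resource names in kube-system (operator-managed installs live in calico-system, and the operator will revert manual edits):

```sh
kubectl -n kube-system set env daemonset/calico-node USE_POD_CIDR=true
kubectl -n kube-system set env deployment/calico-typha USE_POD_CIDR=true
```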
I hope this is the expected fix and that this option will not be deprecated (since I could not find it in the docs)?
Curious that it isn't in the docs, but yes - if you're using host-local IPAM and not using Calico IPAM, then you should set `USE_POD_CIDR=true` to get the behavior you're expecting. It isn't a deprecated field. You can also see that our tigera/operator code sets the env var: https://github.com/tigera/operator/blob/1ccfdba7c9ddc0eb558e5829998546a6d9aedf64/pkg/render/node.go#L1478-L1485
Sounds like a case of missing docs - perhaps after the somewhat recent docs refactor (CC @ctauchen)
For completeness: when using Calico IPAM, route aggregation happens without this environment variable, because Calico controls where IPs and blocks are allocated within the cluster and doesn't need to be told to use the node.Spec.PodCIDR alternative.
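If you want to see the blocks in question, calicoctl can list them (assuming a working calicoctl setup for your datastore):

```sh
# Show IP pools plus the per-node blocks allocated from them
calicoctl ipam show --show-blocks
```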
Thank you @caseydavenport, and thanks also for confirming that the field is not deprecated.
Expected Behavior
I wasn't able to find the expected behaviour described in existing issues - apologies if I missed an obvious config option.
Given that pod IPs don't move between nodes, it would make sense for each node to advertise its pod subnet, for easier troubleshooting and smaller routing tables in upstream switching fabrics.
Current Behavior
Currently the routing table contains individual /32 pod IPs and no /24 podCIDR; all /32s are exported upstream.
The upstream switch/BGP peer gets:
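(Made-up example rows for illustration - the real output has the same table format as above, just with one /32 route per pod:)

```
*>i100.64.13.2/32     10.10.10.136                      100          0 i
*>i100.64.13.3/32     10.10.10.136                      100          0 i
*>i100.64.13.4/32     10.10.10.136                      100          0 i
```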
Possible Solution
Advertise an aggregate/summary route that is equal to the node's .spec.podCIDR.
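For reference, the per-node value can be listed directly from the Kubernetes API; this sketch just reads node.spec.podCIDR for every node:

```sh
kubectl get nodes -o custom-columns=NODE:.metadata.name,PODCIDR:.spec.podCIDR
```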
Your Environment
Native routing with BGP, no IPIP/VXLAN; pod IPAM is host-local.
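For context, a host-local setup like this one typically carries an ipam stanza along these lines in the CNI conflist (a sketch using the usePodCidr convention from the Calico docs; the actual file will have more fields):

```json
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "ipam": {
        "type": "host-local",
        "subnet": "usePodCidr"
      },
      "policy": { "type": "k8s" },
      "kubernetes": { "kubeconfig": "/etc/cni/net.d/calico-kubeconfig" }
    }
  ]
}
```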