[Open] shapirus opened this issue 6 months ago
Historically, I have been setting nonMasqueradeCIDR to a value of my choice (e.g. 10.1.0.0/16) for the cluster's internal address allocation, to prevent addresses from a non-private network space (100.64.0.0/whatever) from being used for anything internal.
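For reference, this is roughly how I have been setting it in the cluster spec (everything else omitted, and the CIDR is just an example value):

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  name: example.k8s.local
spec:
  # keep all internal (non-masqueraded) traffic inside a private range of my choice
  nonMasqueradeCIDR: 10.1.0.0/16
```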
This, however, failed to work for me when I tried to create a new cluster with kops 1.28.4: while debugging CNI initialization issues, I noticed multiple references to 100.64.x.x and 100.96.x.x addresses in the logs, so some behavior must have changed.
I then searched the internet for what was going on, only to discover that this is either not documented at all or documented poorly and in fragments.
Here's what I was able to find so far:
Confusing information: it was stated (somewhere) that nonMasqueradeCIDR either must not, or is not recommended to, overlap the other two. This does not make sense to me: why can't non-masqueraded routing be used for internal addresses? I also found that at some point nonMasqueradeCIDR stopped being used to derive the address space for pods and services. What is its purpose now, then?
Another source of confusion is the rules for podCIDR and serviceClusterIPRange: can they be the same? Can they overlap? What is their actual purpose? If they are both used to allocate addresses for pods and services on the internal k8s network, why are they not called "podCIDR" and "serviceCIDR", or "podClusterIPRange" and "serviceClusterIPRange", to avoid confusion?
What is, in general, the recommended approach to setting a custom subnet for internal k8s network addressing now? (A sketch of the kind of configuration I mean follows below.)
None of this is documented properly; one would have to dig into the source code to understand it, or else I have failed miserably at searching for the documentation.
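For concreteness, the kind of fully explicit configuration whose semantics I would like to see documented looks something like this. The values are placeholders (and I am assuming networkCIDR is still the field for the VPC/node network); I am not even sure whether this combination of overlapping ranges is valid, which is exactly the question:

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  name: example.k8s.local
spec:
  networkCIDR: 172.20.0.0/16           # the VPC / node network
  nonMasqueradeCIDR: 10.1.0.0/16       # what exactly does this control now?
  podCIDR: 10.1.0.0/17                 # may this overlap nonMasqueradeCIDR? serviceClusterIPRange?
  serviceClusterIPRange: 10.1.128.0/17
```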