Closed: jeffyjf closed this issue 6 months ago.
I think CPO expects CAPO or other deployment tool to manage this. Why can't it be done there?
As I mentioned here, it is the route controller's duty to ensure that containers on different nodes in the same Kubernetes cluster can communicate with each other.
@jeffyjf are you sure that the problem lies in node's security groups? could it be related to #2491?
> @jeffyjf are you sure that the problem lies in node's security groups?
Yep, I'm sure. I've already tested it; adding an extra rule to the node's security group resolves this issue.
> could it be related to #2491?
They are different issues. For ingress network traffic, the AllowedAddressPairs is only used to check the destination address, while the SecurityGroupRule is used to check the source address. Both must be set for a new node.
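A minimal sketch of the two checks described above, done manually with the OpenStack CLI. The pod CIDRs, security group name, and port ID are illustrative placeholders, not values from this issue:

```shell
# AllowedAddressPairs side: let the node's Neutron port carry traffic
# addressed to its pod CIDR (destination-address check).
openstack port set --allowed-address ip-address=10.244.1.0/24 <node-port-id>

# SecurityGroupRule side: admit ingress traffic whose source is another
# node's pod CIDR (source-address check).
openstack security group rule create --ingress \
  --remote-ip 10.244.2.0/24 k8s-node-secgroup
```

Both commands must succeed for a new node, which is why setting only the allowed-address pair (as in #2491) is not enough.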
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
I used CAPO to deploy a two-node cluster, and started OCCM with the configuration below:
And the CNI configuration was as follows:
However, the pods on node1 cannot reach the pods on node2, and vice versa.
What you expected to happen:
All pods can reach each other.
How to reproduce it:
Deploy a multi-node cluster and configure OCCM and the CNI plugin as above.
Anything else we need to know?:
IMO, this is because the node's security group has no ingress rule permitting packets from other nodes' pods to pass through directly.
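A sketch of how this could be confirmed and worked around manually until the route controller manages such rules. The security group name and pod CIDRs are placeholders, not values from this cluster:

```shell
# Inspect the node's security group: if no ingress rule matches the other
# nodes' pod CIDRs, cross-node pod packets are dropped at the node's port.
openstack security group rule list k8s-node-secgroup

# Manual workaround: add one ingress rule per peer node's pod CIDR.
for cidr in 10.244.1.0/24 10.244.2.0/24; do
  openstack security group rule create --ingress \
    --remote-ip "$cidr" k8s-node-secgroup
done
```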
Environment: