Open embik opened 1 year ago
> Per private subnet, a NAT Gateway needs to exist in the public subnet counterpart.

You should also be able to reuse one NAT Gateway across all availability zones in the same region.

> Per private subnet, a custom route table needs to route `0.0.0.0/0` to the NAT Gateway in the public subnet in the same AZ.

Same here, you should be able to use one route table for all private subnets.
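The shared variant suggested above (one NAT Gateway and one route table for all private subnets) could be sketched in Terraform roughly as follows; all resource names, and the referenced VPC and subnet resources, are hypothetical:

```hcl
# A single NAT Gateway in one public subnet, shared by all private subnets.
resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "shared" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public_a.id # public subnet in one AZ
}

# One route table, associated with every private subnet, sending
# all egress traffic through the shared NAT Gateway.
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.shared.id
  }
}

resource "aws_route_table_association" "private" {
  for_each = {
    a = aws_subnet.private_a.id
    b = aws_subnet.private_b.id
  }
  subnet_id      = each.value
  route_table_id = aws_route_table.private.id
}
```

Note that routing private subnets in other AZs through a single NAT Gateway incurs cross-AZ data transfer charges and ties their egress to that one AZ, which is the usual trade-off against the cheaper single-gateway setup.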
Speaking from my experience, a warning that this can get expensive might be appreciated by users. NAT Gateways are notorious for their cost, so I'm not sure this is a setup we want to enforce or recommend by default.
When using private node IPs (the default in KKP 2.23.0 since https://github.com/kubermatic/dashboard/pull/5938 (!)), the target VPC and its subnets in AWS need to fulfil certain criteria. We need to document those criteria.
As far as I can tell from a Friday afternoon experiment, the following needs to be given:

- Per private subnet, a NAT Gateway needs to exist in the public subnet counterpart.
- Per private subnet, a custom route table needs to route `0.0.0.0/0` to the NAT Gateway in the public subnet in the same AZ.

If any of those are not given, nodes either do not join the cluster due to lack of internet access, or creating a `LoadBalancer` service does not work (the ELB needs a public subnet in the same AZ to route traffic to the nodes in the private subnet).

For dualstack, additional requirements (an egress-only internet gateway?) might be necessary.
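For reference, the per-AZ requirements above could look roughly like this in Terraform. This is a sketch, not a verified setup: the AZ list, resource names, and the assumed `aws_vpc.main` / per-AZ `aws_subnet.public` and `aws_subnet.private` resources are all hypothetical:

```hcl
# Hypothetical list of AZs the cluster spans.
locals {
  azs = ["eu-central-1a", "eu-central-1b"]
}

# One NAT Gateway per AZ, placed in that AZ's public subnet.
resource "aws_eip" "nat" {
  for_each = toset(local.azs)
  domain   = "vpc"
}

resource "aws_nat_gateway" "per_az" {
  for_each      = toset(local.azs)
  allocation_id = aws_eip.nat[each.key].id
  subnet_id     = aws_subnet.public[each.key].id
}

# One route table per private subnet, routing 0.0.0.0/0 to the
# NAT Gateway in the public subnet of the same AZ.
resource "aws_route_table" "private" {
  for_each = toset(local.azs)
  vpc_id   = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.per_az[each.key].id
  }
}

resource "aws_route_table_association" "private" {
  for_each       = toset(local.azs)
  subnet_id      = aws_subnet.private[each.key].id
  route_table_id = aws_route_table.private[each.key].id
}
```

Keeping the NAT Gateway and route table per AZ avoids cross-AZ data transfer charges for node egress and keeps each AZ's egress path independent, at the price of one NAT Gateway per AZ.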