Closed raykrueger closed 7 years ago
$ kube-aws version
kube-aws version v0.7.1
@colhom should confirm, but my understanding is that kube-aws wants to create the subnets you give it, and doesn't handle the case where they already exist.
Supporting existing subnets is marked as a maybe in the production deployment checklist: https://github.com/coreos/coreos-kubernetes/issues/340. Perhaps we should consider making it a priority since it's mentioned in #510 as well? If it's important to you, it'd be great if you could leave a comment there.
edit: as it is now, you'd have to give it a subnet that doesn't exist yet, so maybe something like 10.215.67.0/24. 10.215.66.0/26 and 10.215.66.0/28 don't work because they would overlap 10.215.66.0/24.
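The overlap rule is easy to verify for the exact CIDRs from this thread with Python's stdlib `ipaddress` module (a quick sketch, nothing kube-aws-specific):

```python
import ipaddress

# The subnet that already exists in the VPC.
existing = ipaddress.ip_network("10.215.66.0/24")

# Candidate instanceCIDRs from the thread: the first two fall inside
# the existing /24, the last one is a fresh, non-overlapping range.
for candidate in ("10.215.66.0/26", "10.215.66.0/28", "10.215.67.0/24"):
    net = ipaddress.ip_network(candidate)
    print(candidate, "overlaps" if net.overlaps(existing) else "ok")
# 10.215.66.0/26 overlaps
# 10.215.66.0/28 overlaps
# 10.215.67.0/24 ok
```

Any prefix carved out of 10.215.66.0/24 will overlap it, which is why shrinking the mask doesn't help.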
Supporting the existing subnet is extremely important for us. We have a hybrid infrastructure with a lot of routing tied to the existing subnet.
I swear there used to be a subnetId in the yaml.
@raykrueger I'd make a request/appeal to @colhom on #340 about this. We're looking at alternatives currently. If there is a standard route table that all new subnets can be attached to, I don't think this is that much of an issue, except that we'd need to do some additional plumbing for VPN tunnels and peering connections.
We also have routing that we want the kube-aws subnets to support. We handle that using the existing 'routeTableId' feature, where we can specify the routing table to be used for the subnets that kube-aws creates. In the interim, could that work for you also @raykrueger ?
# ID of existing route table in existing VPC to attach subnet to. Leave blank to use the VPC's main route table.
routeTableId: rtb-12345678
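For context, this is how the relevant keys fit together in cluster.yaml. Treat the key names other than `routeTableId` as illustrative of kube-aws around v0.7 (verify against your rendered cluster.yaml); the IDs and CIDR are placeholders:

```yaml
# ID of existing VPC to create the cluster in (placeholder value).
vpcId: vpc-12345678
# ID of existing route table in that VPC to attach the new subnets to.
# Leave blank to use the VPC's main route table.
routeTableId: rtb-12345678
# CIDR for the subnets kube-aws will create; must not overlap any
# existing subnet in the VPC.
instanceCIDR: "10.215.67.0/24"
```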
If you have unchangeable remote routing/IPSec tied to 10.215.66.0/24, you could divide it in your AWS VPC into smaller subnets, e.g. 10.215.66.0/25 and 10.215.66.128/25, but as @cgag mentioned, you can't create a smaller subnet that overlaps with the existing 10.215.66.0/24 subnet.
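To double-check subnet math like this, the stdlib `ipaddress` module can enumerate the halves of the /24 (a sketch using the CIDR from this thread):

```python
import ipaddress

block = ipaddress.ip_network("10.215.66.0/24")
# prefixlen_diff=1 splits the /24 into its two /25 halves.
halves = list(block.subnets(prefixlen_diff=1))
for half in halves:
    print(half)
# 10.215.66.0/25
# 10.215.66.128/25
```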
Alternatively, I see on #340 that @harsha-y successfully edited the 'stack-template.json' after 'kube-aws render' to get CloudFormation to use their existing subnets. If you do that and create a diff patch, you can likely reapply those changes each time you run 'kube-aws render'. That's what I do at the moment for my custom tweaks.
@whereisaaron that is the workflow we envisioned.
In fact, when it comes time to upstream your changes, you can just:
cp stack-template.json $KUBE_AWS_DIR/pkg/config/templates/
after you apply your diff on top of what the HEAD of master renders!
> Supporting the existing subnet is extremely important for us. We have a hybrid infrastructure with a lot of routing tied to the existing subnet.
:+1: Hybrid infrastructure here without the ability to create/modify the network. Would be great to be able to reuse an existing subnet (and IAM roles too).
We also need to be able to use an existing subnet. But I'd be curious if that's bad practice for some reason.
@mattjonesorg depends on what your expectations are w.r.t network isolation. A few AWS-specific things to remember about interfaces in general attached to the same subnet:
1) AWS network isolation tools like ACLs and route tables treat the subnet as the atomic unit of association. You basically have to treat all members of a subnet homogeneously when using these tools; there is no way to "tag" individual members.
2) If you're going to use the stock AWS-provided DHCP (the vast majority of folks), all your interfaces are going to get addresses out of the same DHCP pool with no distinction.
3) These network interfaces will all share a common broadcast address. This means that a misbehaving/malicious component can subject its subnet neighbors to fun stuff such as flooding/spoofing the ARP table.
If you're OK with the above points, then it's not a bad idea at all! It's all about the use case.
\cc @brianredbeard
I'm not sure how they conflict. I've tried smaller ranges like /24, /26, and /28; I don't know what it wants from me at this point.