kashook opened this issue 7 years ago
I am assigning myself, as I may need this functionality as well. More than that I need to re-read this awesome issue :)
So, in short, this is a request to add the ability to specify one or more extra routes to add to the subnets created by kops to point them to a pre-existing VPN gateway (while simultaneously allowing for specifying a pre-existing NAT gateway as you can today).
I am not a guru with re-using subnets. @geojaz can you bring your own routes and subnets? How well is this stuff documented? One of the things I need to work on is diagramming our different topology options.
@keiths-osc I know there are use cases where companies have AWS groups that do not want k8s to have access to subnets or to make any networking changes. The VPC and networking topology can be tightly controlled. Any other options or use cases you would like to see?
@chrislovecnm thanks for picking this up! Besides the ability to specify extra routes that point to existing VPN gateways, I believe the ability to add routes that point to existing VPC peering connections would also be useful.
I know your question about documentation wasn't addressed to me, but for what it's worth, I did find this document to be pretty useful. Using pre-existing subnets did seem to work, but it does have one problem: in order for `LoadBalancer` type Kubernetes services to work correctly, the subnets have to be tagged with the name of the Kubernetes cluster. If you add these tags to your pre-existing subnets, then I noticed that kops wants to destroy the subnets if you destroy the cluster. (I haven't looked to see if an issue exists for this or not.)
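For anyone else hitting that tagging step, a minimal boto3 sketch of it, assuming the `kubernetes.io/cluster/<cluster-name>` shared-tag convention used by the AWS cloud provider for subnet discovery (the cluster name and subnet IDs below are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder values -- substitute your own cluster name and subnet IDs.
cluster_name = "my-cluster.example.com"
subnet_ids = ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]

# Tag the pre-existing subnets so the Kubernetes AWS cloud provider can
# discover them when provisioning LoadBalancer services. The value
# "shared" indicates the subnets are shared with non-cluster resources.
ec2.create_tags(
    Resources=subnet_ids,
    Tags=[{"Key": "kubernetes.io/cluster/" + cluster_name, "Value": "shared"}],
)
```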
Just wanted to say this would be much appreciated. We're having to add the routes manually or with Python after creating the cluster, and that just feels dirty.
Just had to do something similar, and @starkers - I also feel dirty.
@keiths-osc I created subnets for Kops within an existing VPC also, and had to tag them afterwards.
It would be nice to let Kops manage the subnets, but didn't you experience subnet IP range conflicts when letting Kops create the subnets for you? (This PR tries to fix this.) Also, it's nice to be able to control the size of the subnets' IP ranges (e.g. /22 or /20, else Kops decides for you).
Also, there's a PR for the issue you mentioned ("kops wants to destroy the subnets if you destroy the cluster"): https://github.com/kubernetes/kops/pull/2666
@kenden kops did pick subnet ranges that conflicted. I edited the cluster config before actually creating the cluster to pick the CIDRs I wanted. It would be nice for sure if kops could pick ranges that don't overlap automatically, but the fact that I could at least fix the config before creating the cluster and then let kops still create everything was good enough for my current purposes.
Someone on office hours mentioned that they were going to work on this. Anyone know if work is underway on this item?
I have not worked on it. I still want the feature though. :) I recently came across another thing that would be nice if kops could set up. In AWS it's possible to create a direct private connection between subnets in your VPC and other AWS resources (such as S3). (See this AWS article.) Unlike with VPN gateways or VPC peering connections, you don't directly add routes to the route tables. Instead, you create a thing called a VPC Endpoint and add route tables to it; doing this ends up adding a special route to each route table that's included in the endpoint. It would be nice if kops could create VPC endpoints. (It of course doesn't necessarily have to be done at the same time as the other items requested by this issue.)

I have been playing around with creating a VPC endpoint and putting the kops-created route tables in it manually, and it seems to work (but it would be nice if it could be done by kops). VPC Endpoints can't be tagged, so I tagged the route tables with the id of the VPC endpoint that I put them in.
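A minimal boto3 sketch of that manual workaround, assuming an S3 gateway endpoint in us-east-1 (the VPC and route table IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder IDs -- substitute your VPC and the kops-created route tables.
vpc_id = "vpc-0123456789abcdef0"
route_table_ids = ["rtb-0aaaaaaaaaaaaaaaa", "rtb-0bbbbbbbbbbbbbbbb"]

# Create a gateway endpoint for S3. AWS adds the special route to every
# route table listed here, so there are no create_route calls to make.
resp = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId=vpc_id,
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=route_table_ids,
)
endpoint_id = resp["VpcEndpoint"]["VpcEndpointId"]

# Since the endpoint itself can't be tagged, tag the route tables with
# the endpoint id instead to keep track of the association.
ec2.create_tags(
    Resources=route_table_ids,
    Tags=[{"Key": "vpc-endpoint", "Value": endpoint_id}],
)
```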
My work on phases will allow networking patterns like these.
/assign
Experiencing the same problem. Any other workarounds besides manually editing route tables or Python scripting? Does it matter which networking mode I choose?
No updates at this time.
Oh and networking mode does not impact this.
Anyone using aws cli to add this?
just using a very dirty dirty boto script for now I'm afraid, happy to dig it up and share if you need
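For anyone who can't wait, a minimal sketch of what such a script does, assuming kops tags its route tables with `KubernetesCluster=<cluster name>` (the cluster name, CIDR, and peering connection id are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder values -- substitute your own cluster name, destination
# CIDR, and VPC peering connection id.
cluster_name = "my-cluster.example.com"
destination_cidr = "10.1.0.0/16"
peering_id = "pcx-0123456789abcdef0"

# Look up the route tables kops created for this cluster by tag.
tables = ec2.describe_route_tables(
    Filters=[{"Name": "tag:KubernetesCluster", "Values": [cluster_name]}]
)["RouteTables"]

# Add a route to the peering connection in each of the cluster's tables.
for table in tables:
    ec2.create_route(
        RouteTableId=table["RouteTableId"],
        DestinationCidrBlock=destination_cidr,
        VpcPeeringConnectionId=peering_id,
    )
```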
https://github.com/kubernetes/kops/issues/2214 will address some of this
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale
/lifecycle frozen
/remove-lifecycle stale
Have the same problem. All S3 traffic from private subnets is routed through NAT gateways rather than through a VPC endpoint for S3. NAT gateways cost almost 5 times more than conventional PrivateLink traffic. The only solution I could find for now was to add the routes for newly created S3 endpoints manually :( Is there any other solution directly from kops, @chrislovecnm?
+1 on this. Managing clusters from an existing VPC, and supplying kops with a new, empty VPC in which to manage all of its objects, seems to be a pattern people are trying to follow.
+1 this. We have an old legacy system that needs routing to K8s.
+1 this. We also have an old legacy system that needs routing to K8s.
+1 to this. We're moving services from various legacy VPCs to a Kubernetes cluster, and we need to add individual routes manually - repeatedly.
I need this too
+1 on this. We deploy MongoDB in a separate VPC, and right now we need to manually update the route tables to allow for the peering connection. It would be great to have this managed with kops.
In the AWS console, going to VPC -> Route Tables -> select a table -> "Route Propagation" tab -> enable the option fixed my problem.
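For reference, the same thing can be scripted; a minimal boto3 sketch, where the virtual private gateway and route table IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder IDs -- substitute your virtual private gateway and the
# kops-created route tables.
vgw_id = "vgw-0123456789abcdef0"
route_table_ids = ["rtb-0aaaaaaaaaaaaaaaa", "rtb-0bbbbbbbbbbbbbbbb"]

# With propagation enabled, the gateway advertises its routes into each
# table automatically, instead of static routes being added by hand.
for rtb_id in route_table_ids:
    ec2.enable_vgw_route_propagation(GatewayId=vgw_id, RouteTableId=rtb_id)
```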
It would be nice both to list a set of gateways to enable route propagation on as well as adding additional routes e.g for VPC peering.
Yes it would, and a "Route Propagation" knob would be a good start; an "add routes" YAML chunk would also be good. They are complementary to each other.
+1 for this. I need to add an AWS VPC endpoint (vpce) gateway to the k8s subnet route tables. It would be great if kops managed this so I don't have to do some awscli hackery after `kops create cluster`.
https://docs.aws.amazon.com/vpc/latest/userguide/vpce-gateway.html
@tewing-riffyn maybe some terraform hackery might be better, just a thought
Just encountered the same thing with a kops-provisioned cluster when configuring access to MongoDB Atlas.
Just going to pile on and say that my tooling around kops has a bunch of post-deploy patching to route tables, to do exactly what others are talking about here. (Add vpce-, pcx- and other types of route table entries).
Are there any current efforts to revive this? Looks like most attempts in the past have been abandoned.
I think this would be pretty straightforward to add. Kops sets up one private route table per zone, in addition to one public route table. Given that ClusterSpec doesn't define zones directly (rather, it defines a list of subnets, each of which is assigned a zone), I think it makes the most sense to define additional routes as a new top-level field.
The new API fields could look something like:
```yaml
type: ClusterSpec
spec:
  ...
  routes:
  - routeTable: us-east-1a
    cidr: 1.2.3.4/5
    target: pcx-12345
```
Valid values of the `routeTable` field are any zones defined in the cluster's subnets, or `public`.
Any thoughts on this? Is this flexible enough to handle everyone's use-cases, yet still intuitive?
@starkers I would be interested in a script.
> I think this would be pretty straightforward to add. Kops sets up one private route table per zone, in addition to one public route table. Given that ClusterSpec doesn't define zones directly (rather, it defines a list of subnets, each of which is assigned a zone), I think it makes the most sense to define additional routes as a new top-level field.
>
> The new API fields could look something like:
>
> ```yaml
> type: ClusterSpec
> spec:
>   ...
>   routes:
>   - routeTable: us-east-1a
>     cidr: 1.2.3.4/5
>     target: pcx-12345
> ```
>
> Valid values of the `routeTable` field are any zones defined in the cluster's subnets, or `public`. Any thoughts on this? Is this flexible enough to handle everyone's use-cases, yet still intuitive?
I'm assuming something like this would also allow more route tables to be created by `kops` within the same zone, versus complaining if a user tries to create two different route tables in the same zone? Example:
```yaml
type: ClusterSpec
spec:
  ...
  routes:
  - routeTable: us-east-1a
    cidr: 10.1.0.0/16
    target: pcx-12345
  - routeTable: us-east-1a
    cidr: 10.2.0.0/16
    target: vpc-6789
```
If yes, then a more appropriate name for `routeTable` might be `routeTableZone`, or make `routeTable` a header for `cidr`, `target`, and an additional `zone` field. It might be helpful to add a `name` field like we do for subnets, so users can more easily identify the route tables/associations that `kops` is creating and managing in their accounts.
If no to the above question, then this is a dealbreaker for us. We create many different subnets in each zone for different instance groups. Some of these subnets shouldn't have access/routes to certain CIDR ranges, whereas others should. So we'd want a way to specify which `kops` subnets have the routes added to the route tables that `kops` creates/manages, if that makes sense.
kOps creates one route table for public subnets, and one table per private subnet. `routeTable` in the example above would only reference the name of a table that kOps creates, not create arbitrary new ones. For those who let kOps provision networking, that should be fine.
Those who need more advanced setups should rather pre-create the networks using more suitable tools and reference them as explained here: https://kops.sigs.k8s.io/run_in_existing_vpc/
As @rifelpet and @olemarkus said, kOps creates a route table for each Private subnet. For these we could easily add routes like this:
```yaml
type: ClusterSpec
spec:
  subnets:
  - cidr: 10.4.0.0/24
    name: eu-central-1a
    type: Private
    zone: eu-central-1a
    routes:
    - cidr: 10.1.0.0/16
      target: pcx-12345
    - cidr: 10.2.0.0/16
      target: vpc-6789
```
Maybe it would be easier to create a route table for each Public subnet as well, and then add new routes to each subnet as desired.
+1 Any updates on this? 🙏
As usual, we are happy to review a PR. But at least for me, I have other priorities for the coming releases.
We have a pre-existing VPC in AWS with a VPN connection between our company's network and the VPC. I have been able to successfully create a completely private Kubernetes cluster with kops by ensuring that I:

- set the API load balancer type to `Internal` in the kops cluster configuration
- set the topology to `private` in the kops cluster configuration (via flags to `kops create cluster`)
- do not pass the `--bastion` flag to `kops create cluster`, so a bastion is not created

I found in the run_in_existing_vpc document that I can use pre-existing subnets in the cluster configuration. I gave this a try. The route tables for the subnets I made have routes that point to our VPN gateway. The kops utility subnets share a route table that points to an Internet gateway for the default route, and the private subnets each have a route table that points to a NAT gateway for the default route. This all appears to have worked well. I have been able to create `LoadBalancer` type Kubernetes services configured so that they create private load balancers, and I can access the services via the VPN. (Note I did make sure to add the subnet tags mentioned by this issue to ensure the load balancer ends up in one of the private subnets.)

Rather than creating the Kubernetes resources in our pre-existing subnets, we would prefer to keep the kops cluster separated in its own dedicated subnets that are ideally managed by kops. It would be nice if kops could create the subnets and their corresponding route tables with the VPN gateway routes added. The run_in_existing_vpc document shows how to have kops create the subnets for you but use pre-existing NAT gateways by specifying an `egress` setting on the private subnets. I gave this a try, and kops created all the subnets for me in our existing VPC and pointed the route tables to the NAT gateways I specified. I then manually added the VPN gateway routes to both the utility and private subnet route tables that kops created (and the tags mentioned in this issue), and found that all seems to work fine. Without the VPN gateway routes, I can't communicate with the masters or nodes over the VPN because the networking on the AWS side doesn't know how to route the response back to me.

So, in short, this is a request to add the ability to specify one or more extra routes to add to the subnets created by kops to point them to a pre-existing VPN gateway (while simultaneously allowing for specifying a pre-existing NAT gateway as you can today).