To do this on AWS, the nebari config file will look something like the following:

```yaml
amazon_web_services:
  terraform_overrides:
    # Private subnets where a NAT gateway is set up in the routing table
    # and the internet gateway has been removed (see the routing table image below).
    existing_subnet_ids: ["subnet-0bf040134a53b8a6c", "subnet-0c9817baf30a85128"]
    existing_security_group_id: "sg-0efb1b832e3540289"
    eks_endpoint_private_access: true
```
Routing table example:
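As a sketch, a private subnet's route table along these lines sends all outbound traffic through a NAT gateway rather than an internet gateway (the NAT gateway ID and CIDR block below are placeholders):

```
Destination     Target
10.10.0.0/16    local                    # in-VPC traffic (your VPC's CIDR block)
0.0.0.0/0       nat-0a1b2c3d4e5f67890    # NAT gateway; note there is no igw- entry
```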
It's important to note that the security group entered for existing_security_group_id must have an inbound rule that allows traffic from any source within the VPC's CIDR block (an example of adding this is in the image below); the exact range depends on the VPC's CIDR block. When I failed to do this, the jupyterhub-sftp helm chart failed to deploy because it couldn't mount the EFS drive to the jupyterhub-sftp pod.
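As a sketch, such an inbound rule could be added with the AWS CLI; the CIDR block here is an assumption, so substitute your VPC's actual range:

```shell
# Allow all traffic (protocol -1) whose source is inside the VPC's CIDR block.
# 10.10.0.0/16 is a placeholder; use your VPC's actual CIDR.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0efb1b832e3540289 \
  --protocol -1 \
  --cidr 10.10.0.0/16
```

The same rule can of course be added from the VPC console's security group page instead.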
Alternatively, we could just make private subnets the default, but that would be a breaking change for users updating (the cluster would need to be destroyed and recreated).
Preliminary Checks
Summary
https://www.nebari.dev/docs/explanations/custom-overrides-configuration#deployment-inside-a-virtual-private-network details how to deploy Nebari within a private subnet for Azure and GCP. We should add a section for AWS.
This is also dependent on https://github.com/nebari-dev/nebari/pull/1841 getting merged.