Cloud Posse is grateful to @paulrob-100 for all the work that went into documenting the differences between our 3 similar modules. It was helpful at the time (and continues to be helpful) for sorting out which module to use and why, and it also brought attention to the fact that Cloud Posse has 3 very similar modules for not exactly sustainable reasons.
This update announces a major release of terraform-aws-dynamic-subnets that makes the original post below out of date with respect to that module, and recommends that people migrate to it and away from the others so we can avoid triplicating our efforts to maintain these modules.
The `dynamic-subnets` module now supports IPv6 and many more options. We would prefer that missing options (except for enhancements to NAT Instance support, which we want to freeze) be added to `dynamic-subnets` first.
I have slightly updated the post below, adding `v2` where features have been added in `dynamic-subnets` version 2.
**`count` vs `for_each`**

There has been some discussion about whether each module uses `count` or `for_each` when allocating multiple instances of a resource. To clarify:

Approach | Pro | Con |
---|---|---|
`for_each` | No changes when the set of items remains the same but the order of items in a list changes. | Keys must be known at the time `terraform plan` is run, which means they cannot in any way depend on resources created by the plan. |
`count` | Values need not be known at the time `terraform plan` is run; only the number of values needs to be known. | Resources are deleted and recreated when the index of elements changes. Removing the first item in the list causes all remaining resources to be deleted and recreated. |
As a general guideline, Cloud Posse prefers to use `for_each` when the keys are expected to be created by the user and provided as static input known at `plan` time, but uses `count` when we expect some users will be generating the keys (things like availability zone names, IPAM CIDRs, or route table IDs) in the same plan.
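To make the tradeoff concrete, here is a minimal, standalone illustration of the same subnets declared both ways. This is not code from any of the modules; the variable names are hypothetical.

```hcl
variable "vpc_id" {
  type = string
}

variable "subnet_cidrs" {
  type    = list(string)
  default = ["10.0.0.0/24", "10.0.1.0/24", "10.0.2.0/24"]
}

# count: resources are addressed by position (aws_subnet.by_index[0..2]).
# Removing the first CIDR shifts every index, so Terraform plans to
# destroy and recreate the remaining subnets.
resource "aws_subnet" "by_index" {
  count      = length(var.subnet_cidrs)
  vpc_id     = var.vpc_id
  cidr_block = var.subnet_cidrs[count.index]
}

# for_each: resources are addressed by key (aws_subnet.by_key["10.0.1.0/24"]).
# Removing one CIDR only touches that subnet, but every key must be known at
# plan time and cannot depend on resources created in the same plan.
resource "aws_subnet" "by_key" {
  for_each   = toset(var.subnet_cidrs)
  vpc_id     = var.vpc_id
  cidr_block = each.value
}
```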
There are 3 Cloud Posse modules for creating subnets:

- `terraform-aws-dynamic-subnets`
- `terraform-aws-multi-az-subnets`
- `terraform-aws-named-subnets`
This question has come up a number of times in the sweetops slack channel. Most recently it was discussed in https://sweetops.slack.com/archives/CB6GHNLG0/p1620376237322900
I have made an attempt to summarize the differences here.
This table summarizes the attributes supported by the underlying `aws_subnet` resource. I have split each module into public (pub) and private (prv) attribute support, since all modules differentiate between public and private subnets.
The `auto` suffix means the `cidr_block` is auto-calculated based on module inputs, where the limit on the number of extra bits in the subnet calculation is determined by `var.max_subnets`; a sketch of this style of calculation follows the table. (`dynamic-subnets` version 2 supports passing in CIDR blocks of your choice instead if you prefer.)
Feature | dynamic-subnets-pub | dynamic-subnets-prv | multi-az-subnets-pub | multi-az-subnets-prv | named-subnets-pub | named-subnets-prv |
---|---|---|---|---|---|---|
availability_zone | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
availability_zone_id | v2 | v2 | :x: | :x: | :x: | :x: |
cidr_block | :heavy_check_mark:(auto) | :heavy_check_mark:(auto) | :heavy_check_mark:(auto) | :heavy_check_mark:(auto) | :heavy_check_mark: | :heavy_check_mark: |
customer_owned_ipv4_pool | :x: | :x: | :x: | :x: | :x: | :x: |
ipv6_cidr_block | v2 | v2 | :x: | :x: | :x: | :x: |
map_customer_owned_ip_on_launch | :x: | :x: | :x: | :x: | :x: | :x: |
map_public_ip_on_launch | :heavy_check_mark: | :x: | #49 | :x: | :heavy_check_mark: | :heavy_check_mark::question: |
outpost_arn | :x: | :x: | :x: | :x: | :x: | :x: |
assign_ipv6_address_on_creation | v2 | v2 | :x: | :x: | :x: | :x: |
vpc_id | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
tags | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
:warning: Enabling `map_public_ip_on_launch` on the named-subnets module's private subnets is likely to be a mistake.
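For the `auto` entries above, here is a rough sketch of the kind of `cidrsubnet()` arithmetic involved. This is not the modules' actual code (their formulas differ in the details); it just illustrates splitting a VPC CIDR into equal public and private halves sized by a `max_subnets`-style input.

```hcl
locals {
  vpc_cidr    = "10.0.0.0/16"
  max_subnets = 4

  # One extra bit splits public vs. private, plus enough bits for max_subnets
  # subnets in each half: /16 + 3 = /19 here.
  extra_bits = 1 + ceil(log(local.max_subnets, 2))

  public_cidrs  = [for i in range(local.max_subnets) : cidrsubnet(local.vpc_cidr, local.extra_bits, i)]
  private_cidrs = [for i in range(local.max_subnets) : cidrsubnet(local.vpc_cidr, local.extra_bits, i + local.max_subnets)]

  # public_cidrs  = ["10.0.0.0/19",   "10.0.32.0/19",  "10.0.64.0/19",  "10.0.96.0/19"]
  # private_cidrs = ["10.0.128.0/19", "10.0.160.0/19", "10.0.192.0/19", "10.0.224.0/19"]
}
```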
Feature | dynamic-subnets-pub | dynamic-subnets-prv | multi-az-subnets-pub | multi-az-subnets-prv | named-subnets-pub | named-subnets-prv |
---|---|---|---|---|---|---|
count or for_each on aws_subnet | count | count | for_each | for_each | count | count |
managed route table | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
number of route tables | 1 (option in v2) | num AZs | num AZs | num AZs | num subnet_names | num subnet_names |
external route table associations | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
managed igw route | optional | :x: | :heavy_check_mark: | :x: | :heavy_check_mark: | :x: |
managed nacl | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
external nacl | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
managed nat gateway | optional | :x: | optional | :x: | optional | :x: |
managed nat gateway eip | optional | :x: | optional | :x: | optional | :x: |
managed nat gateway route | v2 | optional | :x: | optional | :x: | :heavy_check_mark: |
label/null tags style | modern | modern | modern | modern | old | old |
addition of VPC cidr subnets | external module for_each, external routes (more options in v2) | external module for_each, external routes | external module for_each, external routes | external module for_each, external routes | external module for_each, external routes | external module for_each, external routes |
:information_source: I have inferred these based on code inspection. Where the included examples/tests don't explicitly test for the configuration, I recommend testing to confirm support.
See public only for a diagram.
Module | Supports Single AZ | Supports Multi-AZ |
---|---|---|
dynamic-subnets | :heavy_check_mark: | :heavy_check_mark: |
multi-az-subnets | :heavy_check_mark: | :heavy_check_mark: |
named-subnets | :heavy_check_mark: | external for_each |
See public private for a diagram.
Module | Supports Single AZ | Supports Multi-AZ |
---|---|---|
dynamic-subnets | :heavy_check_mark: | :heavy_check_mark: |
multi-az-subnets | :heavy_check_mark: | :heavy_check_mark: |
named-subnets | :heavy_check_mark: | external for_each |
Similar to the public private diagram, but with no public subnet. Private subnets route to the internet via a NAT gateway; alternatively, they route to on-prem via a VGW.
Module | Supports Single AZ | Supports Multi-AZ |
---|---|---|
dynamic-subnets | v2 | v2 |
multi-az-subnets | :heavy_check_mark: | :heavy_check_mark: |
named-subnets | :heavy_check_mark: | external for_each |
Similar to the public private diagram, but with no public subnet and multiple private subnets with varying CIDR ranges. Traffic routes to the internet via a NAT gateway; alternatively, private subnets route to on-prem via a VGW. Subnet CIDR ranges are parameterised.
Module | Supports Single AZ | Supports Multi-AZ |
---|---|---|
dynamic-subnets | v2 | v2 |
multi-az-subnets | :heavy_check_mark: | :heavy_check_mark: |
named-subnets | :heavy_check_mark: | external for_each |
See public private vpn for a diagram.
Module | Supports Single AZ | Supports Multi-AZ |
---|---|---|
dynamic-subnets | external private route table | external private route table |
multi-az-subnets | external private route table | external private route table |
named-subnets | external private route table | external module for_each, external private route table |
See advanced configurations for diagrams.
The spoke VPC configurations may not include public subnets since they use the Gateway Load Balancer to route north/south traffic via an inspection VPC.
The module must therefore support an optional internet gateway (i.e. optional public subnets) and external management of route tables to be able to build these configurations.
Module | Supports Single AZ | Supports Multi-AZ |
---|---|---|
dynamic-subnets | v2 via external routes | v2 via external routes |
multi-az-subnets | external private route table | external private route table |
named-subnets | external private route table | external module for_each, external private route table |
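For reference, here is a hedged sketch of the "external private route table" pattern referred to above: the module creates the subnets, while the route table, its association, and a default route to a Gateway Load Balancer endpoint are managed outside it. The `private_subnet_ids` output name matches dynamic-subnets; `gwlb_endpoint_id` and the elided module inputs are placeholders.

```hcl
variable "vpc_id" {
  type = string
}

variable "gwlb_endpoint_id" {
  type        = string
  description = "Gateway Load Balancer endpoint in this spoke VPC (placeholder)"
}

module "subnets" {
  source = "cloudposse/dynamic-subnets/aws"
  # ... vpc_id, availability_zones, CIDR inputs, nat_gateway_enabled = false, etc.
}

# Route table managed outside the module.
resource "aws_route_table" "spoke_private" {
  vpc_id = var.vpc_id
}

# North/south traffic leaves via the GWLB endpoint instead of an internet gateway.
resource "aws_route" "inspection" {
  route_table_id         = aws_route_table.spoke_private.id
  destination_cidr_block = "0.0.0.0/0"
  vpc_endpoint_id        = var.gwlb_endpoint_id
}

# Attach the externally managed route table to the module-created subnets.
resource "aws_route_table_association" "spoke_private" {
  count          = length(module.subnets.private_subnet_ids)
  subnet_id      = module.subnets.private_subnet_ids[count.index]
  route_table_id = aws_route_table.spoke_private.id
}
```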
The low-barrier-to-entry `terraform-aws-dynamic-subnets` module is perfect as a starter/utility/shared-services VPC and has the best support from Cloud Posse. It splits the VPC CIDR range equally into public and private subnets. This means you are likely to choose a larger CIDR range, since applications in the private subnet ranges generally need more IPs. Also, if you use VPC endpoints, they would probably be located in the public subnets, since there are likely spare IPs in those subnets.
If you prefer to use smaller VPC CIDR ranges and make best use of the available range by having smaller public subnets and relatively larger private subnets, then you need `terraform-aws-multi-az-subnets` or `terraform-aws-named-subnets`, or to compute the CIDRs yourself and feed them to `terraform-aws-dynamic-subnets` v2.
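As a sketch of that last option: Terraform's `cidrsubnets()` can carve unequal subnets out of a small VPC range, which you then hand to `dynamic-subnets` v2. The `ipv4_cidrs` input name/shape and the list form of `igw_id` shown here are my assumptions about the v2 interface; check the module's documented variables for your version.

```hcl
variable "vpc_id" {
  type = string
}

variable "igw_id" {
  type = string
}

locals {
  # Two small /28 public subnets and two larger /26 private subnets out of a /24.
  cidrs = cidrsubnets("10.10.0.0/24", 4, 4, 2, 2)
  # => ["10.10.0.0/28", "10.10.0.16/28", "10.10.0.64/26", "10.10.0.128/26"]
}

module "subnets" {
  source = "cloudposse/dynamic-subnets/aws"
  # version = "~> 2.0"

  vpc_id             = var.vpc_id
  igw_id             = [var.igw_id] # v2 takes a list here (assumption)
  availability_zones = ["us-east-1a", "us-east-1b"]

  ipv4_cidrs = [{ # assumed input name and shape
    public  = slice(local.cidrs, 0, 2)
    private = slice(local.cidrs, 2, 4)
  }]
}
```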
A common design is to split the private subnets into application, data and VPC endpoint subnets. This is a safety design such that, for example, scaling Lambda VPC IPs cannot exhaust the data subnet range. It is also a security design, since AWS service traffic and gateway traffic are routed internally over the AWS backbone (which can also lead to a design with no NAT gateway). Separating VPC endpoint subnets allows NACL rules and/or security groups on the endpoints.
I noted that terraform-aws-dynamic-subnets and terraform-aws-named-subnets use `count` on the underlying `aws_subnet` resource, whereas terraform-aws-multi-az-subnets uses `for_each`. The latter is more robust under configuration changes, though many such changes are unlikely in practice since they force a destroy/recreate of the underlying resource. The former is more forgiving when handling resources created at the same time as the subnets.
You can see that dynamic-subnets and multi-az-subnets are very similar. When #49 is merged, the underlying resource attribute support will be identical. dynamic-subnets optimizes the number of public subnet route tables since only one is necessary for routes to the internet gateway. (With version 2, dynamic-subnets adds IPv6 support and other features making it far more capable than dynamic-subnets v1 or multi-az-subnets v1.)
All modules are missing support for some attributes on the underlying subnet resource, which will likely lead to duplicate PRs on each module in the future. Cloud Posse asks that PRs target dynamic-subnets and that people migrate to it.
To minimize maintenance overhead, it might be beneficial to merge these 3 modules together, with a submodule handling the full range of resource attributes, and other sub-modules handling the common patterns.
The named-subnets module is limited to a single AZ, which means users need an outer `for_each` to achieve the same designs as its multi-AZ siblings. Limiting your VPC to a single AZ isn't normally recommended, since all application availability would be lost during an AZ availability incident. Since dynamic-subnets and multi-az-subnets both support a single-AZ configuration, it's not clear how many users would opt for this module.
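For completeness, here is a hedged sketch of that outer `for_each` (Terraform 0.13+ supports `for_each` on module blocks): one named-subnets instance per availability zone. The input names below (`availability_zone`, `cidr_block`, `subnet_names`, `type`) are my best guess at the named-subnets interface and should be checked against its variables.

```hcl
variable "vpc_id" {
  type = string
}

# One module instance per availability zone, keyed by AZ name.
module "named_private_subnets" {
  source = "cloudposse/named-subnets/aws"

  for_each = {
    "us-east-1a" = "10.0.0.0/18"
    "us-east-1b" = "10.0.64.0/18"
  }

  vpc_id            = var.vpc_id
  availability_zone = each.key   # assumed input name
  cidr_block        = each.value # per-AZ CIDR, assumed input name
  subnet_names      = ["app", "data", "endpoints"]
  type              = "private"  # assumed input name
}
```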
I hope this helps. Any other comments from the community welcome :raised_hands:
@osterman the above documentation from @paulrob-100 definitely needs to go somewhere. He did a solid review. What are your thoughts on where that should live? I'm happy to move it wherever.
This org offers two repos with Terraform modules that seem to do very similar things: https://github.com/cloudposse/terraform-aws-dynamic-subnets and this repo. Would it be an idea to document what each repo is intended for?
At first glance, both create multi-AZ subnets, with NAT gateways and with public/private subnets; the functionality seems to be similar.
The ec2-instance and ec2-instance-group repos describe their roles and made it easy for me to decide without reading all the code; this is very friendly to new users.