A demo of Consul and Terraform Sync on Google Cloud Platform (GCP). Most of the configuration is based on this guide. More detailed documentation can be found here.
Note: In this demo, we are using a standalone Consul-Terraform-Sync (CTS) setup. For production, it is recommended to run Consul-Terraform-Sync with high availability.
Before you begin, ensure the following tools are installed:
This module creates Google Cloud Platform (GCP) firewall rules for services discovered by Consul Terraform Sync (CTS). It specifically targets services with the name `standalone/nginx`, as defined in `cts-firewall.hcl`. Each firewall rule is dynamically generated using the service's IP address and tags retrieved from Consul, opening port 80 for instances with matching tags.
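The exact contents of `cts-firewall.hcl` aren't reproduced here, but a minimal CTS task definition matching this description might look like the following sketch (the task name, module path, and provider list are illustrative assumptions, not taken from this repo):

```hcl
# Hypothetical sketch of a task block in cts-firewall.hcl.
task {
  name      = "gcp-firewall"          # illustrative task name
  module    = "./modules/firewall"    # illustrative module path
  providers = ["google"]

  # React only to the standalone/nginx service described above.
  condition "services" {
    names = ["standalone/nginx"]
  }
}
```

When this task fires, CTS runs the referenced Terraform module with a generated `terraform.tfvars` describing the current service instances.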
Service Filtering: The module filters services to include only those named `standalone/nginx`, ensuring that only specific services are processed to keep firewall rules tightly controlled and relevant.
Firewall Rule Creation: For each qualifying service instance, a rule named `firewall-<service-node>` is created, opening port 80 and scoped to that `nginx` service instance, further narrowing access.
Automatic Scaling: As `nginx` service instances scale up or down, CTS detects these changes and creates or removes firewall rules accordingly. This dynamic approach simplifies security management, allowing the firewall rules to adjust automatically based on real-time service discovery.
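Inside the module, CTS supplies a `services` variable describing each discovered instance. A minimal sketch of how the rules above could be generated (the network, the tag-based scoping, and the declared attribute subset are illustrative assumptions):

```hcl
# Sketch of the firewall module's core. The "services" variable is the
# standard input CTS passes to compatible Terraform modules; only the
# attributes this sketch uses are declared.
variable "services" {
  description = "Consul service instances monitored by CTS"
  type = map(object({
    name    = string
    node    = string
    address = string
    tags    = list(string)
  }))
}

# One rule per service instance, named after its node, opening port 80.
resource "google_compute_firewall" "nginx" {
  for_each = var.services

  name    = "firewall-${each.value.node}"  # i.e. firewall-<service-node>
  network = "default"                      # illustrative network

  allow {
    protocol = "tcp"
    ports    = ["80"]
  }

  # Illustrative scoping: apply only to instances carrying the service tags.
  target_tags = each.value.tags
}
```

Because the rules are keyed by `for_each` over `var.services`, Terraform automatically adds and removes rules as the service map CTS generates changes.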
With 2 `nginx` nodes, the following firewall rules are created:
This screenshot shows Consul nodes during operation:
This configuration demonstrates how CTS, combined with Terraform, can automate infrastructure updates based on real-time service changes. By defining rules based on service tags and IP addresses, firewall access scales dynamically as instances are added or removed, without manual intervention.
For larger setups or different use cases, firewall rules could be applied more broadly using tag-based targeting across multiple service types. This would allow more generalized rules (e.g., target groups for specific types of applications) without needing individual IP-based rules, balancing security and scalability.
This module creates Google Cloud Platform (GCP) load balancers for services discovered by Consul Terraform Sync (CTS). It specifically targets services with the name `standalone/nginx`, as defined in `cts-firewall.hcl`. The load balancer is set up to route TCP traffic to `nginx` instances based on their metadata retrieved from Consul.
Service Filtering: The module filters services to include only those named `standalone/nginx`, ensuring that only specific services are processed for load balancing, which helps maintain a focused and efficient configuration.
Load Balancer Creation: For each qualifying service, a backend service named `nginx-backend` is created, with health checks against the `nginx` instances ensuring traffic is only routed to healthy endpoints.
Instance Group Management: An instance group is created that dynamically includes all `nginx` service instances based on the metadata provided by Consul. This group ensures that the load balancer can scale automatically with the instances, distributing traffic evenly.
Automatic Scaling: As `nginx` service instances scale up or down, CTS detects these changes, and the load balancer is automatically updated to reflect the current state of the service instances. This dynamic management simplifies infrastructure operations.
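A rough sketch of how the instance group and `nginx-backend` backend service could be wired together (the resource names, the `var.zone`/`var.project` variables, and the instance self-link construction are illustrative assumptions):

```hcl
# TCP health check so only healthy nginx endpoints receive traffic.
resource "google_compute_health_check" "nginx" {
  name = "nginx-health"              # illustrative name
  tcp_health_check {
    port = 80
  }
}

# Unmanaged instance group built from the nodes CTS discovered.
resource "google_compute_instance_group" "nginx" {
  name = "nginx-group"               # illustrative name
  zone = var.zone                    # illustrative variable
  instances = [
    for s in var.services :
    "projects/${var.project}/zones/${var.zone}/instances/${s.node}"
  ]
}

# Backend service named nginx-backend, as described above.
resource "google_compute_backend_service" "nginx" {
  name                  = "nginx-backend"
  protocol              = "TCP"
  load_balancing_scheme = "EXTERNAL"
  health_checks         = [google_compute_health_check.nginx.id]

  backend {
    group = google_compute_instance_group.nginx.self_link
  }
}
```

Since the instance group membership is derived from `var.services`, each CTS run re-resolves the group against the current set of instances in Consul.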
With `nginx` instances in place, the following resources are created:
This screenshot shows the backend configuration:
This screenshot shows the instance group after the service is scaled out further:
For larger setups or different use cases, load balancing rules could be applied to additional services or protocols, expanding the capabilities of the load balancer. Implementing multiple backend services could enhance redundancy and scalability, ensuring seamless application performance as demand fluctuates.
Authenticate with your GCP account and configure the project:
# Authenticate your GCP account
gcloud auth login
gcloud auth application-default login
# Set your Google Cloud project ID
gcloud config set project <PROJECT_ID>
Replace `<PROJECT_ID>` with your GCP project ID.
Copy your Consul license file (`consul.hclic`) to the root of your working directory:
cp ~/Downloads/consul.hclic .
Ensure the license file is present before building your images.
Use the provided script to set up necessary variables for the Packer build:
sh packer/set-vars.sh
The script will prompt you for your GCP project ID, region, and other details. By default, it uses London (europe-west2) as the region. Modify this as needed.
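The exact variables depend on what `set-vars.sh` writes, but the resulting `variables.pkrvars.hcl` would contain entries along these lines (all values are examples):

```hcl
# Example variables.pkrvars.hcl (illustrative values only)
project_id   = "my-gcp-project"
region       = "europe-west2"      # London, the default
zone         = "europe-west2-a"
image_family = "almalinux-9"
```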
Once variables are set, use Packer to build the Consul server and client images. To update the version of Consul, modify the `CONSUL_VERSION` in the `provision-consul.sh` script.
You can run all of the builds at once using `./build-packer.sh`, or manually with the following commands:
# Initialize Packer
packer init packer/gcp-almalinux-consul-server.pkr.hcl
packer init packer/gcp-almalinux-nginx.pkr.hcl
packer init packer/gcp-almalinux-cts.pkr.hcl
# Build the Consul server image
packer build -var-file=variables.pkrvars.hcl packer/gcp-almalinux-consul-server.pkr.hcl
# Build the nginx server image
packer build -var-file=variables.pkrvars.hcl packer/gcp-almalinux-nginx.pkr.hcl
# Build the CTS server image
packer build -var-file=variables.pkrvars.hcl packer/gcp-almalinux-cts.pkr.hcl
Now use Terraform to provision a Consul cluster. This example creates a 3-node Consul server cluster. The `terraform.tfvars` file is generated from the original `variables.pkrvars.hcl` used during the Packer build.
# Create tfvars from pkrvars and provision the cluster
sed '/image_family.*/d' variables.pkrvars.hcl > tf/terraform.tfvars
cd tf
terraform init
terraform apply
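The `sed` step above exists because `variables.pkrvars.hcl` contains a Packer-only `image_family` entry that Terraform has no matching variable for. A quick self-contained illustration of what the filter does (the file contents are examples):

```shell
# Create a throwaway example file resembling variables.pkrvars.hcl.
cat > /tmp/example.pkrvars.hcl <<'EOF'
project_id   = "my-gcp-project"
region       = "europe-west2"
image_family = "almalinux-9"
EOF

# Same pattern as above: drop any line mentioning image_family.
sed '/image_family.*/d' /tmp/example.pkrvars.hcl
```

Only the `project_id` and `region` lines are printed; in the real workflow the filtered output is redirected into `tf/terraform.tfvars`.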