fortinet / fortigate-autoscale-gcp

A collection of Node.js modules and cloud-specific templates that support basic autoscale functionality for groups of FortiGate VM instances via Google Cloud Functions

Internal Load Balancer for egress #29

Open mrdevnull opened 2 years ago

mrdevnull commented 2 years ago

This project comes with an internal load balancer definition, which I initially expected to handle egress traffic originating from behind the firewall. On further inspection, it appears instead to complete the ingress path in front of a receiving instance group.

What's the expected method for egress traffic originating from the instance group, through the firewall to the internet, when autoscaling?

Joel-Cripps commented 2 years ago

It goes through Cloud NAT by default.


### Cloud Nat ###
# Allows for egress traffic on Protected Subnet
resource "google_compute_router_nat" "cloud_nat" {
  name                               = "${var.cluster_name}-cloud-nat-${random_string.random_name_post.result}"
  router                             = "${google_compute_router.protected_subnet_router.name}"
  region                             = "${var.region}"
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
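
The Cloud Router referenced above is defined elsewhere in the template; for context, here is a minimal sketch of it, assuming a protected_vpc network reference (that name is illustrative, not necessarily what the repository uses):

# Cloud Router the NAT above attaches to. The network reference
# (google_compute_network.protected_vpc) is an assumed name.
resource "google_compute_router" "protected_subnet_router" {
  name    = "${var.cluster_name}-router-${random_string.random_name_post.result}"
  region  = var.region
  network = google_compute_network.protected_vpc.id
}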

There are a number of use cases for what people put behind the scaling group, so we leave that up to the end user. If it is useful, though, we could potentially add something in the future as an example.

The following HA Active-Passive deployment routes egress through the FortiGates without an internal load balancer, but it should give an idea of what the setup for egress looks like.

https://github.com/fortinet/fortigate-terraform-deploy/tree/main/gcp/7.0/ha

mrdevnull commented 2 years ago

Hi @Joel-Cripps, thanks for the response.

Looking at the Terraform for the HA multi-zone configuration and the docs here

https://docs.fortinet.com/document/fortigate-public-cloud/7.0.0/gcp-administration-guide/698355/deploying-fortigate-vm-ha-on-gcp-between-multiple-zones

the HA setup appears to be very different from this autoscaling config.

I'm sure you know this better than me, but the HA configuration has one active host backed by a secondary that takes over when the primary is unavailable. Then there's some trickery to switch the backend route's next hop when the primary changes.

This autoscaling implementation shouldn't have that concept: all instances behind the LB should be available for use. So I was expecting really only one solution for egress via the FortiGate hosts: another LB, this time internal, attached to the same managed instance group. This would then become the next-hop reference for outgoing traffic. Are there other options for this setup?
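
For illustration, a minimal Terraform sketch of that internal-LB-as-next-hop pattern. Every name in it (fortigate_mig, protected_vpc, protected_subnet, fgt_health_check, and the resource names themselves) is a hypothetical placeholder, not something taken from this repository:

### Internal LB as next hop (hypothetical sketch) ###
# Backend service over the same (assumed) FortiGate MIG.
resource "google_compute_region_backend_service" "egress_ilb" {
  name                  = "fgt-egress-ilb"
  region                = var.region
  load_balancing_scheme = "INTERNAL"
  protocol              = "TCP"
  network               = google_compute_network.protected_vpc.id
  health_checks         = [google_compute_health_check.fgt_health_check.id]

  backend {
    # Instance group exported by the assumed FortiGate MIG resource.
    group = google_compute_region_instance_group_manager.fortigate_mig.instance_group
  }
}

# Internal forwarding rule that fronts the backend service.
resource "google_compute_forwarding_rule" "egress_ilb" {
  name                  = "fgt-egress-ilb-fr"
  region                = var.region
  load_balancing_scheme = "INTERNAL"
  all_ports             = true
  network               = google_compute_network.protected_vpc.id
  subnetwork            = google_compute_subnetwork.protected_subnet.id
  backend_service       = google_compute_region_backend_service.egress_ilb.id
}

# Default route for the protected network pointing at the internal LB.
resource "google_compute_route" "default_via_fgt" {
  name         = "default-via-fortigate"
  network      = google_compute_network.protected_vpc.id
  dest_range   = "0.0.0.0/0"
  priority     = 900
  next_hop_ilb = google_compute_forwarding_rule.egress_ilb.self_link
}

With next_hop_ilb, GCP spreads egress flows across whichever instances in the group are currently healthy, so no failover route-switching would be needed.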

Thanks.

Joel-Cripps commented 2 years ago

I'm sure you know this better than me, but the HA configuration has one active host backed by a secondary that takes over when the primary is unavailable. Then there's some trickery to switch the backend route's next hop when the primary changes.

Yes, you are correct. In an A-P cluster there is an SDN connector that changes the routes/floating IP.
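
As a rough illustration of what that connector repoints (the names here are hypothetical, not taken from the HA template):

# Hypothetical sketch: in an A-P cluster the default route targets the
# active FortiGate instance; the SDN connector repoints it on failover.
resource "google_compute_route" "default_via_active_fgt" {
  name              = "default-via-active-fgt"
  network           = google_compute_network.protected_vpc.id
  dest_range        = "0.0.0.0/0"
  priority          = 900
  next_hop_instance = google_compute_instance.fgt_active.self_link
}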

They are very different setups. The point is that one has instructions for getting egress from instances back through your FortiGates. In the case of GCP autoscaling, you can do egress through Cloud NAT, through a single node, or back through the cluster via an internal LB.

We have a task for someone on our team to add egress through the autoscale group as part of the default Terraform script, but that probably won't be ready for a couple of weeks.

mrdevnull commented 2 years ago

Thanks for the confirmation @Joel-Cripps.

I look forward to the updated Terraform with the egress autoscale group. :)

I've not tried an internal LB alongside an external LB on the same managed instance group (MIG). Is that the plan for your implementation, or will there be a separate MIG with an internal LB dedicated to egress?

mrdevnull commented 2 years ago

Hi @Joel-Cripps, any update on the implementation of the egress autoscale group?

Joel-Cripps commented 2 years ago

Everyone got waylaid before the holidays. Someone from our team will be looking into it now that everyone is back.