Hi Ratan,
I just ran this at 13:30 ET today with the current revision of what's in GitHub and was able to successfully bootstrap and deploy both instances without issue:
fgtvm:
fgtvm2:
I have not had a bootstrap failure since the implementation of the S3 functions. Have you made any changes in your version of the codebase?
Cheers,
Hi Alex. I found the issue. Your jsonencode() blocks in the fgtvm*.tf files are hardcoded to ca-central-1; I used the variable var.region from your variables file instead. All good. Thank you so much for sharing your code!
Fixed. I originally built this as a proof of concept to show that bootstrapping via S3 was possible; I didn't plan for anyone to use these templates. With that said, they are public, so I have removed the other hardcoded references for the license file and region:
user_data = jsonencode({
  bucket  = aws_s3_bucket.s3_bucket.id,
  region  = var.region,
  license = var.licenses[0],
  config  = "/fgtvm.conf"
})
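For context, the two variables referenced above could be declared along these lines. This is just a sketch, since the actual variables file isn't shown here, and the defaults are placeholders:

variable "region" {
  description = "AWS region to deploy into"
  type        = string
  default     = "ca-central-1"
}

variable "licenses" {
  description = "Local paths to the FortiGate license files; also used as the S3 object keys"
  type        = list(string)
  default     = ["license1.lic", "license2.lic"]
}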
I also fixed the aws_s3_object resources to use the variable values rather than hardcoded names:
resource "aws_s3_object" "lic1" {
bucket = aws_s3_bucket.s3_bucket.bucket
key = var.licenses[0]
source = var.licenses[0]
etag = filemd5(var.licenses[0])
}
resource "aws_s3_object" "lic2" {
bucket = aws_s3_bucket.s3_bucket.bucket
key = var.licenses[1]
source = var.licenses[1]
etag = filemd5(var.licenses[1])
}
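As a design note, the two resources could also be collapsed with for_each if more licenses are ever added. The sketch below runs against the same var.licenses list but isn't part of the actual templates:

resource "aws_s3_object" "lic" {
  # One object per license file; each.value is both the S3 key and the local path
  for_each = toset(var.licenses)
  bucket   = aws_s3_bucket.s3_bucket.bucket
  key      = each.value
  source   = each.value
  etag     = filemd5(each.value)
}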
A related question: now that you have your S3 endpoint, is it really necessary to have the HA-Sync subnet as "public"? I mean, we are not using the IGW.
Or is it a requirement for individual FW management?
Hi Ratan, in a nutshell: if you don't need to manage the devices from the internet and you use private endpoints, you could remove the EIPs on those HA/MGMT interfaces. Below is more detailed information:
There is an option to deploy the FortiGates so that the FGCP HA management interface (i.e. ENI2\port3) can access the AWS EC2 API via private VPC endpoints and would not require dedicated EIPs. However, this comes with caveats to consider.
First, a dedicated method of access to the FortiGate instances needs to be set up to reach the HAmgmt interfaces. This method of access should not rely on the master FortiGate instance, so that either instance can be reached regardless of the cluster status. Examples of dedicated access are Direct Connect, IPsec VPN connections to an attached AWS VPN Gateway, or Transit Gateway. Reference AWS documentation for further information.
Second, the FortiGates should be configured to use the ‘169.254.169.253’ IP address for the AWS intrinsic DNS server as the primary DNS server to allow proper resolution of AWS API hostnames during failover to a new master FortiGate. Here is an example of how to configure this with CLI commands:
config system dns
    set primary 169.254.169.253
end
Finally, the VPC interface endpoint needs to be deployed into both of the HAmgmt subnets and must also have ‘Private DNS’ enabled to allow DNS resolution of the default AWS EC2 API public hostname to the private IP address of the VPC endpoint. This means that the VPC also needs to have both the DNS resolution and DNS hostnames options enabled. Reference AWS documentation for further information.
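In Terraform terms, that would look roughly like the sketch below. The resource names (aws_subnet.hamgmt1, aws_security_group.allow_https, etc.) are placeholders rather than names from these templates, and the VPC block is only there to illustrate the two DNS flags an existing VPC resource would need:

# The VPC must have DNS resolution and DNS hostnames enabled
# for Private DNS on the interface endpoint to work.
resource "aws_vpc" "vpc" {
  cidr_block           = var.vpc_cidr
  enable_dns_support   = true
  enable_dns_hostnames = true
}

# EC2 API interface endpoint deployed into both HAmgmt subnets,
# with Private DNS so the default EC2 API hostname resolves to
# the endpoint's private IPs.
resource "aws_vpc_endpoint" "ec2" {
  vpc_id              = aws_vpc.vpc.id
  service_name        = "com.amazonaws.${var.region}.ec2"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = [aws_subnet.hamgmt1.id, aws_subnet.hamgmt2.id]
  security_group_ids  = [aws_security_group.allow_https.id]
  private_dns_enabled = true
}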
I liked the way you added the S3 endpoint. Not sure why the FWs are failing to access the S3 bucket;
I see the bucket and the route too.
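For reference, a working S3 gateway endpoint association typically boils down to the sketch below (placeholder names, not from these templates); the key detail is that the route table used by the FortiGate subnets is listed in route_table_ids:

# S3 gateway endpoint; S3 traffic stays inside the VPC via the
# prefix-list route injected into the associated route table.
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = aws_vpc.vpc.id
  service_name      = "com.amazonaws.${var.region}.s3"
  vpc_endpoint_type = "Gateway"
  route_table_ids   = [aws_route_table.fgt.id]
}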
Please feel free to ignore this if it is unsolicited; I thought I'd let you know in case it is useful.