fortinet / aws-cloudformation-templates

Cloud Formation Templates for getting you started in AWS with Fortinet.
MIT License

Dual AZ solution does not configure the Fabric Connector (sdn-connector) #4

Closed pmcevoy closed 4 years ago

pmcevoy commented 4 years ago

I'm confused - the FortiOS Cookbook indicates that the Fabric Connector must be set up before HA can work. However, the UserData config file that is applied to the instance does not have an sdn-connector section. How is this supposed to work? I can see that the IAM role is applied to the instance, but the active node (the only one I can get to start) does not have an AWS fabric connector configured.

hgaberra commented 4 years ago

You are correct, the documentation is a bit confusing and an internal documentation ticket has been created to correct this.

In AWS, the FGTs receive permissions to make AWS EC2 API calls via the IAM instance role created by CloudFormation and assigned to both instances.

You do not need to configure an SDN connector for FGCP failover to work in AWS. On AWS FGTs, the SDN connector is used purely to populate dynamic address objects for use in normal firewall policies; it plays no role in HA failover.
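If you do want dynamic address objects, a connector can be added with a few lines of CLI and will pick up credentials from the same instance role. This is only a minimal sketch based on FortiOS 6.2 syntax; the connector name "aws-lab" and the 60 second update interval are arbitrary example values, not something the template configures:

config system sdn-connector
    edit "aws-lab"
        set type aws
        set use-metadata-iam enable
        set update-interval 60
    next
end

Dynamic address objects bound to that connector can then be referenced in firewall policies, but as noted above this has no effect on FGCP failover.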

Another point to keep in mind is that the actual DNS resolution and AWS EC2 API calls are made out of the HAmgmt interface (i.e. port4/eni3) on the instance that is becoming master. So it is important that the FGTs are able to reach the configured DNS servers and the public AWS EC2 API endpoints through this interface.
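For reference, the HA management piece of the bootstrap configuration looks roughly like this. A minimal sketch only, assuming port4 is the dedicated HA management interface and 10.0.4.1 is the gateway of the HAmgmt subnet (both are placeholder values, not taken from the template):

config system ha
    set ha-mgmt-status enable
    config ha-mgmt-interfaces
        edit 1
            set interface "port4"
            set gateway 10.0.4.1
        next
    end
end

The gateway configured here is what gives port4 its route out, so the HAmgmt subnet needs a path to the DNS servers and the regional EC2 API endpoint (via the internet or VPC endpoints) for the failover calls to succeed.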

You can test whether the FGTs are able to obtain credentials via the IAM role and reach the AWS EC2 API with the following CLI commands:

Here is example output from a PAYG FGT (FGT2) that is currently the slave. Note the lines in the output that reference the IAM role.

###################
Fgt2 # get sys status | grep "^Version|^Current HA mode"
Version: FortiGate-VM64-AWSONDEMAND v6.2.2,build1010,191008 (GA)
Current HA mode: a-p, backup

Fgt2 # diag deb enable

Fgt2 # diag deb app awsd -1
Debug messages will be on for 30 minutes.

Fgt2 # diag test app awsd
Test level.

Fgt2 # diag test app awsd 0
1. list sdn connectors
2. list sdn filters
3. AWS API test
4. show HA status
99. restart

Fgt2 # diag test app awsd 3
awsd get instance id i-08f462755eff2eecc
awsd get iam role env1-gw-InstanceRole-1FYJCCISWHSJG
awsd get region us-west-2
awsd get vpc id vpc-09d8a24bc1bab7bb0
Success

Fgt2 # diag test app awsd 4
HA status: HA Active-passive slave
awsd get instance id i-08f462755eff2eecc
awsd get iam role env1-gw-InstanceRole-1FYJCCISWHSJG
awsd get region us-west-2
awsd get vpc id vpc-09d8a24bc1bab7bb0
vpc id: vpc-09d8a24bc1bab7bb0
instance id: i-08f462755eff2eecc
eni: eni-0004fc39e25edd2e3, IP: 10.0.10.10, MAC: 02:5a:57:6d:10:82, index: 0
eni: eni-0d37e9b5486926a57, IP: 10.0.20.10, MAC: 02:1b:93:09:48:d2, index: 1
eni: eni-09aec443ffa475515, IP: 10.0.30.10, MAC: 02:48:c2:44:f3:c4, index: 2
eni: eni-09e62184b4e53f771, IP: 10.0.40.10, MAC: 02:ed:ec:17:f7:bc, index: 3, elastic IP: 44.225.109.18
--------
master info:
instance id: i-0b145538350f75c25
eni: eni-0c309c60ac51cce6c, IP: 10.0.1.10, MAC: 06:89:94:90:e6:ec, index: 0, elastic IP: 54.200.1.114
eni: eni-0d68e9626228cbf5d, IP: 10.0.2.10, MAC: 06:b5:35:6e:d5:4c, index: 1
eni: eni-000e162395757e6fa, IP: 10.0.3.10, MAC: 06:7a:dc:72:b8:5e, index: 2
eni: eni-06c6dcbf6fce3948a, IP: 10.0.4.10, MAC: 06:a2:8c:eb:18:72, index: 3, elastic IP: 100.21.183.205

Fgt2 # diag deb reset

Fgt2 # diag deb disable
###################

pmcevoy commented 4 years ago

Ok - that's interesting about the sdn-connector not being needed for HA. I've got one set up anyway, but I was only able to get it to go "green" when the management EIPs were moved off eni0/port1 to eni3/port4.

I ran the commands that you suggested and, reading the output, I seem to be good:


Fgt0 # diag test application awsd 3
awsd get instance id i-0cc96xxxxxxxxxxxxxxxx
awsd get iam role DingEC2Fortigate
awsd get region eu-west-1
awsd get vpc id vpc-090xxxxxxxxxxxx
Success

In summary, I was able to get multi-AZ HA running by converting these step-by-step instructions into a Terraform config. I used some information from the CloudFormation template to help author the TF config (happy to share if you like).

I noted that in Step 7, the management EIP is assigned to eni0/port1 (primary IP).

I let the instances come fully alive. Once they are up and running, I run a second TF config to re-associate the management EIPs with the management ENI (eni3/port4), i.e. move the association from eni0 to eni3.
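The awsd HA status test from your earlier comment is also a handy way to confirm the EIPs ended up on the right ENIs after the re-association, since it lists the elastic IP attached to each ENI index:

Fgt0 # diag test application awsd 4

Index 3 should then show the management elastic IP rather than index 0.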

I can confirm that I was able to run a successful failover using 6.2.2 build 1010.