equinix-labs / terraform-equinix-metal-nutanix-cluster

Nutanix Cluster on Equinix Metal
https://deploy.equinix.com/labs/terraform-equinix-metal-nutanix-cluster/
Apache License 2.0

examples: Connect multiple Nutanix sites and migrate VMs between them #68

Closed: displague closed this issue 2 months ago

displague commented 4 months ago

Create an example/ which creates two Nutanix clusters. For the purposes of the demo, and given limitations on availability, these sites may be in the same physical location. Set up a protection policy between those clusters. Create a VM in one of the clusters and migrate it to the other.

https://www.youtube.com/watch?v=aUD26EJmtIc&t=30s

This may not be a fully automatable example; it may be supported by example Terraform that creates the multiple sites and any network resources needed to connect those environments securely and reliably (such as a Fabric connection joining the VLANs, extending different VRF ranges to each cluster). The README.md will go over what is automated and what needs to be done manually.

The example/README.md would take advantage of much of the same instruction provided in https://equinix-labs.github.io/nutanix-on-equinix-metal-workshop/.
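As a rough illustration, the example's root configuration might call the cluster module twice, each call taking a different slice of a shared VRF range. This is a hypothetical sketch only; the module source address and the input names (metal_metro, cluster_subnet) are placeholders rather than the module's confirmed interface.

# Hypothetical sketch: two cluster sites carved out of one shared /21.
# Module source and variable names are illustrative placeholders.
module "nutanix_cluster1" {
  source         = "equinix-labs/metal-nutanix-cluster/equinix" # placeholder source address
  metal_metro    = "sl"
  cluster_subnet = "192.168.96.0/22"
}

module "nutanix_cluster2" {
  source         = "equinix-labs/metal-nutanix-cluster/equinix" # placeholder source address
  metal_metro    = "sl"
  cluster_subnet = "192.168.100.0/22"
}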

codinja1188 commented 4 months ago

// assign

codinja1188 commented 4 months ago

@displague ,

How are you expecting the example to look? Should it just be steps collected in a README?

displague commented 4 months ago

Yes, I would imagine the artifact of this looking like:

codinja1188 commented 4 months ago

@displague ,

I am able to provision two clusters, but I am unable to connect to the bastion host:

Vasubabus-MacBook-Pro:nutanix-clusters vasubabu$ ssh -i $(terraform output -raw nutanix_cluster1_ssh_private_key) root@$(terraform output -raw nutanix_cluster1_bastion_public_ip)
root@d0b9e9ad-9beb-4ef1-8395-595b79347845@sos.sl1.platformequinix.com: Permission denied (publickey).

displague commented 4 months ago

If you provisioned the node in the last 24 hours, you can use console.equinix.com to see the root password, and you should be able to log in with that. Once logged in, check whether your SSH key is included in ~/.ssh/authorized_keys.

The Nutanix cluster is provisioned through the bastion host using this SSH key, so if the cluster provisioned successfully, the key was working at some point.

displague commented 4 months ago

d0b9e9ad-9beb-4ef1-8395-595b79347845@sos.sl1.platformequinix.com

This is not the bastion public IP; this is the SOS user@host. Either the output variable is misnamed or includes the wrong value, or what you copy/pasted into the comment is not accurate.

codinja1188 commented 4 months ago

@displague ,

In the demo (https://www.youtube.com/watch?v=aUD26EJmtIc&t=30s) the protection policy is already configured. Do you have any example steps for creating a protection policy?

displague commented 4 months ago

protection policy

I don't have background on how to configure Nutanix protection policies. docs.nutanix.com and the community is where I would search.

codinja1188 commented 4 months ago

@displague ,

nutanix_cluster1_virtual_ip_address = "192.168.103.254"
.....
nutanix_cluster2_virtual_ip_address = "192.168.103.254"

How do we avoid this identical virtual_ip_address on both clusters? Would using different subnet addresses solve the problem?

displague commented 4 months ago

@codinja1188 yes. Setting a different cluster_subnet for each cluster (per #75) should solve that. You'll want to make sure that the two clusters can reach each other; these would need to be two IP reservations on the same VRF ID (one equinix_metal_vrf shared between them). Perhaps vrf_id could be an optional parameter: when it is supplied, equinix_metal_vrf becomes count = 0 and the IP reservation uses the supplied vrf_id instead of the one that would have been created. A rough sketch of that pattern follows.
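A minimal sketch of the optional vrf_id pattern. The variable names and resource labels are hypothetical (not the module's actual interface), and the reserved-IP-block arguments follow my recollection of the Equinix provider schema, so verify them against the provider docs:

variable "vrf_id" {
  description = "Existing VRF to reuse; leave null to create one."
  type        = string
  default     = null
}

resource "equinix_metal_vrf" "nutanix" {
  # Only create a VRF when the caller did not supply one.
  count      = var.vrf_id == null ? 1 : 0
  name       = "nutanix-vrf"
  metro      = var.metal_metro
  local_asn  = 65000
  ip_ranges  = [var.vrf_ip_range]
  project_id = var.metal_project_id
}

locals {
  # Prefer the caller-supplied VRF; otherwise use the one created above.
  vrf_id = coalesce(var.vrf_id, one(equinix_metal_vrf.nutanix[*].id))
}

resource "equinix_metal_reserved_ip_block" "nutanix" {
  project_id = var.metal_project_id
  metro      = var.metal_metro
  type       = "vrf"
  vrf_id     = local.vrf_id
  network    = cidrhost(var.cluster_subnet, 0)
  cidr       = tonumber(split("/", var.cluster_subnet)[1])
}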

codinja1188 commented 4 months ago

@displague ,

I observed one new issue:

https://github.com/equinix-labs/terraform-equinix-metal-nutanix-cluster/issues/77

codinja1188 commented 4 months ago

@displague,

I am trying to create a remote config in cluster1, but unfortunately the cluster is not accessible. Can you help me figure out which IP is accessible externally?

nutanix_cluster1_bastion_public_ip = "145.40.91.33"
nutanix_cluster1_cvim_ip_address = "192.168.101.121"
nutanix_cluster1_iscsi_data_services_ip = "192.168.103.253"
nutanix_cluster1_prism_central_ip_address = "192.168.103.252"
nutanix_cluster1_ssh_forward_command = "ssh -L 9440:192.168.101.121:9440 -L 19440:192.168.103.252:9440 -i /Users/vasubabu/Equinix/terraform-equinix-metal-nutanix-cluster/examples/nutanix-clusters/ssh-key-y8c21 root@145.40.91.33"
nutanix_cluster1_ssh_private_key = "/Users/vasubabu/Equinix/terraform-equinix-metal-nutanix-cluster/examples/nutanix-clusters/ssh-key-y8c21"
nutanix_cluster1_virtual_ip_address = "192.168.103.254"
displague commented 4 months ago

The bastion public IPs are the only public addresses in either cluster.

If both clusters are sharing the same VRF, the nodes in both clusters should be able to reach each other by having an OS-level route to the Metal Gateway for the whole VRF CIDR (not just the part of the VRF assigned to their cluster).
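For example (a hypothetical sketch, not something the module currently does): the route could be pushed to the bastion with a remote-exec provisioner, assuming a var.metal_gateway_ip that holds the Metal Gateway address inside this cluster's own reservation:

resource "null_resource" "vrf_route" {
  connection {
    type        = "ssh"
    host        = var.bastion_public_ip # e.g. nutanix_cluster1_bastion_public_ip
    user        = "root"
    private_key = file(var.ssh_private_key_path)
  }

  provisioner "remote-exec" {
    inline = [
      # Route the whole VRF CIDR via this cluster's Metal Gateway so hosts
      # behind this bastion can reach the other cluster's /22.
      "ip route replace 192.168.96.0/21 via ${var.metal_gateway_ip}"
    ]
  }
}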

codinja1188 commented 3 months ago

@displague ,

Looks like the DHCP lease creation failed. Can you check and point me to the failure?

module.nutanix_cluster2.null_resource.wait_for_dhcp: Creating...
2024-06-19T16:45:54.755+0530 [INFO]  Starting apply for module.nutanix_cluster2.null_resource.wait_for_dhcp
2024-06-19T16:45:54.755+0530 [DEBUG] module.nutanix_cluster2.null_resource.wait_for_dhcp: applying the planned Create change
module.nutanix_cluster2.null_resource.wait_for_dhcp: Provisioning with 'file'...
2024-06-19T16:45:54.758+0530 [INFO]  using private key for authentication
2024-06-19T16:45:54.760+0530 [DEBUG] Connecting to 145.40.91.141:22 for SSH
2024-06-19T16:45:54.885+0530 [DEBUG] Connection established. Handshaking for user root
2024-06-19T16:45:55.799+0530 [DEBUG] starting ssh KeepAlives
2024-06-19T16:45:55.799+0530 [DEBUG] opening new ssh session
2024-06-19T16:45:56.045+0530 [DEBUG] Starting remote scp process:  'scp' -vt /root
2024-06-19T16:45:56.169+0530 [DEBUG] Started SCP session, beginning transfers...
2024-06-19T16:45:56.170+0530 [DEBUG] Beginning file upload...
2024-06-19T16:45:56.293+0530 [DEBUG] SCP session complete, closing stdin pipe.
2024-06-19T16:45:56.293+0530 [DEBUG] Waiting for SSH session to complete.
2024-06-19T16:45:56.418+0530 [ERROR] scp stderr: "Sink: C0644 1633 dhcp-check.sh\nscp: debug1: fd 0 clearing O_NONBLOCK\r\n"
module.nutanix_cluster2.null_resource.wait_for_dhcp: Provisioning with 'remote-exec'...
2024-06-19T16:45:56.420+0530 [INFO]  using private key for authentication
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec): Connecting to remote host via SSH...
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec):   Host: 145.40.91.141
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec):   User: root
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec):   Password: false
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec):   Private key: true
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec):   Certificate: false
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec):   SSH Agent: false
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec):   Checking Host Key: false
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec):   Target Platform: unix
2024-06-19T16:45:56.421+0530 [DEBUG] Connecting to 145.40.91.141:22 for SSH
2024-06-19T16:45:56.539+0530 [DEBUG] Connection established. Handshaking for user root
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [20s elapsed]
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec): Connected!
2024-06-19T16:45:57.400+0530 [DEBUG] starting ssh KeepAlives
2024-06-19T16:45:57.400+0530 [DEBUG] opening new ssh session
2024-06-19T16:45:57.630+0530 [DEBUG] Starting remote scp process:  'scp' -vt /tmp
2024-06-19T16:45:57.747+0530 [DEBUG] Started SCP session, beginning transfers...
2024-06-19T16:45:57.747+0530 [DEBUG] Beginning file upload...
2024-06-19T16:45:57.863+0530 [DEBUG] SCP session complete, closing stdin pipe.
2024-06-19T16:45:57.863+0530 [DEBUG] Waiting for SSH session to complete.
2024-06-19T16:45:57.980+0530 [ERROR] scp stderr: "Sink: C0644 38 terraform_946180736.sh\nscp: debug1: fd 0 clearing O_NONBLOCK\r\n"
2024-06-19T16:45:57.980+0530 [DEBUG] opening new ssh session
2024-06-19T16:45:58.215+0530 [DEBUG] starting remote command: chmod 0777 /tmp/terraform_946180736.sh
2024-06-19T16:45:58.339+0530 [DEBUG] remote command exited with '0': chmod 0777 /tmp/terraform_946180736.sh
2024-06-19T16:45:58.339+0530 [DEBUG] opening new ssh session
2024-06-19T16:45:58.570+0530 [DEBUG] starting remote command: /tmp/terraform_946180736.sh
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec): Found /var/lib/misc/dnsmasq.leases. Examining leases.
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 0...
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 0...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [10s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [30s elapsed]
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 0...
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 0...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [20s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [40s elapsed]
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 0...
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 0...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [30s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [50s elapsed]
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 0...
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 0...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [40s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [1m0s elapsed]
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 0...
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 0...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [50s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [1m10s elapsed]
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 0...
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 0...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [1m0s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [1m20s elapsed]
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 0...
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 0...
2024-06-19T16:47:04.135+0530 [ERROR] no reply from ssh server
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [1m10s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [1m30s elapsed]
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 0...
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 0...
2024-06-19T16:47:11.825+0530 [ERROR] no reply from ssh server
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [1m20s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [1m40s elapsed]
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 0...
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 0...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [1m30s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [1m50s elapsed]
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 0...
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 0...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [1m40s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [2m0s elapsed]
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 0...
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 0...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [1m50s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [2m10s elapsed]
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 0...
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 0...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [2m0s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [2m20s elapsed]
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 0...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [2m10s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [2m30s elapsed]
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 0...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [2m20s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [2m40s elapsed]
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [2m30s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [2m50s elapsed]
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [2m40s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [3m0s elapsed]
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec): Found the expected 2 leases in /var/lib/misc/dnsmasq.leases.
module.nutanix_cluster2.null_resource.wait_for_dhcp (remote-exec): Sleeping for five minutes to let cluster networking stabilize
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [2m50s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [3m10s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [3m0s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [3m20s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [3m10s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [3m30s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [3m20s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [3m40s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [3m30s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [3m50s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [3m40s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [4m0s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [3m50s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [4m10s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [4m0s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [4m20s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [4m10s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [4m30s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [4m20s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [4m40s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [4m30s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [4m50s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [4m40s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [5m0s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [4m50s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [5m10s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [5m0s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [5m20s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [5m10s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [5m30s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [5m20s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [5m40s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [5m30s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [5m50s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [5m40s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [6m0s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [5m50s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [6m10s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [6m0s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [6m20s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [6m10s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [6m30s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [6m20s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [6m40s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [6m30s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [6m50s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [6m40s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [7m0s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [6m50s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [7m10s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [7m0s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [7m20s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [7m10s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [7m30s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [7m20s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [7m40s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [7m30s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [7m50s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.wait_for_dhcp: Still creating... [7m40s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [8m0s elapsed]
2024-06-19T16:53:38.817+0530 [DEBUG] remote command exited with '0': /tmp/terraform_946180736.sh
2024-06-19T16:53:38.817+0530 [DEBUG] opening new ssh session
2024-06-19T16:53:38.933+0530 [DEBUG] Starting remote scp process:  'scp' -vt /tmp
2024-06-19T16:53:39.049+0530 [DEBUG] Started SCP session, beginning transfers...
2024-06-19T16:53:39.049+0530 [DEBUG] Copying input data into temporary file so we can read the length
2024-06-19T16:53:39.054+0530 [DEBUG] Beginning file upload...
2024-06-19T16:53:39.173+0530 [DEBUG] SCP session complete, closing stdin pipe.
2024-06-19T16:53:39.173+0530 [DEBUG] Waiting for SSH session to complete.
2024-06-19T16:53:39.290+0530 [ERROR] scp stderr: "Sink: C0644 0 terraform_946180736.sh\nscp: debug1: fd 0 clearing O_NONBLOCK\r\n"
module.nutanix_cluster2.null_resource.wait_for_dhcp: Creation complete after 7m44s [id=6248356769317088071]
2024-06-19T16:53:39.304+0530 [DEBUG] provider.terraform-provider-null_v3.2.2_x5: Marking Computed attributes with null configuration values as unknown (known after apply) in the plan to prevent potential Terraform errors: tf_req_id=7c070167-8154-72d8-9bea-74d7dda299fc tf_resource_type=null_resource @module=sdk.framework tf_provider_addr=registry.terraform.io/hashicorp/null @caller=github.com/hashicorp/terraform-plugin-framework@v1.4.2/internal/fwserver/server_planresourcechange.go:195 tf_rpc=PlanResourceChange timestamp="2024-06-19T16:53:39.304+0530"
2024-06-19T16:53:39.304+0530 [DEBUG] provider.terraform-provider-null_v3.2.2_x5: marking computed attribute that is null in the config as unknown: @caller=github.com/hashicorp/terraform-plugin-framework@v1.4.2/internal/fwserver/server_planresourcechange.go:399 tf_provider_addr=registry.terraform.io/hashicorp/null tf_req_id=7c070167-8154-72d8-9bea-74d7dda299fc tf_rpc=PlanResourceChange @module=sdk.framework tf_attribute_path="AttributeName(\"id\")" tf_resource_type=null_resource timestamp="2024-06-19T16:53:39.304+0530"
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Creating...
2024-06-19T16:53:39.305+0530 [INFO]  Starting apply for module.nutanix_cluster2.null_resource.finalize_cluster[0]
2024-06-19T16:53:39.305+0530 [DEBUG] module.nutanix_cluster2.null_resource.finalize_cluster[0]: applying the planned Create change
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Provisioning with 'file'...
2024-06-19T16:53:39.309+0530 [INFO]  using private key for authentication
2024-06-19T16:53:39.310+0530 [DEBUG] Connecting to 145.40.91.141:22 for SSH
2024-06-19T16:53:39.430+0530 [DEBUG] Connection established. Handshaking for user root
2024-06-19T16:53:40.304+0530 [DEBUG] starting ssh KeepAlives
2024-06-19T16:53:40.305+0530 [DEBUG] opening new ssh session
2024-06-19T16:53:40.662+0530 [DEBUG] Starting remote scp process:  'scp' -vt /root
2024-06-19T16:53:40.783+0530 [DEBUG] Started SCP session, beginning transfers...
2024-06-19T16:53:40.783+0530 [DEBUG] Beginning file upload...
2024-06-19T16:53:40.903+0530 [DEBUG] SCP session complete, closing stdin pipe.
2024-06-19T16:53:40.903+0530 [DEBUG] Waiting for SSH session to complete.
2024-06-19T16:53:41.024+0530 [ERROR] scp stderr: "Sink: C0644 498 create-cluster.sh\nscp: debug1: fd 0 clearing O_NONBLOCK\r\n"
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Provisioning with 'remote-exec'...
2024-06-19T16:53:41.028+0530 [INFO]  using private key for authentication
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): Connecting to remote host via SSH...
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):   Host: 145.40.91.141
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):   User: root
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):   Password: false
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):   Private key: true
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):   Certificate: false
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):   SSH Agent: false
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):   Checking Host Key: false
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):   Target Platform: unix
2024-06-19T16:53:41.030+0530 [DEBUG] Connecting to 145.40.91.141:22 for SSH
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
2024-06-19T16:53:41.148+0530 [DEBUG] Connection established. Handshaking for user root
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): Connected!
2024-06-19T16:53:42.020+0530 [DEBUG] starting ssh KeepAlives
2024-06-19T16:53:42.020+0530 [DEBUG] opening new ssh session
2024-06-19T16:53:42.257+0530 [DEBUG] Starting remote scp process:  'scp' -vt /root
2024-06-19T16:53:42.373+0530 [DEBUG] Started SCP session, beginning transfers...
2024-06-19T16:53:42.373+0530 [DEBUG] Beginning file upload...
2024-06-19T16:53:42.489+0530 [DEBUG] SCP session complete, closing stdin pipe.
2024-06-19T16:53:42.489+0530 [DEBUG] Waiting for SSH session to complete.
2024-06-19T16:53:42.605+0530 [ERROR] scp stderr: "Sink: C0644 42 finalize-cluster-210044106.sh\nscp: debug1: fd 0 clearing O_NONBLOCK\r\n"
2024-06-19T16:53:42.606+0530 [DEBUG] opening new ssh session
2024-06-19T16:53:42.841+0530 [DEBUG] starting remote command: chmod 0777 /root/finalize-cluster-210044106.sh
2024-06-19T16:53:42.967+0530 [DEBUG] remote command exited with '0': chmod 0777 /root/finalize-cluster-210044106.sh
2024-06-19T16:53:42.967+0530 [DEBUG] opening new ssh session
2024-06-19T16:53:43.201+0530 [DEBUG] starting remote command: /root/finalize-cluster-210044106.sh
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): Warning: Permanently added '192.168.102.188' (ECDSA) to the list of known hosts.
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): Nutanix Controller VM
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-06-19 11:23:45,841Z INFO MainThread cluster:2943 Executing action create on SVMs 192.168.102.188,192.168.103.42
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [8m10s elapsed]
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-06-19 11:23:48,871Z INFO MainThread cluster:1007 Discovered node:
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): ip: 192.168.102.188
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):    rackable_unit_serial: FVWG2N3
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):    node_position: A
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):    node_uuid: 70ea097a-8851-4e17-8aef-99ed43770586

module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-06-19 11:23:48,872Z INFO MainThread cluster:1007 Discovered node:
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): ip: 192.168.103.42
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):    rackable_unit_serial: 4WWG2N3
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):    node_position: A
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):    node_uuid: ca722d57-94b3-47ae-beda-384862718048

module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-06-19 11:23:48,872Z INFO MainThread cluster:1025 Cluster is on arch x86_64
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-06-19 11:23:48,872Z INFO MainThread genesis_utils.py:8077 Maximum node limit corresponding to the hypervisors on the cluster (set([u'kvm'])) : 32
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-06-19 11:23:48,874Z INFO MainThread genesis_rack_utils.py:50 Rack not configured on node (svm_ip: 192.168.102.188)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-06-19 11:23:48,878Z INFO MainThread genesis_rack_utils.py:50 Rack not configured on node (svm_ip: 192.168.103.42)
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Still creating... [10s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-06-19 11:23:51,703Z INFO MainThread cluster:1332 iptables configured on SVM 192.168.102.188
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-06-19 11:23:54,564Z INFO MainThread cluster:1332 iptables configured on SVM 192.168.103.42
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-06-19 11:23:54,566Z INFO MainThread cluster:1351 Creating certificates
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [8m20s elapsed]
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-06-19 11:23:58,028Z INFO MainThread cluster:1368 Setting the cluster functions on SVM node 192.168.102.188
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-06-19 11:23:58,028Z INFO MainThread cluster:1373 Configuring Zeus mapping ({u'192.168.102.188': 1, u'192.168.103.42': 2}) on SVM node 192.168.102.188
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Still creating... [20s elapsed]
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-06-19 11:23:59,491Z INFO MainThread cluster:1368 Setting the cluster functions on SVM node 192.168.103.42
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-06-19 11:23:59,493Z INFO MainThread cluster:1373 Configuring Zeus mapping ({u'192.168.102.188': 1, u'192.168.103.42': 2}) on SVM node 192.168.103.42
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-06-19 11:24:00,654Z INFO MainThread cluster:1396 Creating cluster with SVMs: 192.168.102.188,192.168.103.42
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-06-19 11:24:00,710Z INFO MainThread cluster:1407 Will seed prism with password hash $6$JhZv2vMb$IEqPeO66qBOrU5L2FWSRJWsmMGifby7Bme6MF.yASiDp/OAXnuHVqoaUL1cJVGsOkEplFqPSAZ2KI3bpHlhrS0
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [8m30s elapsed]
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Still creating... [30s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [8m40s elapsed]
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Still creating... [40s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [8m50s elapsed]
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Still creating... [50s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [9m0s elapsed]
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Still creating... [1m0s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [9m10s elapsed]
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Still creating... [1m10s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [9m20s elapsed]
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Still creating... [1m20s elapsed]
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-06-19 11:24:59,518Z INFO MainThread cluster:1425 Zeus is not ready yet, trying again in 5 seconds
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [9m30s elapsed]
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Still creating... [1m30s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [9m40s elapsed]
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Still creating... [1m40s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [9m50s elapsed]
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Still creating... [1m50s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [10m0s elapsed]
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Still creating... [2m0s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-06-19 11:25:45,796Z CRITICAL MainThread cluster:1430 Cluster initialization on 192.168.102.188 failed with ret: RPCError: Client transport error: httplib receive exception: Traceback (most recent call last):
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):   File "build/bdist.linux-x86_64/egg/util/net/http_rpc.py", line 178, in receive
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):   File "/usr/lib64/python2.7/httplib.py", line 1144, in getresponse
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):     response.begin()
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):   File "/usr/lib64/python2.7/httplib.py", line 457, in begin
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):     version, status, reason = self._read_status()
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):   File "/usr/lib64/python2.7/httplib.py", line 421, in _read_status
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):     raise BadStatusLine(line)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): BadStatusLine: ''

module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): Connection to 192.168.102.188 closed.
2024-06-19T16:55:46.226+0530 [DEBUG] remote command exited with '1': /root/finalize-cluster-210044106.sh
2024-06-19T16:55:46.226+0530 [WARN]  Errors while provisioning module.nutanix_cluster2.null_resource.finalize_cluster[0] with "remote-exec", so aborting
2024-06-19T16:55:46.243+0530 [ERROR] vertex "module.nutanix_cluster2.null_resource.finalize_cluster[0]" error: remote-exec provisioner error
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [10m10s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [10m20s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [10m30s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [10m40s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [10m50s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [11m0s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [11m10s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [11m20s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [11m30s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [11m40s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [11m50s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [12m0s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [12m10s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [12m20s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [12m30s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [12m40s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [12m50s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [13m0s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [13m10s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [13m20s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [13m30s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [13m40s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [13m50s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [14m0s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [14m10s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [14m20s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [14m30s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [14m40s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [14m50s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [15m0s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [15m10s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [15m20s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [15m30s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [15m40s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [15m50s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [16m0s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [16m10s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [16m20s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [16m30s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [16m40s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [16m50s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [17m0s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [17m10s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [17m20s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [17m30s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [17m40s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [17m50s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [18m0s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [18m10s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [18m20s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [18m30s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [18m40s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [18m50s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [19m0s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [19m10s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [19m20s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [19m30s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [19m40s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [19m50s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1...
module.nutanix_cluster1.null_resource.wait_for_dhcp: Still creating... [20m0s elapsed]
module.nutanix_cluster1.null_resource.wait_for_dhcp (remote-exec): Timeout reached waiting for at least 2 leases in /var/lib/misc/dnsmasq.leases, found 1.
2024-06-19T17:05:41.392+0530 [DEBUG] remote command exited with '1': /tmp/terraform_1726584016.sh
2024-06-19T17:05:41.392+0530 [WARN]  Errors while provisioning module.nutanix_cluster1.null_resource.wait_for_dhcp with "remote-exec", so aborting
2024-06-19T17:05:41.407+0530 [ERROR] vertex "module.nutanix_cluster1.null_resource.wait_for_dhcp" error: remote-exec provisioner error
╷
│ Error: remote-exec provisioner error
│
│   with module.nutanix_cluster1.null_resource.wait_for_dhcp,
│   on .terraform/modules/nutanix_cluster1/main.tf line 210, in resource "null_resource" "wait_for_dhcp":
│  210:   provisioner "remote-exec" {
│
│ error executing "/tmp/terraform_1726584016.sh": Process exited with status 1
╵
╷
│ Error: remote-exec provisioner error
│
│   with module.nutanix_cluster2.null_resource.finalize_cluster[0],
│   on .terraform/modules/nutanix_cluster2/main.tf line 238, in resource "null_resource" "finalize_cluster":
│  238:   provisioner "remote-exec" {
│
│ error executing "/root/finalize-cluster-210044106.sh": Process exited with status 1
╵
2024-06-19T17:05:41.432+0530 [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
2024-06-19T17:05:41.432+0530 [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
2024-06-19T17:05:41.435+0530 [DEBUG] provider: plugin process exited: path=.terraform/providers/registry.terraform.io/hashicorp/local/2.5.1/darwin_arm64/terraform-provider-local_v2.5.1_x5 pid=66690
2024-06-19T17:05:41.435+0530 [DEBUG] provider: plugin exited
2024-06-19T17:05:41.435+0530 [DEBUG] provider: plugin process exited: path=.terraform/providers/registry.terraform.io/hashicorp/null/3.2.2/darwin_arm64/terraform-provider-null_v3.2.2_x5 pid=66687
2024-06-19T17:05:41.435+0530 [DEBUG] provider: plugin exited
displague commented 3 months ago

@codinja1188 is the code that ran into this error available somewhere? Can you push it to #71?

I can't tell without seeing the code, but I would look out for overuse of var.cluster_subnet. If a VRF, defined as 192.168.96.0/21, is shared between two clusters, then the cluster_subnet sent to Cluster A would be 192.168.96.0/22 and cluster_subnet sent to Cluster B would be 192.168.100.0/22.

All subnet references within the module are derived from cluster_subnet.
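
A minimal sketch of that split, assuming a shared /21 and the example's two module instances (the locals, values, and source path here are illustrative, not the example's exact code):

locals {
  vrf_ip_range    = "192.168.96.0/21"                     # shared VRF range
  cluster1_subnet = cidrsubnet(local.vrf_ip_range, 1, 0)  # 192.168.96.0/22
  cluster2_subnet = cidrsubnet(local.vrf_ip_range, 1, 1)  # 192.168.100.0/22
}

module "nutanix_cluster1" {
  source         = "../.."                 # path to the root module (illustrative)
  cluster_subnet = local.cluster1_subnet
  # ...remaining inputs unchanged
}

module "nutanix_cluster2" {
  source         = "../.."
  cluster_subnet = local.cluster2_subnet
  # ...remaining inputs unchanged
}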

displague commented 3 months ago

The problem could be with both clusters living in the same VLAN. This would mean that DHCP services from both bastion nodes can offer competing address space. (whoops)

We'll have to use separate VLANs and Gateways per cluster. We can still share one VRF across the two VLANs and they will have the Layer 3 routing we need.

graph TD
    Internet[Internet 🌐]

    A[Common VRF: 192.168.96.0/21]

    subgraph ClusterA["Cluster A"]
        direction TB
        A1[VLAN A]
        A2[VRF IP Reservation A<br>192.168.96.0/22]
        A3[Gateway A]
        A4[Bastion A<br>&lt;DHCP, NTP, NAT&gt;]
        A5[Nutanix Nodes A]
    end

    subgraph ClusterB["Cluster B"]
        direction TB
        B1[VLAN B]
        B2[VRF IP Reservation B<br>192.168.100.0/22]
        B3[Gateway B]
        B4[Bastion B<br>&lt;DHCP, NTP, NAT&gt;]
        B5[Nutanix Nodes B]
    end

    A -->|192.168.96.0/22| A1
    A1 --> A2
    A2 --> A3
    A3 --> A4
    A4 --> A5
    A -->|192.168.100.0/22| B1
    B1 --> B2
    B2 --> B3
    B3 --> B4
    B4 --> B5

    Internet --> A4
    Internet --> B4
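
A minimal Terraform sketch of that layout, assuming a single project and metro; names and values are illustrative, and the resources shown (equinix_metal_vrf, equinix_metal_vlan, equinix_metal_reserved_ip_block, equinix_metal_gateway) are per the equinix provider at the time of writing:

locals {
  clusters = {
    a = "192.168.96.0/22"
    b = "192.168.100.0/22"
  }
}

resource "equinix_metal_vrf" "shared" {
  name       = "nutanix-shared-vrf"
  metro      = var.metro
  project_id = var.project_id
  ip_ranges  = ["192.168.96.0/21"]
}

# One VLAN per cluster so each bastion's DHCP stays isolated at Layer 2.
resource "equinix_metal_vlan" "cluster" {
  for_each    = local.clusters
  metro       = var.metro
  project_id  = var.project_id
  description = "nutanix-cluster-${each.key}"
}

# One /22 VRF IP reservation per cluster, carved from the shared /21.
resource "equinix_metal_reserved_ip_block" "cluster" {
  for_each   = local.clusters
  project_id = var.project_id
  metro      = var.metro
  type       = "vrf"
  vrf_id     = equinix_metal_vrf.shared.id
  network    = split("/", each.value)[0]
  cidr       = tonumber(split("/", each.value)[1])
}

# A Metal Gateway per VLAN gives each cluster the Layer 3 routing through the VRF.
resource "equinix_metal_gateway" "cluster" {
  for_each          = local.clusters
  project_id        = var.project_id
  vlan_id           = equinix_metal_vlan.cluster[each.key].id
  ip_reservation_id = equinix_metal_reserved_ip_block.cluster[each.key].id
}
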
codinja1188 commented 3 months ago

@displague ,

We'll have to use separate VLANs and Gateways per cluster. We can still share one VRF across the two VLANs and they will have the Layer 3 routing we need.

graph TD
    Internet[Internet 🌐]

    A[Common VRF: 192.168.96.0/21]

    subgraph ClusterA["Cluster A"]
        direction TB
        A1[VLAN A]
        A2[Gateway A]
        A3[IP Range: 192.168.96.0/22]
        A4[Bastion A<br>&lt;DHCP, NTP, NAT&gt;]
        A5[Nutanix Nodes A]
    end

    subgraph ClusterB["Cluster B"]
        direction TB
        B1[VLAN B]
        B2[Gateway B]
        B3[IP Range: 192.168.100.0/22]
        B4[Bastion B<br>&lt;DHCP, NTP, NAT&gt;]
        B5[Nutanix Nodes B]
    end

    A -->|192.168.96.0/22| A1
    A1 --> A2
    A2 --> A3
    A3 --> A4
    A4 --> A5
    A -->|192.168.100.0/22| B1
    B1 --> B2
    B2 --> B3
    B3 --> B4
    B4 --> B5

    Internet --> A4
    Internet --> B4

It looks like it's not working, even with a common VRF and separate VLANs for the clusters.

(screenshots attached)
codinja1188 commented 3 months ago

Here are more details to understand the issue

(screenshots attached)

Terraform Outputs:

nutanix_cluster1_bastion_public_ip = "145.40.91.141"
nutanix_cluster1_cvim_ip_address = "192.168.98.16"
nutanix_cluster1_iscsi_data_services_ip = "192.168.99.253"
nutanix_cluster1_prism_central_ip_address = "192.168.99.252"
nutanix_cluster1_ssh_forward_command = "ssh -L 9440:192.168.98.16:9440 -L 19440:192.168.99.252:9440 -i /Users/vasubabu/Equinix/terraform-equinix-metal-nutanix-cluster/examples/nutanix-clusters/ssh-key-s6shy root@145.40.91.141"
nutanix_cluster1_ssh_private_key = "/Users/vasubabu/Equinix/terraform-equinix-metal-nutanix-cluster/examples/nutanix-clusters/ssh-key-s6shy"
nutanix_cluster1_virtual_ip_address = "192.168.99.254"

nutanix_cluster2_bastion_public_ip = "145.40.91.33"
nutanix_cluster2_cvim_ip_address = "192.168.100.153"
nutanix_cluster2_iscsi_data_services_ip = "192.168.103.253"
nutanix_cluster2_prism_central_ip_address = "192.168.103.252"
nutanix_cluster2_ssh_forward_command = "ssh -L 9440:192.168.100.153:9440 -L 19440:192.168.103.252:9440 -i /Users/vasubabu/Equinix/terraform-equinix-metal-nutanix-cluster/examples/nutanix-clusters/ssh-key-xdwdz root@145.40.91.33"
nutanix_cluster2_ssh_private_key = "/Users/vasubabu/Equinix/terraform-equinix-metal-nutanix-cluster/examples/nutanix-clusters/ssh-key-xdwdz"
nutanix_cluster2_virtual_ip_address = "192.168.103.254"

Cluster1 Bastion network configuration:

Vasubabus-MacBook-Pro:nutanix-clusters vasubabu$ ssh -L 9440:192.168.98.16:9440 -L 19440:192.168.99.252:9440 -i /Users/vasubabu/Equinix/terraform-equinix-metal-nutanix-cluster/examples/nutanix-clusters/ssh-key-s6shy root@145.40.91.141
Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-112-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/pro

 System information as of Fri Jun 21 15:05:09 UTC 2024

  System load:  0.0                Processes:              238
  Usage of /:   1.0% of 436.68GB   Users logged in:        0
  Memory usage: 5%                 IPv4 address for bond0: 145.40.91.141
  Swap usage:   0%                 IPv6 address for bond0: 2604:1380:11:d00::3
  Temperature:  48.0 C

 * Strictly confined Kubernetes makes edge and IoT secure. Learn how MicroK8s
   just raised the bar for easy, resilient and secure K8s cluster deployment.

   https://ubuntu.com/engage/secure-kubernetes-at-the-edge

Expanded Security Maintenance for Applications is not enabled.

0 updates can be applied immediately.

1 additional security update can be applied with ESM Apps.
Learn more about enabling ESM Apps service at https://ubuntu.com/esm

Last login: Fri Jun 21 11:22:41 2024 from 49.43.235.210
root@bastion:~# ip route
default via 145.40.91.140 dev bond0 onlink
10.0.0.0/8 via 10.9.24.6 dev bond0
10.9.24.6/31 dev bond0 proto kernel scope link src 10.9.24.7
145.40.91.140/31 dev bond0 proto kernel scope link src 145.40.91.141
192.168.96.0/22 dev bond0.1001 proto kernel scope link src 192.168.96.2
192.168.100.0/22 via 192.168.96.2 dev bond0.1001
root@bastion:~# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 50:7c:6f:13:ac:66 brd ff:ff:ff:ff:ff:ff
3: enp1s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 50:7c:6f:13:ac:66 brd ff:ff:ff:ff:ff:ff permaddr 50:7c:6f:13:ac:67
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 50:7c:6f:13:ac:66 brd ff:ff:ff:ff:ff:ff
    inet 145.40.91.141/31 brd 255.255.255.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet 10.9.24.7/31 brd 255.255.255.255 scope global bond0:0
       valid_lft forever preferred_lft forever
    inet6 2604:1380:11:d00::3/127 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::527c:6fff:fe13:ac66/64 scope link
       valid_lft forever preferred_lft forever
7: bond0.1001@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 50:7c:6f:13:ac:66 brd ff:ff:ff:ff:ff:ff
    inet 192.168.96.2/22 brd 192.168.99.255 scope global bond0.1001
       valid_lft forever preferred_lft forever
    inet6 fe80::527c:6fff:fe13:ac66/64 scope link
       valid_lft forever preferred_lft forever

root@bastion:~# ssh root@192.168.96.5
Nutanix AHV
root@192.168.96.5's password:
Last login: Fri Jun 21 11:22:51 UTC 2024 from 192.168.96.2 on pts/1
Last login: Fri Jun 21 11:22:51 2024 from 192.168.96.2

Nutanix AHV is a cluster-optimized hypervisor appliance.

Alteration of the hypervisor appliance (unless advised by Nutanix
Technical Support) is unsupported and may result in the hypervisor or
VMs functioning incorrectly.

Unsupported alterations include (but are not limited to):

- Configuration changes.
- Installation of third-party software not approved by Nutanix.
- Installation or upgrade of software packages from non-Nutanix
  sources (using yum, rpm, or similar).

[root@NTNX-5WWG2N3-A ~]# ip route
default via 192.168.96.2 dev br0
169.254.0.0/16 dev br0 scope link metric 1006
192.168.5.0/24 dev virbr0 proto kernel scope link src 192.168.5.1
192.168.96.0/22 dev br0 proto kernel scope link src 192.168.96.5
[root@NTNX-5WWG2N3-A ~]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
    link/ether b4:96:91:dc:b1:66 brd ff:ff:ff:ff:ff:ff
3: idrac: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether d0:8e:79:ce:b6:83 brd ff:ff:ff:ff:ff:ff
4: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
    link/ether b4:96:91:dc:b1:67 brd ff:ff:ff:ff:ff:ff
5: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 92:eb:b0:a6:0d:5a brd ff:ff:ff:ff:ff:ff
6: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether b4:96:91:dc:b1:66 brd ff:ff:ff:ff:ff:ff
    inet 192.168.96.5/22 brd 192.168.99.255 scope global br0
       valid_lft forever preferred_lft forever
7: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 50:6b:8d:6e:b7:35 brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.1/24 brd 192.168.5.255 scope global virbr0
       valid_lft forever preferred_lft forever
8: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 50:6b:8d:6e:b7:35 brd ff:ff:ff:ff:ff:ff
9: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UNKNOWN group default qlen 1000
    link/ether fe:6b:8d:1e:88:dc brd ff:ff:ff:ff:ff:ff
10: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master virbr0 state UNKNOWN group default qlen 1000
    link/ether fe:6b:8d:28:b6:0d brd ff:ff:ff:ff:ff:ff
11: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UNKNOWN group default qlen 1000
    link/ether fe:6b:8d:75:42:0f brd ff:ff:ff:ff:ff:ff
12: br.microseg: <BROADCAST,MULTICAST> mtu 65000 qdisc noop state DOWN group default qlen 1000
    link/ether 0a:36:c5:09:2c:4b brd ff:ff:ff:ff:ff:ff
13: br.mx: <BROADCAST,MULTICAST> mtu 65000 qdisc noop state DOWN group default qlen 1000
    link/ether 36:80:1c:ea:c6:47 brd ff:ff:ff:ff:ff:ff
14: br.dmx: <BROADCAST,MULTICAST> mtu 65000 qdisc noop state DOWN group default qlen 1000
    link/ether 42:7d:9d:24:26:46 brd ff:ff:ff:ff:ff:ff
15: br.nf: <BROADCAST,MULTICAST> mtu 65000 qdisc noop state DOWN group default qlen 1000
    link/ether 5e:8c:a8:b0:0d:41 brd ff:ff:ff:ff:ff:ff
16: vxlan_sys_4789: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
    link/ether 42:87:bc:c8:9e:c1 brd ff:ff:ff:ff:ff:ff
17: br0.local: <BROADCAST,MULTICAST> mtu 65000 qdisc noop state DOWN group default qlen 1000
    link/ether 2e:fe:f5:87:f1:4d brd ff:ff:ff:ff:ff:ff
18: brSpan: <BROADCAST,MULTICAST> mtu 65000 qdisc noop state DOWN group default qlen 1000
    link/ether ba:80:44:10:14:47 brd ff:ff:ff:ff:ff:ff
19: tap0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc mq master ovs-system state UP group default qlen 1000
    link/ether fe:6b:8d:d4:e1:fc brd ff:ff:ff:ff:ff:ff

Cluster1 Nutanix AHV Host:

[root@NTNX-5WWG2N3-A ~]# ssh admin@192.168.98.16
FIPS mode initialized
Nutanix Controller VM
admin@192.168.98.16's password:
Last login: Fri Jun 21 11:39:52 UTC 2024 from 192.168.96.5 on pts/0
Last login: Fri Jun 21 15:06:06 2024 from 192.168.96.5

Nutanix Controller VM (CVM) is a virtual storage appliance.

Alteration of the CVM (unless advised by Nutanix Technical Support or
Support Portal Documentation) is unsupported and may result in loss
of User VMs or other data residing on the cluster.

Unsupported alterations may include (but are not limited to):

- Configuration changes / removal of files.
- Installation of third-party software/scripts not approved by Nutanix.
- Installation or upgrade of software packages from non-Nutanix
  sources (using yum, rpm, or similar).

** SSH to CVM via 'nutanix' user will be restricted in coming releases.  **
** Please consider using the 'admin' user for basic workflows.           **
admin@NTNX-5WWG2N3-A-CVM:192.168.98.16:~$  ip route
default via 192.168.96.2 dev eth0
192.168.5.0/25 dev eth1 proto kernel scope link src 192.168.5.2
192.168.5.0/24 dev eth1 proto kernel scope link src 192.168.5.254
192.168.96.0/22 dev eth0 proto kernel scope link src 192.168.98.16
192.168.100.0/22 via 192.168.96.2 dev eth0

Cluster1 Nutanix CVIM Node:

admin@NTNX-5WWG2N3-A-CVM:192.168.98.16:~$ ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 50:6b:8d:1e:88:dc brd ff:ff:ff:ff:ff:ff
    inet 192.168.98.16/22 brd 192.168.99.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 192.168.99.254/32 brd 192.168.99.255 scope global eth0:1
       valid_lft forever preferred_lft forever
    inet 192.168.99.253/32 brd 192.168.99.255 scope global eth0:2
       valid_lft forever preferred_lft forever
    inet6 fe80::526b:8dff:fe1e:88dc/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 50:6b:8d:28:b6:0d brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.2/25 brd 192.168.5.127 scope global eth1
       valid_lft forever preferred_lft forever
    inet 192.168.5.254/24 brd 192.168.5.255 scope global eth1:1
       valid_lft forever preferred_lft forever
    inet6 fe80::526b:8dff:fe28:b60d/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 50:6b:8d:75:42:0f brd ff:ff:ff:ff:ff:ff
admin@NTNX-5WWG2N3-A-CVM:192.168.98.16:~$

Cluster2 Bastion network configuration:

Vasubabus-MacBook-Pro:nutanix-clusters vasubabu$ ssh -L 9440:192.168.100.153:9440 -L 19440:192.168.103.252:9440 -i /Users/vasubabu/Equinix/terraform-equinix-metal-nutanix-cluster/examples/nutanix-clusters/ssh-key-xdwdz root@145.40.91.33
bind [::1]:9440: Address already in use
channel_setup_fwd_listener_tcpip: cannot listen to port: 9440
bind [::1]:19440: Address already in use
channel_setup_fwd_listener_tcpip: cannot listen to port: 19440
Could not request local forwarding.
Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-112-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/pro

 System information as of Fri Jun 21 15:11:51 UTC 2024

  System load:  0.0                Processes:              240
  Usage of /:   1.0% of 436.68GB   Users logged in:        0
  Memory usage: 5%                 IPv4 address for bond0: 145.40.91.33
  Swap usage:   0%                 IPv6 address for bond0: 2604:1380:11:d00::1
  Temperature:  50.0 C

 * Strictly confined Kubernetes makes edge and IoT secure. Learn how MicroK8s
   just raised the bar for easy, resilient and secure K8s cluster deployment.

   https://ubuntu.com/engage/secure-kubernetes-at-the-edge

Expanded Security Maintenance for Applications is not enabled.

0 updates can be applied immediately.

Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status

Last login: Fri Jun 21 11:40:16 2024 from 49.43.235.210
root@bastion:~# ip route
default via 145.40.91.32 dev bond0 onlink
10.0.0.0/8 via 10.9.24.0 dev bond0
10.9.24.0/31 dev bond0 proto kernel scope link src 10.9.24.1
145.40.91.32/31 dev bond0 proto kernel scope link src 145.40.91.33
192.168.96.0/22 via 192.168.100.2 dev bond0.1000
192.168.100.0/22 dev bond0.1000 proto kernel scope link src 192.168.100.2
root@bastion:~# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 50:7c:6f:13:cb:50 brd ff:ff:ff:ff:ff:ff
3: enp1s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 50:7c:6f:13:cb:50 brd ff:ff:ff:ff:ff:ff permaddr 50:7c:6f:13:cb:51
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 50:7c:6f:13:cb:50 brd ff:ff:ff:ff:ff:ff
    inet 145.40.91.33/31 brd 255.255.255.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet 10.9.24.1/31 brd 255.255.255.255 scope global bond0:0
       valid_lft forever preferred_lft forever
    inet6 2604:1380:11:d00::1/127 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::527c:6fff:fe13:cb50/64 scope link
       valid_lft forever preferred_lft forever
7: bond0.1000@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 50:7c:6f:13:cb:50 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.2/22 brd 192.168.103.255 scope global bond0.1000
       valid_lft forever preferred_lft forever
    inet6 fe80::527c:6fff:fe13:cb50/64 scope link
       valid_lft forever preferred_lft forever

Cluster2 Nutanix AHV Host:

root@bastion:~# ssh root@192.168.100.3
Nutanix AHV
root@192.168.100.3's password:
Last login: Fri Jun 21 11:40:20 UTC 2024 from 192.168.100.2 on pts/0
Last login: Fri Jun 21 11:40:20 2024 from 192.168.100.2

Nutanix AHV is a cluster-optimized hypervisor appliance.

Alteration of the hypervisor appliance (unless advised by Nutanix
Technical Support) is unsupported and may result in the hypervisor or
VMs functioning incorrectly.

Unsupported alterations include (but are not limited to):

- Configuration changes.
- Installation of third-party software not approved by Nutanix.
- Installation or upgrade of software packages from non-Nutanix
  sources (using yum, rpm, or similar).

[root@NTNX-3WWG2N3-A ~]# ip route
default via 192.168.100.2 dev br0
169.254.0.0/16 dev br0 scope link metric 1006
192.168.5.0/24 dev virbr0 proto kernel scope link src 192.168.5.1
[root@NTNX-3WWG2N3-A ~]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
    link/ether b4:96:91:dc:b0:f4 brd ff:ff:ff:ff:ff:ff
3: idrac: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether d0:8e:79:ce:ac:ab brd ff:ff:ff:ff:ff:ff
4: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
    link/ether b4:96:91:dc:b0:f5 brd ff:ff:ff:ff:ff:ff
5: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether c6:69:69:f2:fc:c8 brd ff:ff:ff:ff:ff:ff
6: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether b4:96:91:dc:b0:f4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.3/22 brd 192.168.103.255 scope global br0
       valid_lft forever preferred_lft forever
7: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 50:6b:8d:a0:a7:8c brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.1/24 brd 192.168.5.255 scope global virbr0
       valid_lft forever preferred_lft forever
8: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 50:6b:8d:a0:a7:8c brd ff:ff:ff:ff:ff:ff
9: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UNKNOWN group default qlen 1000
    link/ether fe:6b:8d:b1:79:57 brd ff:ff:ff:ff:ff:ff
10: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master virbr0 state UNKNOWN group default qlen 1000
    link/ether fe:6b:8d:29:db:16 brd ff:ff:ff:ff:ff:ff
11: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UNKNOWN group default qlen 1000
    link/ether fe:6b:8d:d5:fb:16 brd ff:ff:ff:ff:ff:ff
12: br.microseg: <BROADCAST,MULTICAST> mtu 65000 qdisc noop state DOWN group default qlen 1000
    link/ether 66:e2:df:29:a6:4b brd ff:ff:ff:ff:ff:ff
13: br.mx: <BROADCAST,MULTICAST> mtu 65000 qdisc noop state DOWN group default qlen 1000
    link/ether ea:f8:ab:86:fd:4f brd ff:ff:ff:ff:ff:ff
14: br.dmx: <BROADCAST,MULTICAST> mtu 65000 qdisc noop state DOWN group default qlen 1000
    link/ether 76:24:9f:51:d6:48 brd ff:ff:ff:ff:ff:ff
15: br.nf: <BROADCAST,MULTICAST> mtu 65000 qdisc noop state DOWN group default qlen 1000
    link/ether be:c7:d2:46:92:45 brd ff:ff:ff:ff:ff:ff
16: vxlan_sys_4789: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
    link/ether e6:38:78:ae:c9:c7 brd ff:ff:ff:ff:ff:ff
17: br0.local: <BROADCAST,MULTICAST> mtu 65000 qdisc noop state DOWN group default qlen 1000
    link/ether 9a:72:78:d5:22:4e brd ff:ff:ff:ff:ff:ff
18: brSpan: <BROADCAST,MULTICAST> mtu 65000 qdisc noop state DOWN group default qlen 1000
    link/ether aa:84:d8:7d:56:4a brd ff:ff:ff:ff:ff:ff

Cluster2 Nutanix CVIM Node:

[root@NTNX-3WWG2N3-A ~]# ssh admin@192.168.100.153
FIPS mode initialized
Nutanix Controller VM
admin@192.168.100.153's password:
Last login: Fri Jun 21 11:40:23 UTC 2024 from 192.168.100.3 on pts/0
Last login: Fri Jun 21 15:13:56 2024 from 192.168.100.3

Nutanix Controller VM (CVM) is a virtual storage appliance.

Alteration of the CVM (unless advised by Nutanix Technical Support or
Support Portal Documentation) is unsupported and may result in loss
of User VMs or other data residing on the cluster.

Unsupported alterations may include (but are not limited to):

- Configuration changes / removal of files.
- Installation of third-party software/scripts not approved by Nutanix.
- Installation or upgrade of software packages from non-Nutanix
  sources (using yum, rpm, or similar).

** SSH to CVM via 'nutanix' user will be restricted in coming releases.  **
** Please consider using the 'admin' user for basic workflows.           **
admin@NTNX-3WWG2N3-A-CVM:192.168.100.153:~$ ip route
default via 192.168.100.2 dev eth0
192.168.5.0/25 dev eth1 proto kernel scope link src 192.168.5.2
192.168.5.0/24 dev eth1 proto kernel scope link src 192.168.5.254
192.168.96.0/22 via 192.168.100.2 dev eth0
192.168.100.0/22 dev eth0 scope link
admin@NTNX-3WWG2N3-A-CVM:192.168.100.153:~$ ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 50:6b:8d:b1:79:57 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.153/22 brd 192.168.103.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 192.168.103.254/32 brd 192.168.103.255 scope global eth0:1
       valid_lft forever preferred_lft forever
    inet 192.168.103.253/32 brd 192.168.103.255 scope global eth0:2
       valid_lft forever preferred_lft forever
    inet6 fe80::526b:8dff:feb1:7957/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 50:6b:8d:29:db:16 brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.2/25 brd 192.168.5.127 scope global eth1
       valid_lft forever preferred_lft forever
    inet 192.168.5.254/24 brd 192.168.5.255 scope global eth1:1
       valid_lft forever preferred_lft forever
    inet6 fe80::526b:8dff:fe29:db16/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 50:6b:8d:d5:fb:16 brd ff:ff:ff:ff:ff:ff
codinja1188 commented 3 months ago

Here is the PR to create the common VRF between the clusters: https://github.com/equinix-labs/terraform-equinix-metal-nutanix-cluster/pull/79

displague commented 3 months ago

The problem we discussed is that the 192.168.96.0/21 network needs to be known in both Cluster A and Cluster B, specifically so that the netmask is /21 in both clusters. For example, the bastion nodes should be 192.168.96.2/21 and 192.168.100.2/21. The DHCP advertisements should use a gateway address of either .96.1 or .100.1 (either will work); ideally we use the one specific to each cluster's /22 range. The DHCP range for each cluster should be limited to addresses within that cluster's /22, but we need to be careful that the advertised subnet mask stays /21.
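
A minimal sketch of how those dnsmasq values could be derived per cluster (Terraform locals shown for Cluster B; variable names are illustrative, the functions are standard Terraform built-ins):

locals {
  vrf_range      = "192.168.96.0/21"                   # shared across both clusters
  cluster_subnet = "192.168.100.0/22"                  # this cluster's slice (Cluster B)

  dhcp_netmask = cidrnetmask(local.vrf_range)          # 255.255.248.0 -> mask stays /21
  dhcp_router  = cidrhost(local.cluster_subnet, 1)     # 192.168.100.1, this cluster's gateway
  dhcp_first   = cidrhost(local.cluster_subnet, 3)     # 192.168.100.3
  dhcp_last    = cidrhost(local.cluster_subnet, 1019)  # 192.168.103.251, keeps leases inside the /22
}

# Expected dnsmasq options on the Cluster B bastion (simplified):
#   dhcp-option=option:netmask,255.255.248.0
#   dhcp-option=option:router,192.168.100.1
#   dhcp-range=192.168.100.3,192.168.103.251,infinite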

codinja1188 commented 3 months ago

@displague ,

As you suggested, I used a common gateway (192.168.96.1) for both Cluster A and Cluster B.

Cluster A (Bastion)

root@bastion:~# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0f0np0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 10:70:fd:2c:a7:90 brd ff:ff:ff:ff:ff:ff
3: enp1s0f1np1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 10:70:fd:2c:a7:90 brd ff:ff:ff:ff:ff:ff permaddr 10:70:fd:2c:a7:91
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 10:70:fd:2c:a7:90 brd ff:ff:ff:ff:ff:ff
    inet 145.40.91.141/31 brd 255.255.255.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet 10.9.24.3/31 brd 255.255.255.255 scope global bond0:0
       valid_lft forever preferred_lft forever
    inet6 2604:1380:11:d00::3/127 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::1270:fdff:fe2c:a790/64 scope link
       valid_lft forever preferred_lft forever
5: bond0.1001@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 10:70:fd:2c:a7:90 brd ff:ff:ff:ff:ff:ff
    inet 192.168.96.2/21 brd 192.168.103.255 scope global bond0.1001
       valid_lft forever preferred_lft forever
    inet6 fe80::1270:fdff:fe2c:a790/64 scope link
       valid_lft forever preferred_lft forever
root@bastion:~# ip route
default via 145.40.91.140 dev bond0 onlink
10.0.0.0/8 via 10.9.24.2 dev bond0
10.9.24.2/31 dev bond0 proto kernel scope link src 10.9.24.3
145.40.91.140/31 dev bond0 proto kernel scope link src 145.40.91.141
192.168.96.0/21 dev bond0.1001 proto kernel scope link src 192.168.96.2
root@bastion:~# cat /etc/dnsmasq.d/nutanix.config
bind-interfaces
interface=bond0.1001
dhcp-range=192.168.96.3,192.168.96.15,infinite
dhcp-mac=set:nutanix,50:6b:8d:*:*:*
dhcp-range=tag:nutanix,192.168.96.16,192.168.99.251,infinite
dhcp-option=option:netmask,255.255.248.0
dhcp-option=option:router,192.168.96.1
root@bastion:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto enp1s0f0np0
iface enp1s0f0np0 inet manual
    bond-master bond0

auto enp1s0f1np1
iface enp1s0f1np1 inet manual
    pre-up sleep 4
    bond-master bond0

auto bond0
iface bond0 inet static
    address 145.40.91.141
    netmask 255.255.255.254
    gateway 145.40.91.140
    hwaddress 10:70:fd:2c:a7:90
    dns-nameservers 147.75.207.207 147.75.207.208

    bond-downdelay 200
    bond-miimon 100
    bond-mode 4
    bond-updelay 200
    bond-xmit_hash_policy layer3+4
    bond-lacp-rate 1
    bond-slaves enp1s0f0np0 enp1s0f1np1

iface bond0 inet6 static
    address 2604:1380:11:d00::3
    netmask 127
    gateway 2604:1380:11:d00::2

auto bond0:0
iface bond0:0 inet static
    address 10.9.24.3
    netmask 255.255.255.254
    post-up route add -net 10.0.0.0/8 gw 10.9.24.2
    post-down route del -net 10.0.0.0/8 gw 10.9.24.2

auto bond0.1001
iface bond0.1001 inet static
    pre-up sleep 5
    address 192.168.96.2
    netmask 255.255.248.0
    gateway 192.168.96.1
    vlan-raw-device bond0

Cluster A (Nutanix AHV Host)

[root@NTNX-3WWG2N3-A ~]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
    link/ether b4:96:91:dc:b0:f4 brd ff:ff:ff:ff:ff:ff
3: idrac: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether d0:8e:79:ce:ac:ab brd ff:ff:ff:ff:ff:ff
4: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
    link/ether b4:96:91:dc:b0:f5 brd ff:ff:ff:ff:ff:ff
5: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 3e:d4:47:e0:ae:ca brd ff:ff:ff:ff:ff:ff
6: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether b4:96:91:dc:b0:f4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.96.3/21 brd 192.168.103.255 scope global br0
       valid_lft forever preferred_lft forever
7: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 50:6b:8d:fe:78:1d brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.1/24 brd 192.168.5.255 scope global virbr0
       valid_lft forever preferred_lft forever
8: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 50:6b:8d:fe:78:1d brd ff:ff:ff:ff:ff:ff
9: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UNKNOWN group default qlen 1000
    link/ether fe:6b:8d:f5:00:3c brd ff:ff:ff:ff:ff:ff
10: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master virbr0 state UNKNOWN group default qlen 1000
    link/ether fe:6b:8d:2d:2c:f6 brd ff:ff:ff:ff:ff:ff
11: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UNKNOWN group default qlen 1000
    link/ether fe:6b:8d:24:47:05 brd ff:ff:ff:ff:ff:ff
12: br.microseg: <BROADCAST,MULTICAST> mtu 65000 qdisc noop state DOWN group default qlen 1000
    link/ether 9a:11:e3:c7:fb:40 brd ff:ff:ff:ff:ff:ff
13: br.mx: <BROADCAST,MULTICAST> mtu 65000 qdisc noop state DOWN group default qlen 1000
    link/ether 46:00:13:f9:6d:46 brd ff:ff:ff:ff:ff:ff
14: br.dmx: <BROADCAST,MULTICAST> mtu 65000 qdisc noop state DOWN group default qlen 1000
    link/ether 86:26:45:20:1e:47 brd ff:ff:ff:ff:ff:ff
15: br.nf: <BROADCAST,MULTICAST> mtu 65000 qdisc noop state DOWN group default qlen 1000
    link/ether 2e:42:e4:25:97:41 brd ff:ff:ff:ff:ff:ff
16: vxlan_sys_4789: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
    link/ether 2a:df:cb:33:48:77 brd ff:ff:ff:ff:ff:ff
17: br0.local: <BROADCAST,MULTICAST> mtu 65000 qdisc noop state DOWN group default qlen 1000
    link/ether 56:39:96:18:5d:49 brd ff:ff:ff:ff:ff:ff
18: brSpan: <BROADCAST,MULTICAST> mtu 65000 qdisc noop state DOWN group default qlen 1000
    link/ether 4a:b6:04:70:78:4d brd ff:ff:ff:ff:ff:ff
[root@NTNX-3WWG2N3-A ~]# ip route
default via 192.168.96.1 dev br0
169.254.0.0/16 dev br0 scope link metric 1006
192.168.5.0/24 dev virbr0 proto kernel scope link src 192.168.5.1
192.168.96.0/21 dev br0 proto kernel scope link src 192.168.96.3
[root@NTNX-3WWG2N3-A ~]# cat /etc/networks
default 0.0.0.0
loopback 127.0.0.0
link-local 169.254.0.0
[root@NTNX-3WWG2N3-A ~]# iptables -nvL
Chain INPUT (policy DROP 6 packets, 312 bytes)
 pkts bytes target     prot opt in     out     source               destination
57320   54M nutanix-CCLM-INPUT  all  --  *      *       0.0.0.0/0            0.0.0.0/0
57320   54M nutanix-INPUT  all  --  *      *       0.0.0.0/0            0.0.0.0/0
57320   54M nutanix-ovs-INPUT  all  --  *      *       0.0.0.0/0            0.0.0.0/0
56850   54M ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED
  282  330K ACCEPT     icmp --  *      *       0.0.0.0/0            0.0.0.0/0
   33  1980 ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0
  147  7676 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:22
    0     0 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:7030

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ACCEPT     all  --  *      *       192.168.100.0/22     192.168.96.0/21
    0     0 ACCEPT     all  --  *      *       192.168.96.0/21      192.168.100.0/22
    0     0 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited
    0     0 ACCEPT     all  --  *      *       192.168.96.0/21      192.168.100.0/22
    0     0 ACCEPT     all  --  *      *       192.168.100.0/22     192.168.96.0/21

Chain OUTPUT (policy ACCEPT 22443 packets, 8872K bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain nutanix-CCLM-INPUT (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain nutanix-INPUT (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ACCEPT     tcp  --  *      *       192.168.96.0/21      0.0.0.0/0            tcp dpts:49152:49215
    0     0 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:6653
    0     0 ACCEPT     udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp dpt:6081
    0     0 ACCEPT     47   --  *      *       0.0.0.0/0            0.0.0.0/0

Chain nutanix-ovs-INPUT (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ACCEPT     udp  --  *      *       192.168.5.2          0.0.0.0/0            udp dpt:4789
    0     0 ACCEPT     udp  --  *      *       192.168.97.179       0.0.0.0/0            udp dpt:4789

Cluster A (CVM Host)

admin@NTNX-3WWG2N3-A-CVM:192.168.97.179:~$ ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 50:6b:8d:f5:00:3c brd ff:ff:ff:ff:ff:ff
    inet 192.168.97.179/21 brd 192.168.103.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::526b:8dff:fef5:3c/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 50:6b:8d:2d:2c:f6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.2/25 brd 192.168.5.127 scope global eth1
       valid_lft forever preferred_lft forever
    inet 192.168.5.254/24 brd 192.168.5.255 scope global eth1:1
       valid_lft forever preferred_lft forever
    inet6 fe80::526b:8dff:fe2d:2cf6/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 50:6b:8d:24:47:05 brd ff:ff:ff:ff:ff:ff
admin@NTNX-3WWG2N3-A-CVM:192.168.97.179:~$ ip route
default via 192.168.96.1 dev eth0
192.168.5.0/25 dev eth1 proto kernel scope link src 192.168.5.2
192.168.5.0/24 dev eth1 proto kernel scope link src 192.168.5.254
192.168.96.0/21 dev eth0 proto kernel scope link src 192.168.97.179

Observations on Cluster A:

root@bastion:~# ping 192.168.96.1                     <---- Bastion
PING 192.168.96.1 (192.168.96.1) 56(84) bytes of data.
64 bytes from 192.168.96.1: icmp_seq=1 ttl=64 time=0.310 ms
64 bytes from 192.168.96.1: icmp_seq=2 ttl=64 time=0.205 ms
^C
--- 192.168.96.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1009ms
rtt min/avg/max/mdev = 0.205/0.257/0.310/0.052 ms
root@bastion:~#
root@bastion:~#
root@bastion:~# ssh root@192.168.96.3
Nutanix AHV
root@192.168.96.3's password:

[root@NTNX-3WWG2N3-A ~]# ping 192.168.96.1              <-------- Nutanix Host
PING 192.168.96.1 (192.168.96.1) 56(84) bytes of data.
64 bytes from 192.168.96.1: icmp_seq=1 ttl=64 time=0.376 ms
64 bytes from 192.168.96.1: icmp_seq=2 ttl=64 time=0.146 ms
^C
--- 192.168.96.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.146/0.261/0.376/0.115 ms
[root@NTNX-3WWG2N3-A ~]#
[root@NTNX-3WWG2N3-A ~]#
[root@NTNX-3WWG2N3-A ~]#
[root@NTNX-3WWG2N3-A ~]# ssh admin@192.168.97.179          <------- CVM Host Controller
admin@NTNX-3WWG2N3-A-CVM:192.168.97.179:~$ ping 192.168.96.1
PING 192.168.96.1 (192.168.96.1) 56(84) bytes of data.
64 bytes from 192.168.96.1: icmp_seq=1 ttl=64 time=0.187 ms
64 bytes from 192.168.96.1: icmp_seq=2 ttl=64 time=0.208 ms
64 bytes from 192.168.96.1: icmp_seq=3 ttl=64 time=0.266 ms
^C
--- 192.168.96.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.187/0.220/0.266/0.035 ms
admin@NTNX-3WWG2N3-A-CVM:192.168.97.179:~$

Cluster B (Bastion)

root@bastion:~# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0f0np0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 50:7c:6f:13:e9:5a brd ff:ff:ff:ff:ff:ff
    altname enp1s0f0
3: enp1s0f1np1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 50:7c:6f:13:e9:5a brd ff:ff:ff:ff:ff:ff permaddr 50:7c:6f:13:e9:5b
    altname enp1s0f1
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 50:7c:6f:13:e9:5a brd ff:ff:ff:ff:ff:ff
    inet 145.40.91.33/31 brd 255.255.255.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet 10.9.24.7/31 brd 255.255.255.255 scope global bond0:0
       valid_lft forever preferred_lft forever
    inet6 2604:1380:11:d00::1/127 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::527c:6fff:fe13:e95a/64 scope link
       valid_lft forever preferred_lft forever
5: bond0.1000@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 50:7c:6f:13:e9:5a brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.2/21 brd 192.168.103.255 scope global bond0.1000
       valid_lft forever preferred_lft forever
    inet6 fe80::527c:6fff:fe13:e95a/64 scope link
       valid_lft forever preferred_lft forever
root@bastion:~# ip route
default via 145.40.91.32 dev bond0 onlink
10.0.0.0/8 via 10.9.24.6 dev bond0
10.9.24.6/31 dev bond0 proto kernel scope link src 10.9.24.7
145.40.91.32/31 dev bond0 proto kernel scope link src 145.40.91.33
192.168.96.0/21 dev bond0.1000 proto kernel scope link src 192.168.100.2
root@bastion:~# cat /etc/dnsmasq.d/nutanix.config
bind-interfaces
interface=bond0.1000
dhcp-range=192.168.100.3,192.168.100.15,infinite
dhcp-mac=set:nutanix,50:6b:8d:*:*:*
dhcp-range=tag:nutanix,192.168.100.16,192.168.103.251,infinite
dhcp-option=option:netmask,255.255.248.0
dhcp-option=option:router,192.168.96.1

Cluster B (Nutanix Host)

[root@NTNX-FVWG2N3-A ~]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
    link/ether b4:96:91:dc:b0:32 brd ff:ff:ff:ff:ff:ff
3: idrac: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether d0:8e:79:ce:e2:03 brd ff:ff:ff:ff:ff:ff
4: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
    link/ether b4:96:91:dc:b0:33 brd ff:ff:ff:ff:ff:ff
5: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0a:ad:0c:1b:c6:ae brd ff:ff:ff:ff:ff:ff
6: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether b4:96:91:dc:b0:32 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.4/21 brd 192.168.103.255 scope global br0
       valid_lft forever preferred_lft forever
7: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 50:6b:8d:4f:50:ff brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.1/24 brd 192.168.5.255 scope global virbr0
       valid_lft forever preferred_lft forever
8: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 50:6b:8d:4f:50:ff brd ff:ff:ff:ff:ff:ff
9: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UNKNOWN group default qlen 1000
    link/ether fe:6b:8d:5a:bf:d7 brd ff:ff:ff:ff:ff:ff
10: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master virbr0 state UNKNOWN group default qlen 1000
    link/ether fe:6b:8d:c4:e3:95 brd ff:ff:ff:ff:ff:ff
11: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UNKNOWN group default qlen 1000
    link/ether fe:6b:8d:10:bd:9f brd ff:ff:ff:ff:ff:ff
12: br.microseg: <BROADCAST,MULTICAST> mtu 65000 qdisc noop state DOWN group default qlen 1000
    link/ether ee:2f:46:a0:83:41 brd ff:ff:ff:ff:ff:ff
13: br.mx: <BROADCAST,MULTICAST> mtu 65000 qdisc noop state DOWN group default qlen 1000
    link/ether da:81:9a:ee:aa:45 brd ff:ff:ff:ff:ff:ff
14: br.dmx: <BROADCAST,MULTICAST> mtu 65000 qdisc noop state DOWN group default qlen 1000
    link/ether 1e:82:6a:7e:3e:47 brd ff:ff:ff:ff:ff:ff
15: br.nf: <BROADCAST,MULTICAST> mtu 65000 qdisc noop state DOWN group default qlen 1000
    link/ether 5a:1c:5b:a9:09:4c brd ff:ff:ff:ff:ff:ff
16: vxlan_sys_4789: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
    link/ether 02:07:01:ab:96:dd brd ff:ff:ff:ff:ff:ff
17: br0.local: <BROADCAST,MULTICAST> mtu 65000 qdisc noop state DOWN group default qlen 1000
    link/ether 46:77:6d:0d:5f:4e brd ff:ff:ff:ff:ff:ff
18: brSpan: <BROADCAST,MULTICAST> mtu 65000 qdisc noop state DOWN group default qlen 1000
    link/ether ee:7a:65:eb:ed:4d brd ff:ff:ff:ff:ff:ff
[root@NTNX-FVWG2N3-A ~]# ip route
default via 192.168.96.1 dev br0
169.254.0.0/16 dev br0 scope link metric 1006
192.168.5.0/24 dev virbr0 proto kernel scope link src 192.168.5.1
192.168.96.0/21 dev br0 proto kernel scope link src 192.168.100.4
[root@NTNX-FVWG2N3-A ~]# iptables -nvL
Chain INPUT (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
59245   55M nutanix-CCLM-INPUT  all  --  *      *       0.0.0.0/0            0.0.0.0/0
59245   55M nutanix-INPUT  all  --  *      *       0.0.0.0/0            0.0.0.0/0
59245   55M nutanix-ovs-INPUT  all  --  *      *       0.0.0.0/0            0.0.0.0/0
58745   55M ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED
  305  358K ACCEPT     icmp --  *      *       0.0.0.0/0            0.0.0.0/0
   37  2220 ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0
  150  7816 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:22
    0     0 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:7030

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ACCEPT     all  --  *      *       192.168.100.0/22     192.168.96.0/21
    0     0 ACCEPT     all  --  *      *       192.168.96.0/21      192.168.100.0/22
    0     0 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT 607 packets, 270K bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain nutanix-CCLM-INPUT (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain nutanix-INPUT (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ACCEPT     tcp  --  *      *       192.168.96.0/21      0.0.0.0/0            tcp dpts:49152:49215
    0     0 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:6653
    0     0 ACCEPT     udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp dpt:6081
    0     0 ACCEPT     47   --  *      *       0.0.0.0/0            0.0.0.0/0

Chain nutanix-ovs-INPUT (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ACCEPT     udp  --  *      *       192.168.5.2          0.0.0.0/0            udp dpt:4789
    0     0 ACCEPT     udp  --  *      *       192.168.102.20       0.0.0.0/0            udp dpt:4789

Cluster B (CVM Host)

admin@NTNX-FVWG2N3-A-CVM:192.168.102.20:~$ ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 50:6b:8d:5a:bf:d7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.102.20/21 brd 192.168.103.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::526b:8dff:fe5a:bfd7/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 50:6b:8d:c4:e3:95 brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.2/25 brd 192.168.5.127 scope global eth1
       valid_lft forever preferred_lft forever
    inet 192.168.5.254/24 brd 192.168.5.255 scope global eth1:1
       valid_lft forever preferred_lft forever
    inet6 fe80::526b:8dff:fec4:e395/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 50:6b:8d:10:bd:9f brd ff:ff:ff:ff:ff:ff
admin@NTNX-FVWG2N3-A-CVM:192.168.102.20:~$ ip route
default via 192.168.96.1 dev eth0
192.168.5.0/25 dev eth1 proto kernel scope link src 192.168.5.2
192.168.5.0/24 dev eth1 proto kernel scope link src 192.168.5.254
192.168.96.0/21 dev eth0 proto kernel scope link src 192.168.102.20

Unfortunately, the common gateway IP (192.168.96.1) is not pingable from Cluster B.

Can you help me here?

codinja1188 commented 3 months ago

@displague ,

Here are the changes I applied:

https://github.com/equinix-labs/terraform-equinix-metal-nutanix-cluster/pull/82

displague commented 2 months ago

My run-through (terraform apply) of this example, from a Mac, results in:

module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 14:35:22,724Z INFO MainThread cluster:1425 Zeus is not ready yet, trying again in 5 seconds
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Still creating... [1m30s elapsed]
module.nutanix_cluster1.null_resource.finalize_cluster[0]: Still creating... [1m0s elapsed]
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Still creating... [1m40s elapsed]
module.nutanix_cluster1.null_resource.finalize_cluster[0]: Still creating... [1m10s elapsed]
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Still creating... [1m50s elapsed]
module.nutanix_cluster1.null_resource.finalize_cluster[0]: Still creating... [1m20s elapsed]
module.nutanix_cluster1.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 14:36:24,285Z INFO MainThread cluster:1425 Zeus is not ready yet, trying again in 5 seconds
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Still creating... [2m0s elapsed]
module.nutanix_cluster1.null_resource.finalize_cluster[0]: Still creating... [1m30s elapsed]
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Still creating... [2m10s elapsed]
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 14:36:09,286Z CRITICAL MainThread cluster:1430 Cluster initialization on 192.168.103.237 failed with ret: RPCError: Client transport error: httplib receive exception: Traceback (most recent call last):
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):   File "build/bdist.linux-x86_64/egg/util/net/http_rpc.py", line 178, in receive
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):   File "/usr/lib64/python2.7/httplib.py", line 1144, in getresponse
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):     response.begin()
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):   File "/usr/lib64/python2.7/httplib.py", line 457, in begin
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):     version, status, reason = self._read_status()
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):   File "/usr/lib64/python2.7/httplib.py", line 421, in _read_status
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):     raise BadStatusLine(line)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): BadStatusLine: ''

module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): Connection to 192.168.103.237 closed.
module.nutanix_cluster1.null_resource.finalize_cluster[0]: Still creating... [1m40s elapsed]
module.nutanix_cluster1.null_resource.finalize_cluster[0]: Still creating... [1m50s elapsed]
module.nutanix_cluster1.null_resource.finalize_cluster[0]: Still creating... [2m0s elapsed]
module.nutanix_cluster1.null_resource.finalize_cluster[0]: Still creating... [2m10s elapsed]
module.nutanix_cluster1.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 14:37:10,979Z CRITICAL MainThread cluster:1430 Cluster initialization on 192.168.98.15 failed with ret: RPCError: Client transport error: httplib receive exception: Traceback (most recent call last):
module.nutanix_cluster1.null_resource.finalize_cluster[0] (remote-exec):   File "build/bdist.linux-x86_64/egg/util/net/http_rpc.py", line 178, in receive
module.nutanix_cluster1.null_resource.finalize_cluster[0] (remote-exec):   File "/usr/lib64/python2.7/httplib.py", line 1144, in getresponse
module.nutanix_cluster1.null_resource.finalize_cluster[0] (remote-exec):     response.begin()
module.nutanix_cluster1.null_resource.finalize_cluster[0] (remote-exec):   File "/usr/lib64/python2.7/httplib.py", line 457, in begin
module.nutanix_cluster1.null_resource.finalize_cluster[0] (remote-exec):     version, status, reason = self._read_status()
module.nutanix_cluster1.null_resource.finalize_cluster[0] (remote-exec):   File "/usr/lib64/python2.7/httplib.py", line 421, in _read_status
module.nutanix_cluster1.null_resource.finalize_cluster[0] (remote-exec):     raise BadStatusLine(line)
module.nutanix_cluster1.null_resource.finalize_cluster[0] (remote-exec): BadStatusLine: ''

module.nutanix_cluster1.null_resource.finalize_cluster[0] (remote-exec): Connection to 192.168.98.15 closed.

On a subsequent terraform apply:

module.nutanix_cluster1.null_resource.finalize_cluster[0]: Destroying... [id=7557239657341802849]
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Destroying... [id=8674855970862894995]
module.nutanix_cluster1.null_resource.finalize_cluster[0]: Destruction complete after 0s
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Destruction complete after 0s
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Creating...
module.nutanix_cluster1.null_resource.finalize_cluster[0]: Creating...
module.nutanix_cluster1.null_resource.finalize_cluster[0]: Provisioning with 'file'...
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Provisioning with 'file'...
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Provisioning with 'remote-exec'...
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): Connecting to remote host via SSH...
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):   Host: 145.40.91.109
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):   User: root
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):   Password: false
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):   Private key: true
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):   Certificate: false
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):   SSH Agent: true
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):   Checking Host Key: false
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec):   Target Platform: unix
module.nutanix_cluster1.null_resource.finalize_cluster[0]: Provisioning with 'remote-exec'...
module.nutanix_cluster1.null_resource.finalize_cluster[0] (remote-exec): Connecting to remote host via SSH...
module.nutanix_cluster1.null_resource.finalize_cluster[0] (remote-exec):   Host: 145.40.91.33
module.nutanix_cluster1.null_resource.finalize_cluster[0] (remote-exec):   User: root
module.nutanix_cluster1.null_resource.finalize_cluster[0] (remote-exec):   Password: false
module.nutanix_cluster1.null_resource.finalize_cluster[0] (remote-exec):   Private key: true
module.nutanix_cluster1.null_resource.finalize_cluster[0] (remote-exec):   Certificate: false
module.nutanix_cluster1.null_resource.finalize_cluster[0] (remote-exec):   SSH Agent: true
module.nutanix_cluster1.null_resource.finalize_cluster[0] (remote-exec):   Checking Host Key: false
module.nutanix_cluster1.null_resource.finalize_cluster[0] (remote-exec):   Target Platform: unix
module.nutanix_cluster1.null_resource.finalize_cluster[0] (remote-exec): Connected!
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): Connected!
module.nutanix_cluster1.null_resource.finalize_cluster[0] (remote-exec): Nutanix Controller VM
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): Nutanix Controller VM
module.nutanix_cluster1.null_resource.finalize_cluster[0]: Still creating... [10s elapsed]
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Still creating... [10s elapsed]
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:24,370Z INFO MainThread cluster:2943 Executing action create on SVMs 192.168.103.237,192.168.103.117
module.nutanix_cluster1.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:53,352Z INFO MainThread cluster:2943 Executing action create on SVMs 192.168.98.15,192.168.98.108
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:24,377Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:25,379Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:26,382Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:27,385Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:28,394Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:29,396Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:30,399Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster1.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:59,418Z CRITICAL MainThread cluster:1001 Could not discover all nodes specified. Please make sure that the SVMs from which you wish to create the cluster are not already part of another cluster. Undiscovered ips : 192.168.98.15,192.168.98.108
module.nutanix_cluster1.null_resource.finalize_cluster[0] (remote-exec): Connection to 192.168.98.15 closed.
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:31,402Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:32,405Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Still creating... [20s elapsed]
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:33,415Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:34,418Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:35,421Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:36,465Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:37,470Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:38,478Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:39,486Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:40,495Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:41,503Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:42,513Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Still creating... [30s elapsed]
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:43,522Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:44,530Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:45,538Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:46,547Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:47,555Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:48,564Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:49,572Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:50,581Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:51,590Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:52,597Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Still creating... [40s elapsed]
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:53,604Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:54,613Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:58:55,620Z WARNING MainThread genesis_utils.py:1580 Failed to reach a node where Genesis is up. Ensure Genesis is running on all CVMs. Retrying...(Hit Ctrl-C to abort)
module.nutanix_cluster2.null_resource.finalize_cluster[0]: Still creating... [50s elapsed]
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): 2024-08-13 20:59:02,678Z CRITICAL MainThread cluster:1001 Could not discover all nodes specified. Please make sure that the SVMs from which you wish to create the cluster are not already part of another cluster. Undiscovered ips : 192.168.103.237,192.168.103.117
module.nutanix_cluster2.null_resource.finalize_cluster[0] (remote-exec): Connection to 192.168.103.237 closed.
╷
│ Error: remote-exec provisioner error
│ 
│   with module.nutanix_cluster1.null_resource.finalize_cluster[0],
│   on .terraform/modules/nutanix_cluster1/main.tf line 247, in resource "null_resource" "finalize_cluster":
│  247:   provisioner "remote-exec" {
│ 
│ error executing "/root/finalize-cluster-1603568147.sh": Process exited with status 1
╵
╷
│ Error: remote-exec provisioner error
│ 
│   with module.nutanix_cluster2.null_resource.finalize_cluster[0],
│   on .terraform/modules/nutanix_cluster2/main.tf line 247, in resource "null_resource" "finalize_cluster":
│  247:   provisioner "remote-exec" {
│ 
│ error executing "/root/finalize-cluster-2144986410.sh": Process exited with status 1
displague commented 2 months ago

Per previous conversations with @codinja1188, this may be caused by setting `nutanix_node_count` to an even value; the example currently defaults to 2. I'll try this again with 3, and if that succeeds, I'll try with 1. If 1 succeeds, I'll commit that as the new default. A sketch of how to override the count is below.
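For anyone testing this before the default changes, a minimal sketch of the override, assuming the example exposes a `nutanix_node_count` variable with that exact name (the variable name and file layout are assumptions, not confirmed against the example):

```hcl
# terraform.tfvars (hypothetical excerpt for the nutanix-clusters example)
# Use an odd node count per cluster; odd counts (1 or 3) avoid the
# even-node quorum issue suspected above.
nutanix_node_count = 3
```

The same value can be passed on the command line instead, e.g. `terraform apply -var="nutanix_node_count=3"`, which avoids editing the example's files while testing.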