Closed by s-areal 1 month ago
I tried deploying a VM and Taiga on two networks and it worked fine.
I was using version 1.9.3-rc1. Maybe you could provide more info on your version, networks, logs, and the script you are trying to use.
I'm using 1.9.2; I didn't know there was a 1.9.3-rc1. Anyway, I upgraded to 1.9.3-rc1 and the problem is exactly the same.
Version 1.9.3-rc1 with add_wg_access = true:

Plan: 2 to add, 0 to change, 0 to destroy.

Changes to Outputs:

╷
│ Error: could not generate deployments data: failed to get node 1 endpoint: could not list node interfaces: context deadline exceeded
╵

Version 1.9.3-rc1 with add_wg_access = false:

Plan: 2 to add, 0 to change, 0 to destroy.

Changes to Outputs:

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
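For reference, this is roughly the config I'm applying (a simplified sketch; the node ID, names, and SSH key are placeholders, and the schema follows the provider's published examples):

```hcl
terraform {
  required_providers {
    grid = {
      source = "threefoldtech/grid"
    }
  }
}

provider "grid" {}

resource "grid_network" "net1" {
  nodes         = [2]
  ip_range      = "10.1.0.0/16"
  name          = "testnet"
  description   = "test network"
  add_wg_access = true # the only change between the two runs above
}

resource "grid_deployment" "d1" {
  node         = 2
  network_name = grid_network.net1.name
  vms {
    name       = "vm1"
    flist      = "https://hub.grid.tf/tf-official-apps/base.flist"
    cpu        = 1
    memory     = 1024
    entrypoint = "/sbin/zinit init"
    env_vars = {
      SSH_KEY = "<ssh public key>"
    }
  }
}
```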
The error appears to come from node 1, not from WireGuard. You could specify a different node ID in both the network and the VM and try again, as in the sketch below.
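For example (illustrative sketch; node 7 stands for whichever other node you pick, and both resources must point at it):

```hcl
resource "grid_network" "net1" {
  nodes         = [7] # a node other than the failing one
  ip_range      = "10.1.0.0/16"
  name          = "testnet"
  description   = "test network"
  add_wg_access = true
}

resource "grid_deployment" "d1" {
  node         = 7 # same node ID as in the network above
  network_name = grid_network.net1.name
  vms {
    name   = "vm1"
    flist  = "https://hub.grid.tf/tf-official-apps/base.flist"
    cpu    = 1
    memory = 1024
  }
}
```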
I checked node 1, and it does have a problem responding over RMB.
It turns out the issue is caused by some nodes appearing as "Up" while they actually don't respond to RMB calls. So, with wireguard access = true, the deployer looks for a public node that is up and tries to include it in the deployment (that's why node 1 showed up in your case even though you didn't choose it). We added some extra checks to make sure the nodes chosen as access nodes are actually up and respond to RMB calls. The fix should be on its way in the upcoming mainnet release.
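Until that lands, one possible workaround (this is an assumption on my side about the selection logic, not guaranteed behavior) is to include a public node you have already verified over RMB directly in the network's node list, so the deployer doesn't have to pick one on its own:

```hcl
resource "grid_network" "net1" {
  # Node 14 stands for a public node you've confirmed answers RMB calls;
  # node 2 is the workload node. Both IDs are illustrative.
  nodes         = [2, 14]
  ip_range      = "10.1.0.0/16"
  name          = "testnet"
  description   = "network with a verified access node"
  add_wg_access = true
}
```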
Verified using version 1.9.3-rc2. The fix will be available in the next release, 1.9.3.
Tried the multinode example (https://github.com/threefoldtech/terraform-provider-grid/tree/development/examples/resources/multinode). It uses WireGuard and it worked fine.
I tried many times to create VMs using Terraform, and they all failed. Tested on different nodes, different configurations, different days... My conclusion is that with wireguard access = true it will not work!