vmware / terraform-provider-nsxt

Terraform Provider for VMware NSX
https://registry.terraform.io/providers/vmware/nsxt/
Mozilla Public License 2.0
131 stars 85 forks

nsxt_edge_transport_node not usable for existing edge #1459

Open martinrohrbach opened 2 weeks ago

martinrohrbach commented 2 weeks ago

Describe the bug

We have a setup where we use edges from a compute manager that is not registered with the NSX instance that will use the edge. We also cannot register the compute manager, because it is already registered with a different NSX instance, and we cannot use multi-NSX due to vLCM integration.

It is absolutely possible to use an existing edge for a new transport node, the API docs (https://dp-downloads.broadcom.com/api-content/apis/API_NTDCRA_001/4.2.1/html/api_includes/method_CreateTransportNodeWithDeploymentInfo.html) have this to say:

The request should either provide node_deployment_info or node_id.

If the host node (hypervisor) or edge node (router) is already added in system then it can be converted to transport node by providing node_id in request.

However, I cannot specify a "node_id" using the nsxt_edge_transport_node resource afaics.

Would it be possible to add this property to the resource so we can either pass in a "deployment_config" OR a "node_id" (or "existing_deployment_id" or something) for the creation of the edge transport node?
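For illustration, something like this is what I have in mind. To be clear, "node_id" is a hypothetical attribute here that the provider does not currently expose, and the rest of the schema is only sketched:

```hcl
resource "nsxt_edge_transport_node" "existing_edge" {
  display_name = "edge-01"

  # Hypothetical attribute: internal NSX ID of the edge VM that is
  # already deployed and registered, used instead of a
  # deployment_config block that would deploy a new VM.
  node_id = "<existing-edge-node-id>"
}
```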

Reproduction steps

  1. Create and register an edge VM in NSX (outside terraform)
  2. Use nsxt_edge_transport_node without "deployment_config" to use the existing edge VM
  3. Fails with "General error has occurred"

Expected behavior

We can create an edge transport node from an existing edge VM.

Additional context

No response

annakhm commented 2 weeks ago

Hi @martinrohrbach, the doc you quoted also defines node_id as both Deprecated and ReadOnly. We normally don't expose deprecated attributes in the provider. Looks like there's a mismatch in NSX documentation.

martinrohrbach commented 2 weeks ago

Hi @annakhm you're right, I didn't notice that. When you say "normally" what does that mean for this issue?

I actually took the API call that the provider crafted, removed the "node_deployment_info" block, added the "node_id" and POSTed that to /api/v1/transport-nodes. That had exactly the effect that I wanted: the edge is now configured and usable, and it's also what the docs suggest according to the snippet I posted above.
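Roughly, the body I POSTed to /api/v1/transport-nodes looked like the sketch below. The ID is a placeholder, and the remaining fields (host_switch_spec etc.) were simply whatever the provider had generated, minus the node_deployment_info block:

```json
{
  "display_name": "edge-01",
  "node_id": "<existing-edge-node-id>",
  "host_switch_spec": {
    "resource_type": "StandardHostSwitchSpec",
    "host_switches": []
  }
}
```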

All in all I think the read-only and deprecated flags are wrong and that's where the mismatch comes from.

Is the mismatch something that you can address internally or would you like me to open a service request to get this checked / fixed?

annakhm commented 2 weeks ago

Thanks for checking this @martinrohrbach. We'll discuss this internally with the team.

ksamoray commented 23 hours ago

Hi Martin, I've tried to run the same workflow, doing the following:

LMK if this addresses your issue.

martinrohrbach commented 10 hours ago

@ksamoray Thanks for looking into this! Admittedly I've not tried that approach, because it involves a step "outside" the Terraform code, so to speak, by having to import the edge in a separate step.

Our idea was to create (and register with NSX, which can be done during deployment of the OVA) the edge VM in one Terraform repo, and then simply add the internal ID to the config of a second Terraform repo, which will configure the edge as described above. Sure, there is a similar step in looking up the internal ID and adding it to the config, but we don't have to mess with Terraform imports in between.
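A minimal sketch of what the second repo would look like, assuming the hypothetical "node_id" attribute I asked for above existed:

```hcl
# Internal NSX node ID of the edge VM, looked up from the first repo.
variable "edge_node_id" {
  description = "Internal NSX node ID of the pre-deployed, pre-registered edge VM"
  type        = string
}

resource "nsxt_edge_transport_node" "edge" {
  display_name = "edge-01"

  # Hypothetical attribute, not currently in the provider schema.
  node_id = var.edge_node_id
}
```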

Did you discuss what I described above, and do you think it is not an option for the provider? I'd still like to have that feature, and it does seem to be supported by the API as per the documentation (minus the deprecation flag that is there for some reason). If it is just a matter of time, then I'm happy to wait. If not, then I obviously don't have a choice and we'll go with your workaround.

Also, did you get any internal feedback on the read-only and deprecated flags (and the documentation excerpts I provided above)? I can still raise an SR in parallel if that helps.