OpenNebula / terraform-provider-opennebula

Terraform provider for OpenNebula
https://www.terraform.io/docs/providers/opennebula/
Mozilla Public License 2.0

Data opennebula virtual network address range #550

Closed: shurkys closed this 1 week ago

shurkys commented 1 month ago


Description

Refactor: Virtual network address range data source to enhance readability and maintainability.

References

Close #545


treywelsh commented 1 month ago

Before reviewing in depth, I wanted to discuss the consistency of the data sources and their names a bit; I think it's important to keep things consistent across the provider when possible. Your data source returns either a single address range or several of them, depending on whether address_range_id is provided, so it may be better to rename it opennebula_virtual_network_address_ranges or to introduce two data sources.

For a bit more context: until now, the provider has split data sources according to how many elements they return. I don't claim it's the best way, but that's how we do it for now. For instance, there are both opennebula_template and opennebula_templates data sources, depending on whether we want to return a single template or several. If opennebula_template matches zero or several templates, it returns an error. opennebula_templates lets us filter a set of templates and sort them based on an attribute value (we may want to sort on the registration date, for example). Here's the issue introducing data_opennebula_templates: #322. And here's the issue introducing opennebula_virtual_machines (the related PR is not merged yet): #536
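To illustrate the convention, here is a rough sketch of the singular vs. plural usage; the exact argument names (name_regex, sort_on, order) are assumptions here and should be checked against the provider documentation:

```hcl
# Singular form: must match exactly one template, otherwise the data source errors.
data "opennebula_template" "web" {
  name = "web-server"
}

# Plural form: filters a set of templates and sorts them on an attribute.
# name_regex, sort_on and order are illustrative argument names.
data "opennebula_templates" "recent" {
  name_regex = "^web-.*"
  sort_on    = "register_date"
  order      = "DESC"
}
```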

Feel free to drop a comment to discuss.

shurkys commented 1 month ago

Now, in opennebula_virtual_network_address_range, if you specify the address_range_id, it will return only one range. Here's an example using HCL:

data "opennebula_virtual_network_address_range" "Controlplane" {
  virtual_network_id = data.opennebula_virtual_network.Controlplane.id
  address_range_id   = 0
}

And here's the corresponding output:

{
  "address_range_id" = tostring(0)
  "address_ranges" = tolist([
    {
      "ar_type" = "IP4"
      "custom" = tomap({})
      "global_prefix" = ""
      "held_ips" = toset([
        "192.168.1.101",
        "192.168.1.102",
      ])
      "ip4" = "192.168.1.101"
      "mac" = "02:00:c0:a8:01:65"
      "size" = 3
    },
  ])
  "id" = "83"
  "virtual_network_id" = 83
}

However, if you don't specify it, the data source will return the entire list of address ranges.
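For example, a minimal sketch of the same lookup with address_range_id omitted, based on the behaviour described above:

```hcl
# Without address_range_id, address_ranges lists every AR of the network.
data "opennebula_virtual_network_address_range" "Controlplane_all" {
  virtual_network_id = data.opennebula_virtual_network.Controlplane.id
}
```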

The split into opennebula_virtual_network_address_range and opennebula_virtual_network_address_ranges looks logical if we follow an approach like AWS's ec2_public_ipv4_pool and ec2_public_ipv4_pools: opennebula_virtual_network_address_ranges returns the list of all the IDs, which we then feed into opennebula_virtual_network_address_range to get the details of each one. But if the only difference between the two names is whether one or several ranges are returned, then that seems strange to me.

shurkys commented 1 month ago

Now, when using the opennebula_virtual_network_address_range data source, you must provide the address_range_id parameter, which specifies the ID of the address range you want information about.

Additionally, I've introduced a new data source opennebula_virtual_network_address_ranges that retrieves information about all address ranges associated with a virtual network. This data source does not require the address_range_id parameter and provides minimal information for all address ranges at once.

I've also updated the documentation to reflect these changes.
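A rough sketch of how the two data sources could be chained, in the spirit of the AWS pattern mentioned above; the attribute shape exposing the AR IDs from the plural data source (address_ranges[*].id) is an assumption here and should be checked against the updated docs:

```hcl
# List minimal info for every address range of the network.
data "opennebula_virtual_network_address_ranges" "all" {
  virtual_network_id = data.opennebula_virtual_network.Controlplane.id
}

# Fetch full details for each address range individually.
# The address_ranges[*].id shape is assumed for illustration.
data "opennebula_virtual_network_address_range" "detail" {
  for_each = toset([
    for ar in data.opennebula_virtual_network_address_ranges.all.address_ranges : tostring(ar.id)
  ])

  virtual_network_id = data.opennebula_virtual_network.Controlplane.id
  address_range_id   = tonumber(each.value)
}
```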

shurkys commented 1 month ago

By the way, could you take a quick look at https://github.com/OpenNebula/terraform-provider-opennebula/pull/532 when you get a chance? It's closed now, but it might still be useful to review.

github-actions[bot] commented 2 weeks ago

This pull request is stale because it has been open for 30 days with no activity and it is not in a milestone. Remove 'status: stale' label or comment, or this will be closed in 5 days.