Open RavinderReddyF5 opened 4 years ago
Comment by dannyk81 Tuesday Nov 06, 2018 at 21:50 GMT
Can you share your terraform configuration file?
Comment by theseao Wednesday Nov 07, 2018 at 17:51 GMT
Thanks @dannyk81 We're using multiple files, but here's a condensed version of relevant code:
Comment by dannyk81 Thursday Nov 08, 2018 at 17:02 GMT
@theseao Thanks for sharing your configuration. I believe the problem is in how the node name is declared in the `pool_attachment` resource; specifically, you seem to be missing the partition name. The node name must be a fullPath and include the partition name.

So your `pool_members_list` and `pool_members_map` should include the nodes' fullPath names (including the partition name).
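As a hypothetical illustration of the fullPath requirement (the variable and resource names below are invented for the example, not taken from the reporter's configuration):

```hcl
# Hypothetical sketch: each member value carries the /<partition>/ prefix.
variable "pool_members_list" {
  # "web01:80" alone would fail; the fullPath form is required.
  default = ["/Common/web01:80", "/Common/web02:80"]
}

resource "bigip_ltm_pool_attachment" "attach_node" {
  count = "${length(var.pool_members_list)}"
  pool  = "/Common/my_pool"
  node  = "${element(var.pool_members_list, count.index)}"
}
```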
Alternatively, I would consider a simpler approach: instead of relying on a single "master" list of all the nodes, I would define separate `node`, `pool` and `pool_attachment` resources for each pool.

This way you can use the `node` resource's `name` attribute (which is already a fullPath) as an input for the `pool_attachment` resource; this also creates an automatic dependency between the resources.
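The per-pool layout suggested above might look like this (a hedged sketch with invented names and addresses; the `name` reference supplies the fullPath and the implicit dependency):

```hcl
# Hypothetical per-pool layout: the node resource's name attribute is
# already a fullPath, so it can feed pool_attachment directly.
resource "bigip_ltm_node" "web01" {
  name    = "/Common/web01"
  address = "10.0.0.11"
}

resource "bigip_ltm_pool" "my_pool" {
  name = "/Common/my_pool"
}

resource "bigip_ltm_pool_attachment" "my_pool_web01" {
  pool = "${bigip_ltm_pool.my_pool.name}"
  node = "${bigip_ltm_node.web01.name}:80"
}
```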
@scshitole, looking at the code I believe we can avoid these issues if we add a `ValidateFunc` to the `node` attribute (similar to the one we have for the `pool` attribute) to ensure that it is correctly defined.
Looks like another user just hit the same issue: https://github.com/f5devcentral/terraform-provider-bigip/issues/128.
Comment by scshitole Thursday Nov 08, 2018 at 17:10 GMT
@dannyk81 I see we already have a `ValidateFunc` for the node resource: https://github.com/terraform-providers/terraform-provider-bigip/blob/5f2c641f27f09723bf2e2f1449bc12bb5928ee96/bigip/resource_bigip_ltm_node.go#L29
Comment by dannyk81 Thursday Nov 08, 2018 at 17:22 GMT
@scshitole you are looking at the wrong resource: that link points at the `node` resource, while the issue is in the `pool_attachment` resource.

We need a similar `ValidateFunc` on the `node` attribute of the `pool_attachment` resource, here:

https://github.com/terraform-providers/terraform-provider-bigip/blob/5f2c641f27f09723bf2e2f1449bc12bb5928ee96/bigip/resource_bigip_ltm_pool_attachment.go#L29-L34

The validation function must ensure the `node` attribute conforms to: `/<partition>/<node-name>:<port>`
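A minimal sketch of what such a validator could look like (this is an assumption for illustration, not the provider's actual code; the function name and the exact character class are invented, and the real provider would wire this in through the `helper/schema` `ValidateFunc` field):

```go
package main

import (
	"fmt"
	"regexp"
)

// validateF5NamePort is a hypothetical validator enforcing the
// /<partition>/<node-name>:<port> form that the pool_attachment
// resource's node attribute requires.
func validateF5NamePort(value interface{}, field string) (ws []string, errs []error) {
	re := regexp.MustCompile(`^/[\w_\-.]+/[\w_\-.]+:\d+$`)
	if !re.MatchString(value.(string)) {
		errs = append(errs, fmt.Errorf(
			"%q must match /Partition/Name:Port, e.g. /Common/my-node:443", field))
	}
	return
}

func main() {
	_, errs := validateF5NamePort("/Common/my-node:443", "node")
	fmt.Println(len(errs)) // 0: valid fullPath with port

	_, errs = validateF5NamePort("my-node:443", "node")
	fmt.Println(len(errs)) // 1: missing partition prefix
}
```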
Comment by scshitole Thursday Nov 08, 2018 at 18:41 GMT
@dannyk81 I added it, however my tests are failing:

```
=== RUN   TestAccBigipLtmPool_create
--- FAIL: TestAccBigipLtmPool_create (0.01s)
	testing.go:538: Step 0 error: config is invalid: bigip_ltm_pool_attachment.test-pooltest-node: "node" must match /Partition/Name and contain letters, numbers or [.-]. e.g. /Common/my-pool
=== RUN   TestAccBigipLtmPool_import
```
I do see the test file has:

```go
var TEST_POOL_NAME = fmt.Sprintf("/%s/test-pool", TEST_PARTITION)
var TEST_POOLNODE_NAME = fmt.Sprintf("/%s/test-node", TEST_PARTITION)
var TEST_POOLNODE_NAMEPORT = fmt.Sprintf("%s:443", TEST_POOLNODE_NAME)
```

and the test configuration:

```
resource "bigip_ltm_pool_attachment" "test-pool_test-node" {
	pool = "` + TEST_POOL_NAME + `"
	node = "` + TEST_POOLNODE_NAMEPORT + `"
	depends_on = ["bigip_ltm_node.test-node", "bigip_ltm_pool.test-pool"]
}
```
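The error message above suggests the copied regex is the node-style one, which has no `:<port>` suffix, so it rejects the member string the test builds. A small reproduction of that mismatch (both patterns are assumptions in the style of the provider's validators, not its exact code):

```go
package main

import (
	"fmt"
	"regexp"
)

var (
	// In the style of the node resource's validator: fullPath only, no port.
	nodeStyle = regexp.MustCompile(`^/[\w_\-.]+/[\w_\-.]+$`)
	// Extended for pool_attachment: fullPath plus a :<port> suffix.
	attachStyle = regexp.MustCompile(`^/[\w_\-.]+/[\w_\-.]+:\d+$`)
)

func main() {
	member := "/Common/test-node:443" // what TEST_POOLNODE_NAMEPORT expands to
	fmt.Println(nodeStyle.MatchString(member))   // false: ":443" is not accepted
	fmt.Println(attachStyle.MatchString(member)) // true
}
```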
Comment by dannyk81 Thursday Nov 08, 2018 at 18:45 GMT
@scshitole can you open a PR so I can review the code and see what's wrong? I can't really debug this from the above output.

But, unless I'm wrong, you just took the validation function from the `node` resource as is, and that's incorrect, since the `node` definition in the `pool_attachment` also includes the port number.

The regex in the validation function needs to accept the following: `/<partition>/<node-name>:<port>`

Please provide a PR so we can review this properly.
Comment by theseao Thursday Nov 08, 2018 at 18:54 GMT
@dannyk81 Thank you, that resolved the issue. In this particular environment, where we are only using the single "Common" partition, I simply prepended /Common/ to the node declaration rather than updating the pool member list and map values.

Thank you also for the alternate suggestions. We have been contemplating/evaluating options to simplify for operational sustainability, so that is valuable input.
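The workaround described above could be sketched as follows (hypothetical names; the partition is prefixed at the point of use instead of rewriting the member list and map values):

```hcl
# Hypothetical sketch of the workaround: prepend /Common/ in the
# attachment rather than in the source list/map.
resource "bigip_ltm_pool_attachment" "attach_node" {
  count = "${length(var.pool_members_list)}"
  pool  = "${element(var.pool_names, count.index)}"
  node  = "/Common/${element(var.pool_members_list, count.index)}"
}
```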
Comment by dannyk81 Thursday Nov 08, 2018 at 19:00 GMT
You're welcome @theseao! and thanks for the feedback, we'll include the validation function so that it will be clearer when the configuration is not declared as expected.
Issue by theseao Tuesday Nov 06, 2018 at 18:32 GMT Originally opened as https://github.com/terraform-providers/terraform-provider-bigip/issues/20
Hello, we are observing some inconsistent state behavior with the pool attachment resource.

Pools are successfully built and nodes attached.

However, any subsequent plan/apply with unchanged code results in attempts to re-attach nodes, even though they already exist.

We see similar behavior when attempting to remove a single node: apply attempts to rebuild the remaining nodes, errors out because they already exist, and fails to remove the node.
Abbreviated relevant example output below:
`$ terraform apply` <- Run Apply. Adds 12 objects, including 4 pool members.

```
Plan: 12 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

bigip_ltm_pool_attachment.attach_node[3]: Creation complete after 1s (ID: /Common/wtc_retpoc_redir_tdg_dev_pool-xwtcvd1retpoca02.corp.dom:80)
bigip_ltm_pool_attachment.attach_node[2]: Creation complete after 2s (ID: /Common/wtc_retpoc_tdg_dev_pool-xwtcvd1retpoca02.corp.dom:443)
bigip_ltm_pool_attachment.attach_node[1]: Creation complete after 2s (ID: /Common/wtc_retpoc_redir_tdg_dev_pool-xwtcvd1retpoca01.corp.dom:80)
bigip_ltm_pool_attachment.attach_node[0]: Creation complete after 2s (ID: /Common/wtc_retpoc_tdg_dev_pool-xwtcvd1retpoca01.corp.dom:443)

Apply complete! Resources: 12 added, 0 changed, 0 destroyed.
```
`$ terraform plan` <- Immediately run Plan - code unchanged. 4 pool members identified in state refresh. Yet it still wants to re-add them.

```
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

bigip_ltm_node.node[0]: Refreshing state... (ID: /Common/xwtcvd1retpoca01.corp.dom)
bigip_ltm_node.node[1]: Refreshing state... (ID: /Common/xwtcvd1retpoca02.corp.dom)
bigip_ltm_monitor.monitor[0]: Refreshing state... (ID: /Common/wtc_retpoc_tdg_dev_mon)
bigip_ltm_monitor.monitor[1]: Refreshing state... (ID: /Common/wtc_retpoc_redir_tdg_dev_mon)
bigip_ltm_pool.pool[0]: Refreshing state... (ID: /Common/wtc_retpoc_tdg_dev_pool)
bigip_ltm_pool.pool[1]: Refreshing state... (ID: /Common/wtc_retpoc_redir_tdg_dev_pool)
bigip_ltm_virtual_server.server[0]: Refreshing state... (ID: /Common/wtc_retpoc_tdg_dev_vs)
bigip_ltm_virtual_server.server[1]: Refreshing state... (ID: /Common/wtc_retpoc_redir_tdg_dev_vs)
bigip_ltm_pool_attachment.attach_node[2]: Refreshing state... (ID: /Common/wtc_retpoc_tdg_dev_pool-xwtcvd1retpoca02.corp.dom:443)
bigip_ltm_pool_attachment.attach_node[3]: Refreshing state... (ID: /Common/wtc_retpoc_redir_tdg_dev_pool-xwtcvd1retpoca02.corp.dom:80)
bigip_ltm_pool_attachment.attach_node[0]: Refreshing state... (ID: /Common/wtc_retpoc_tdg_dev_pool-xwtcvd1retpoca01.corp.dom:443)
bigip_ltm_pool_attachment.attach_node[1]: Refreshing state... (ID: /Common/wtc_retpoc_redir_tdg_dev_pool-xwtcvd1retpoca01.corp.dom:80)
```
```
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + bigip_ltm_pool_attachment.attach_node[0]
      id:   <computed>
      node: "xwtcvd1retpoca01.corp.dom:443"
      pool: "/Common/wtc_retpoc_tdg_dev_pool"

  + bigip_ltm_pool_attachment.attach_node[1]
      id:   <computed>
      node: "xwtcvd1retpoca01.corp.dom:80"
      pool: "/Common/wtc_retpoc_redir_tdg_dev_pool"

  + bigip_ltm_pool_attachment.attach_node[2]
      id:   <computed>
      node: "xwtcvd1retpoca02.corp.dom:443"
      pool: "/Common/wtc_retpoc_tdg_dev_pool"

  + bigip_ltm_pool_attachment.attach_node[3]
      id:   <computed>
      node: "xwtcvd1retpoca02.corp.dom:80"
      pool: "/Common/wtc_retpoc_redir_tdg_dev_pool"

Plan: 4 to add, 0 to change, 0 to destroy.
```