RavinderReddyF5 / terraform-provider-bigip-version0.12

Terraform resources that can configure F5 BIGIP products
Mozilla Public License 2.0

Unable to retrieve route domain to properly display state for virtual server and nodes #123

Open RavinderReddyF5 opened 4 years ago

RavinderReddyF5 commented 4 years ago

Issue by swightkin Thursday Apr 18, 2019 at 17:55 GMT Originally opened as https://github.com/terraform-providers/terraform-provider-bigip/issues/92


This is my first issue, so I apologize if I've missed something.

For the function resourceBigipLtmNodeRead (in resource_bigip_ltm_node.go) there is logic to check for the route domain. However, the regex used does not properly capture the route domain information, so the state file only shows the IP address.

This causes future plans/applies for nodes or virtual servers with route domains to try to destroy and recreate, or update, the object. Any build process then reports a failure, since the run does not exit cleanly even though it does provision the objects as necessary.
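A minimal sketch (hypothetical names) of the drift being described: the node is declared with a route domain suffix, but after a refresh the state stores only the bare IP, so the next plan sees a change to address and forces replacement.

resource "bigip_ltm_node" "example" {
  # Hypothetical node used only to illustrate the mismatch
  name    = "/Common/example-node_10.1.2.3"
  address = "10.1.2.3%10"   # declared with route domain 10
}

# After refresh, the state holds address = "10.1.2.3", so the next plan reports:
#   address: "10.1.2.3" => "10.1.2.3%10" (forces new resource)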

RavinderReddyF5 commented 4 years ago

Comment by dannyk81 Thursday Apr 18, 2019 at 18:16 GMT


Hi @swightkin!

We are using Route Domains in all our deployments and the current code works just fine for us (in fact, I did a fix around this a while back to make it work properly: https://github.com/f5devcentral/terraform-provider-bigip/pull/85). The state indeed only stores the IP address without the Route Domain suffix (%xx), since these suffixes are internal to the F5 and are not expected when declaring objects via the API.
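For context, a minimal sketch (hypothetical names) of the declaration style described here: the address carries no %xx suffix, and the BIG-IP applies the partition's default route domain on its side.

resource "bigip_ltm_node" "example" {
  # Hypothetical node; no "%xx" suffix, relying on the partition's default route domain
  name    = "/Common/example-node_10.1.2.3"
  address = "10.1.2.3"
  monitor = "default"
}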

Can you share your configuration, the plan/apply outputs, steps to reproduce, and details about the environment (TMOS version)?

RavinderReddyF5 commented 4 years ago

Comment by swightkin Thursday Apr 18, 2019 at 18:57 GMT


Here is the configuration of it:

Create node

resource "bigip_ltm_node" "ptestweb01" { name = "/Common/ptestweb01.example.com_172.17.240.182" address = "172.17.240.182%20" connection_limit = "0" dynamic_ratio = "1" monitor = "default" rate_limit = "disabled"
}

Create the health monitor.

resource "bigip_ltm_monitor" "healthmon" { name = "/Common/test.com" parent = "/Common/http" destination = "*:80" send = "GET /HealthMonitor.aspx\r\n" receive = "200"
}

Create the wildcard pool and attach the node. Then do the same again for the 443 pool.

resource "bigip_ltm_pool" "wild-pool" { name = "/Common/l_test.com_0_pool" load_balancing_mode = "round-robin" allow_snat = "yes" allow_nat = "yes"
monitors = ["/Common/test.com"] depends_on = ["bigip_ltm_monitor.healthmon"] }

resource "bigip_ltm_pool_attachment" "attach-node" { pool = "/Common/l_test.com_0_pool" node = "/Common/ptestweb01.example.com_172.17.240.182:*" depends_on = ["bigip_ltm_pool.wild-pool", "bigip_ltm_node.ptestweb01"]
}

Create the two VIPs for 80 and 443.

resource "bigip_ltm_virtual_server" "vs-http" { name = "/Common/l_test.com_http_80" destination = "172.17.240.54%20"
port = "80"
pool = "$/Common/l_test.com_0_pool"

This and Profiles make this a "Standard" type VIP

ip_protocol = "tcp" 
profiles = ["/Common/tcp", "/Common/http"]
source_address_translation = "automap"
depends_on = ["bigip_ltm_pool.wild-pool"]   

}

resource "bigip_ltm_virtual_server" "vs-https" { name = "/Common/l_test.com_https_443" destination = "172.17.240.54%20"
port = "443"
pool = "${bigip_ltm_pool.wild-pool.name}"

This and Profiles make this a "Standard" type VIP

ip_protocol = "tcp" 
profiles = ["/Common/tcp", "/Common/http"]
source_address_translation = "automap"
depends_on = ["bigip_ltm_pool.wild-pool"]   

}

Here is the plan output.

-/+ bigip_ltm_node.test (new resource required)
      id:               "/Common/test.example.com_172.17.240.182" => (forces new resource)
      address:          "172.17.240.182" => "172.17.240.182%20" (forces new resource)
      connection_limit: "0" => "0"
      dynamic_ratio:    "1" => "1"
      monitor:          "default" => "default"
      name:             "/Common/test.example.com_172.17.240.182" => "/Common/test.example.com_172.17.240.182"
      rate_limit:       "disabled" => "disabled"
      state:            "user-up" => "user-up"

  ~ bigip_ltm_virtual_server.vs-http
      destination: "172.17.240.54" => "172.17.240.54%20"
      source:      "0.0.0.0/0" => "0.0.0.0%20/0"

  ~ bigip_ltm_virtual_server.vs-https
      destination: "172.17.240.54" => "172.17.240.54%20"
      source:      "0.0.0.0/0" => "0.0.0.0%20/0"

The apply output was:

The apply errored out because it tried to destroy the node while it was still assigned to the pool. I don't have that output right now, and since this is in production I can't reproduce it at the moment.

Appreciate your help!

RavinderReddyF5 commented 4 years ago

Comment by swightkin Thursday Apr 18, 2019 at 19:00 GMT


Sorry, one note: I replaced the names to remove specific information, so the configuration and plan may look mismatched where I mistyped the find/replace items. The plan information is really about the route domains; the object names all work as desired.

Thanks!

RavinderReddyF5 commented 4 years ago

Comment by dannyk81 Thursday Apr 18, 2019 at 19:10 GMT


Hmm, OK... so you should remove all the %20 suffixes from your configuration, since the F5 will add these automatically.

Also, your node attribute in the attachment resource looks a bit strange:

resource "bigip_ltm_pool_attachment" "attach-node" {
pool = "/Common/l_test.com_0_pool"
node = "/Common/ptestweb01.example.com_172.17.240.182:*"
depends_on = ["bigip_ltm_pool.wild-pool", "bigip_ltm_node.ptestweb01"]
}

If you want to use "Any service", it should be :0 (not :*), and you should reference the resource name attributes for pool and node so you don't need to declare the dependency explicitly, like so:

resource "bigip_ltm_pool_attachment" "attach-node" {
  pool = "${bigip_ltm_pool.wild-pool.name}"
  node = "${bigip_ltm_node.ptestweb01.name}:0"
}
RavinderReddyF5 commented 4 years ago

Comment by swightkin Thursday Apr 18, 2019 at 19:35 GMT


We use our F5 and route domains to split DMZ workloads from internal workloads. How would the F5 know which RD to use if we have RD 10 and RD 20? I will set up my lab to test this as well and see how it behaves, since our production environment is the only place we use route domains at this point.

You're correct on the attach part. That was a copy/paste error on my part; I forgot to pull the latest before copying. That has been corrected, as it was throwing errors on apply.

RavinderReddyF5 commented 4 years ago

Comment by dannyk81 Thursday Apr 18, 2019 at 19:39 GMT


Assuming that you have separate partitions for the workloads, each partition should be assigned a default Route Domain, and the F5 applies it to the relevant resources based on the partition they are assigned to.

RavinderReddyF5 commented 4 years ago

Comment by swightkin Thursday Apr 18, 2019 at 20:02 GMT


They are all tied to the same partition, "Common". If that's the case, is there an option to pull the RD when you read the address?


RavinderReddyF5 commented 4 years ago

Comment by dannyk81 Thursday Apr 18, 2019 at 20:17 GMT


Unfortunately, I don't see how that would be possible with the current code.

RavinderReddyF5 commented 4 years ago

Comment by dannyk81 Thursday Apr 18, 2019 at 20:21 GMT


I suppose it would be possible to add a new resource attribute (route domain) and store it there, but that would require some testing.
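A purely hypothetical sketch of what such an attribute might look like; route_domain is not an existing attribute of the provider and only illustrates the proposal above.

resource "bigip_ltm_node" "example" {
  name         = "/Common/example-node"
  address      = "10.1.2.3"
  route_domain = 20   # hypothetical attribute: RD stored separately from the address
}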

RavinderReddyF5 commented 4 years ago

Comment by swightkin Thursday Apr 18, 2019 at 21:01 GMT


Our current implementation uses route domains, but we're migrating away from them at this point. I think it would be a good idea to account for this in the future, though. How should this be submitted as a feature request?

Again, I greatly appreciate your help on this!

RavinderReddyF5 commented 4 years ago

Comment by dannyk81 Thursday Apr 18, 2019 at 21:08 GMT


RDs are supported when there's one RD per partition; assigning multiple RDs in the same partition is not supported.

I added the above since we needed it for our use case. I think your use case should indeed be supported, but I currently don't have any free cycles to work on this.

Maybe @scshitole can take a look.

RavinderReddyF5 commented 4 years ago

Comment by swightkin Thursday Apr 18, 2019 at 22:25 GMT


Thank you!

RavinderReddyF5 commented 4 years ago

Comment by scshitole Friday Apr 19, 2019 at 00:27 GMT


@dannyk81 @swightkin we need to create a new resource like bigip_net_routeDomain so we can add multiple RDs.

RavinderReddyF5 commented 4 years ago

Comment by dannyk81 Friday Apr 19, 2019 at 00:35 GMT


Yes, that would be needed as well, but it's not enough.

We need to add support for route domain semantics in all the resources that implement it: node, virtual server, routes, and self IPs,

as well as support both variants (with and without a route domain).

Currently the provider handles route domains across all these resources as long as there's only one route domain per partition (and it is the default route domain of that partition).

RavinderReddyF5 commented 4 years ago

Comment by scshitole Friday Apr 19, 2019 at 22:15 GMT


@swightkin I might have a fix for this soon... still working on my private branch.

SJC-ML-00028512:terraform-provider-bigip shitole$ terraform apply -auto-approve
bigip_net_route_domain.rd: Creating...
  name:             "" => "somerd"
  rd_id:            "" => "20"
  vlans.#:          "" => "1"
  vlans.1122961241: "" => "rdvlan"
bigip_ltm_node.pvpsweb01: Creating...
  address:          "" => "172.17.240.182%20"
  connection_limit: "" => "0"
  dynamic_ratio:    "" => "1"
  monitor:          "" => "default"
  name:             "" => "/Common/pvpsweb01.iaai.com_172.17.240.182"
  rate_limit:       "" => "disabled"
  state:            "" => "user-up"
bigip_net_route_domain.rd: Creation complete after 0s (ID: somerd)
bigip_ltm_node.pvpsweb01: Creation complete after 0s (ID: /Common/pvpsweb01.iaai.com_172.17.240.182)

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
RavinderReddyF5 commented 4 years ago

Comment by scshitole Friday Apr 19, 2019 at 23:03 GMT


@swightkin

 terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + bigip_ltm_node.pvpsweb01
      id:                           <computed>
      address:                      "172.17.240.182%20"
      connection_limit:             "0"
      dynamic_ratio:                "1"
      monitor:                      "default"
      name:                         "/Common/pvpsweb01.iaai.com_172.17.240.182"
      rate_limit:                   "disabled"
      state:                        "user-up"

  + bigip_ltm_pool.wild-pool
      id:                           <computed>
      allow_nat:                    "yes"
      allow_snat:                   "yes"
      load_balancing_mode:          "round-robin"
      name:                         "/Common/l_test.com_0_pool"
      reselect_tries:               "0"
      service_down_action:          "none"
      slow_ramp_time:               "10"

  + bigip_ltm_pool_attachment.attach-node
      id:                           <computed>
      node:                         "/Common/pvpsweb01.iaai.com_172.17.240.182:80"
      pool:                         "/Common/l_test.com_0_pool"

  + bigip_ltm_virtual_server.vs-http
      id:                           <computed>
      client_profiles.#:            <computed>
      destination:                  "172.17.240.54%20"
      fallback_persistence_profile: <computed>
      ip_protocol:                  "tcp"
      mask:                         "255.255.255.255"
      name:                         "/Common/l_test.com_http_80"
      persistence_profiles.#:       <computed>
      pool:                         "/Common/l_test.com_0_pool"
      port:                         "80"
      profiles.#:                   "2"
      profiles.1139270267:          "/Common/tcp"
      profiles.576276785:           "/Common/http"
      server_profiles.#:            <computed>
      snatpool:                     <computed>
      source:                       "0.0.0.0%20/0"
      source_address_translation:   "automap"
      translate_address:            <computed>
      translate_port:               <computed>
      vlans_enabled:                <computed>

  + bigip_net_route_domain.rd
      id:                           <computed>
      name:                         "somerd"
      rd_id:                        "20"
      vlans.#:                      "1"
      vlans.1122961241:             "rdvlan"

Plan: 5 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

SJC-ML-00028512:terraform-provider-bigip shitole$ terraform apply -auto-approve
bigip_ltm_pool.wild-pool: Creating...
  allow_nat:           "" => "yes"
  allow_snat:          "" => "yes"
  load_balancing_mode: "" => "round-robin"
  name:                "" => "/Common/l_test.com_0_pool"
  reselect_tries:      "" => "0"
  service_down_action: "" => "none"
  slow_ramp_time:      "" => "10"
bigip_net_route_domain.rd: Creating...
  name:             "" => "somerd"
  rd_id:            "" => "20"
  vlans.#:          "" => "1"
  vlans.1122961241: "" => "rdvlan"
bigip_net_route_domain.rd: Creation complete after 1s (ID: somerd)
bigip_ltm_node.pvpsweb01: Creating...
  address:          "" => "172.17.240.182%20"
  connection_limit: "" => "0"
  dynamic_ratio:    "" => "1"
  monitor:          "" => "default"
  name:             "" => "/Common/pvpsweb01.iaai.com_172.17.240.182"
  rate_limit:       "" => "disabled"
  state:            "" => "user-up"
bigip_ltm_node.pvpsweb01: Creation complete after 0s (ID: /Common/pvpsweb01.iaai.com_172.17.240.182)
bigip_ltm_pool.wild-pool: Creation complete after 1s (ID: /Common/l_test.com_0_pool)
bigip_ltm_pool_attachment.attach-node: Creating...
  node: "" => "/Common/pvpsweb01.iaai.com_172.17.240.182:80"
  pool: "" => "/Common/l_test.com_0_pool"
bigip_ltm_virtual_server.vs-http: Creating...
  client_profiles.#:            "" => "<computed>"
  destination:                  "" => "172.17.240.54%20"
  fallback_persistence_profile: "" => "<computed>"
  ip_protocol:                  "" => "tcp"
  mask:                         "" => "255.255.255.255"
  name:                         "" => "/Common/l_test.com_http_80"
  persistence_profiles.#:       "" => "<computed>"
  pool:                         "" => "/Common/l_test.com_0_pool"
  port:                         "" => "80"
  profiles.#:                   "0" => "2"
  profiles.1139270267:          "" => "/Common/tcp"
  profiles.576276785:           "" => "/Common/http"
  server_profiles.#:            "" => "<computed>"
  snatpool:                     "" => "<computed>"
  source:                       "" => "0.0.0.0%20/0"
  source_address_translation:   "" => "automap"
  translate_address:            "" => "<computed>"
  translate_port:               "" => "<computed>"
  vlans_enabled:                "" => "<computed>"
bigip_ltm_pool_attachment.attach-node: Creation complete after 0s (ID: /Common/l_test.com_0_pool-/Common/pvpsweb01.iaai.com_172.17.240.182:80)
bigip_ltm_virtual_server.vs-http: Creation complete after 1s (ID: /Common/l_test.com_http_80)

Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
SJC-ML-00028512:terraform-provider-bigip shitole$ 
SJC-ML-00028512:terraform-provider-bigip shitole$ cat master.tf 
provider "bigip" {
  address = "x.x.x.x"
  username = "admin"
  password = "admin"
}

 resource "bigip_net_route_domain" "rd"
{

 name = "somerd"
 rd_id =  20
 vlans = ["rdvlan"]

} 

resource "bigip_ltm_node" "pvpsweb01" {
  name = "/Common/pvpsweb01.iaai.com_172.17.240.182"
  address = "172.17.240.182%20"
  connection_limit = "0"
  dynamic_ratio = "1"
  monitor = "default"
  rate_limit = "disabled" 
  depends_on = ["bigip_net_route_domain.rd"]
}
resource "bigip_ltm_pool" "wild-pool" {
name = "/Common/l_test.com_0_pool"
load_balancing_mode = "round-robin"
allow_snat = "yes"
allow_nat = "yes"
}

resource "bigip_ltm_pool_attachment" "attach-node" {
pool = "/Common/l_test.com_0_pool"
node = "/Common/pvpsweb01.iaai.com_172.17.240.182:80"
depends_on = [ "bigip_ltm_pool.wild-pool", "bigip_ltm_node.pvpsweb01"]
}

resource "bigip_ltm_virtual_server" "vs-http" {
name = "/Common/l_test.com_http_80"
destination = "172.17.240.54%20"
source = "0.0.0.0%20/0"
port = "80"
pool = "/Common/l_test.com_0_pool"
#This and Profiles make this a "Standard" type VIP
ip_protocol = "tcp"
profiles = ["/Common/tcp", "/Common/http"]
source_address_translation = "automap"
depends_on = ["bigip_ltm_pool.wild-pool"]
depends_on = ["bigip_net_route_domain.rd"]
}
SJC-ML-00028512:terraform-provider-bigip shitole$ 

terraform show
bigip_ltm_node.pvpsweb01:
  id = /Common/pvpsweb01.iaai.com_172.17.240.182
  address = 172.17.240.182
  connection_limit = 0
  dynamic_ratio = 1
  monitor = default
  name = /Common/pvpsweb01.iaai.com_172.17.240.182
  rate_limit = disabled
  state = user-up
bigip_ltm_pool.wild-pool:
  id = /Common/l_test.com_0_pool
  allow_nat = yes
  allow_snat = yes
  load_balancing_mode = round-robin
  monitors.# = 1
  monitors.0 = 
  name = /Common/l_test.com_0_pool
  reselect_tries = 0
  service_down_action = none
  slow_ramp_time = 10
bigip_ltm_pool_attachment.attach-node:
  id = /Common/l_test.com_0_pool-/Common/pvpsweb01.iaai.com_172.17.240.182:80
  node = /Common/pvpsweb01.iaai.com_172.17.240.182:80
  pool = /Common/l_test.com_0_pool
bigip_ltm_virtual_server.vs-http:
  id = /Common/l_test.com_http_80
  client_profiles.# = 0
  destination = 172.17.240.54
  fallback_persistence_profile = 
  ip_protocol = tcp
  irules.# = 0
  mask = 255.255.255.255
  name = /Common/l_test.com_http_80
  persistence_profiles.# = 0
  policies.# = 0
  pool = /Common/l_test.com_0_pool
  port = 80
  profiles.# = 2
  profiles.1139270267 = /Common/tcp
  profiles.576276785 = /Common/http
  server_profiles.# = 0
  snatpool = 
  source = 0.0.0.0/0
  source_address_translation = automap
  translate_address = enabled
  translate_port = enabled
  vlans.# = 0
  vlans_enabled = false
bigip_net_route_domain.rd:
  id = somerd
  name = somerd
  rd_id = 20
  vlans.# = 1
  vlans.1122961241 = rdvlan
RavinderReddyF5 commented 4 years ago

Comment by swightkin Friday Apr 19, 2019 at 23:59 GMT


Thanks @scshitole! I see that you're defining the RD as part of the file. In our situation we use RDs for logical DMZ vs. internal separation, so we'd have hundreds of virtual servers using RD 10 and hundreds using RD 20. If we have to add the net_route_domain for each virtual server created via TF, how would that behave?

My concern is that if we did a terraform destroy on an application, it would attempt to remove the route domain that's shared with all the other virtual servers.
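To illustrate the concern (a sketch under assumed names, not something proposed in this thread): one common way to avoid it is to manage the shared route domain in its own Terraform configuration/state and have application configurations reference it only through the address suffix, so an application-level destroy cannot remove it.

# shared/network.tf -- applied and destroyed independently of any application
resource "bigip_net_route_domain" "dmz" {
  name  = "dmz"
  rd_id = 20
  vlans = ["dmz_vlan"]   # hypothetical VLAN name
}

# app/main.tf -- separate state; references RD 20 only via the "%20" address suffix,
# so destroying this configuration never touches the shared route domain.
resource "bigip_ltm_node" "web01" {
  name    = "/Common/web01_10.1.2.3"
  address = "10.1.2.3%20"
}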

I hope you have a good weekend!

RavinderReddyF5 commented 4 years ago

Comment by skathiresan87 Saturday Apr 27, 2019 at 20:22 GMT


Hi, I am currently facing the same issue. The state file only shows the IP address, not the route domain ID.

If we are creating a 6th node in a pool, all 5 previously running nodes are recreated, which causes the pool to go down (health checks fail) and hence an outage.

As a workaround, I am using a lifecycle block to ignore "address":

lifecycle {
  ignore_changes = ["address"]
}

Not sure if there is any other way.
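For reference, a minimal sketch (hypothetical names) of that workaround in context: the lifecycle block sits inside the node resource so the stripped %xx suffix in state no longer forces the node to be destroyed and recreated.

resource "bigip_ltm_node" "web01" {
  name    = "/Common/web01_10.1.2.3"
  address = "10.1.2.3%20"   # declared with route domain 20

  # Ignore drift on "address" caused by the state storing only the bare IP
  lifecycle {
    ignore_changes = ["address"]
  }
}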

RavinderReddyF5 commented 4 years ago

Comment by RavinderReddyF5 Friday Jun 07, 2019 at 18:35 GMT


@swightkin I hope the behavior below is what you are expecting. Can you please review?

root@terraformclient:~/Go_Workspace/src/github.com/terraform-providers/terraform-provider-bigip# cat route_domain.tf
provider "bigip" {
  address = "xxx.xxx.xxx.xxx"
  username = "admin"
  password = "F5site02"
}

resource "bigip_ltm_virtual_server" "test_vs3" {
        name = "/Common/test_vs2"
        destination = "172.17.240.53%2"
        source ="0.0.0.0%2/0"
        port = 0
}
root@terraformclient:~/Go_Workspace/src/github.com/terraform-providers/terraform-provider-bigip# terraform init

Initializing the backend...

Initializing provider plugins...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
root@terraformclient:~/Go_Workspace/src/github.com/terraform-providers/terraform-provider-bigip# terraform apply
bigip_ltm_virtual_server.test_vs3: Refreshing state... [id=/Common/test_vs2]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # bigip_ltm_virtual_server.test_vs3 will be created
  + resource "bigip_ltm_virtual_server" "test_vs3" {
      + client_profiles              = (known after apply)
      + destination                  = "172.17.240.53%2"
      + fallback_persistence_profile = (known after apply)
      + id                           = (known after apply)
      + ip_protocol                  = (known after apply)
      + mask                         = "255.255.255.255"
      + name                         = "/Common/test_vs2"
      + persistence_profiles         = (known after apply)
      + port                         = 0
      + profiles                     = (known after apply)
      + server_profiles              = (known after apply)
      + snatpool                     = (known after apply)
      + source                       = "0.0.0.0%2/0"
      + source_address_translation   = (known after apply)
      + translate_address            = (known after apply)
      + translate_port               = (known after apply)
      + vlans_enabled                = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

bigip_ltm_virtual_server.test_vs3: Creating...
bigip_ltm_virtual_server.test_vs3: Creation complete after 1s [id=/Common/test_vs2]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

root@terraformclient:~/Go_Workspace/src/github.com/terraform-providers/terraform-provider-bigip# terraform show
# bigip_ltm_virtual_server.test_vs3:
resource "bigip_ltm_virtual_server" "test_vs3" {
    client_profiles            = []
    destination                = "172.17.240.53%2"
    id                         = "/Common/test_vs2"
    ip_protocol                = "any"
    mask                       = "255.255.255.255"
    name                       = "/Common/test_vs2"
    persistence_profiles       = []
    port                       = 0
    profiles                   = [
        "/Common/fastL4",
    ]
    server_profiles            = []
    source                     = "0.0.0.0%2/0"
    source_address_translation = "none"
    translate_address          = "enabled"
    translate_port             = "enabled"
    vlans_enabled              = false
}

root@terraformclient:~/Go_Workspace/src/github.com/terraform-providers/terraform-provider-bigip# terraform apply
bigip_ltm_virtual_server.test_vs3: Refreshing state... [id=/Common/test_vs2]

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
RavinderReddyF5 commented 4 years ago

Comment by RavinderReddyF5 Tuesday Jun 11, 2019 at 09:45 GMT


@swightkin

I hope the behavior below solves your issue. Please comment so that I can open a pull request with the change. This will address the comments in https://github.com/terraform-providers/terraform-provider-bigip/issues/92#issuecomment-485040777.

root@terraformclient:~/Go_Workspace/src/github.com/terraform-providers/terraform-provider-bigip# cat route_domain.tf
provider "bigip" {
  address = "xxx.xxx.xxx.xxx"
  username = "admin"
  password = "xxxxxxx"
}

resource "bigip_ltm_pool" "wild-pool" {
        name = "/Common/test-pool"
        load_balancing_mode = "round-robin"
        allow_snat = "yes"
        allow_nat = "yes"
}
resource "bigip_ltm_node" "ltm-node" {
  name = "/Common/webservice"
  address = "172.17.240.182%2"
  connection_limit = "0"
  dynamic_ratio = "1"
  monitor = "default"
  rate_limit = "disabled"
}

resource "bigip_ltm_pool_attachment" "attach-node" {
        pool = "${bigip_ltm_pool.wild-pool.name}"
        node ="${bigip_ltm_node.ltm-node.name}:0"
}
root@terraformclient:~/Go_Workspace/src/github.com/terraform-providers/terraform-provider-bigip# terraform init

Initializing the backend...

Initializing provider plugins...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
root@terraformclient:~/Go_Workspace/src/github.com/terraform-providers/terraform-provider-bigip# terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # bigip_ltm_node.ltm-node will be created
  + resource "bigip_ltm_node" "ltm-node" {
      + address          = "172.17.240.182%2"
      + connection_limit = 0
      + dynamic_ratio    = 1
      + id               = (known after apply)
      + monitor          = "default"
      + name             = "/Common/webservice"
      + rate_limit       = "disabled"
      + state            = (known after apply)
    }

  # bigip_ltm_pool.wild-pool will be created
  + resource "bigip_ltm_pool" "wild-pool" {
      + allow_nat           = "yes"
      + allow_snat          = "yes"
      + id                  = (known after apply)
      + load_balancing_mode = "round-robin"
      + monitors            = (known after apply)
      + name                = "/Common/test-pool"
      + reselect_tries      = (known after apply)
      + service_down_action = (known after apply)
      + slow_ramp_time      = (known after apply)
    }

  # bigip_ltm_pool_attachment.attach-node will be created
  + resource "bigip_ltm_pool_attachment" "attach-node" {
      + id   = (known after apply)
      + node = "/Common/webservice:0"
      + pool = "/Common/test-pool"
    }

Plan: 3 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

bigip_ltm_pool.wild-pool: Creating...
bigip_ltm_node.ltm-node: Creating...
bigip_ltm_node.ltm-node: Creation complete after 0s [id=/Common/webservice]
bigip_ltm_pool.wild-pool: Creation complete after 0s [id=/Common/test-pool]
bigip_ltm_pool_attachment.attach-node: Creating...
bigip_ltm_pool_attachment.attach-node: Creation complete after 0s [id=/Common/test-pool-/Common/webservice:0]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
root@terraformclient:~/Go_Workspace/src/github.com/terraform-providers/terraform-provider-bigip# terraform show
# bigip_ltm_node.ltm-node:
resource "bigip_ltm_node" "ltm-node" {
    address          = "172.17.240.182%2"
    connection_limit = 0
    dynamic_ratio    = 1
    id               = "/Common/webservice"
    monitor          = "default"
    name             = "/Common/webservice"
    rate_limit       = "disabled"
    state            = "unchecked"
}

# bigip_ltm_pool.wild-pool:
resource "bigip_ltm_pool" "wild-pool" {
    allow_nat           = "yes"
    allow_snat          = "yes"
    id                  = "/Common/test-pool"
    load_balancing_mode = "round-robin"
    monitors            = [
        "",
    ]
    name                = "/Common/test-pool"
    reselect_tries      = 0
    service_down_action = "none"
    slow_ramp_time      = 0
}

# bigip_ltm_pool_attachment.attach-node:
resource "bigip_ltm_pool_attachment" "attach-node" {
    id   = "/Common/test-pool-/Common/webservice:0"
    node = "/Common/webservice:0"
    pool = "/Common/test-pool"
}

root@terraformclient:~/Go_Workspace/src/github.com/terraform-providers/terraform-provider-bigip# terraform apply
bigip_ltm_pool.wild-pool: Refreshing state... [id=/Common/test-pool]
bigip_ltm_node.ltm-node: Refreshing state... [id=/Common/webservice]
bigip_ltm_pool_attachment.attach-node: Refreshing state... [id=/Common/test-pool-/Common/webservice:0]

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
RavinderReddyF5 commented 4 years ago

Comment by swightkin Tuesday Jun 11, 2019 at 09:53 GMT


I am away on vacation and unable to fully review just now. I'll look at this when I return. I appreciate your help!


RavinderReddyF5 commented 4 years ago

Comment by RavinderReddyF5 Friday Jun 21, 2019 at 17:55 GMT


@swightkin Did you get a chance to review whether this matches your expectation?

RavinderReddyF5 commented 4 years ago

Comment by swightkin Friday Jun 21, 2019 at 19:23 GMT


I'll review this by Monday. Sorry for the delay! Playing catch-up after vacation.

Thank you!


RavinderReddyF5 commented 4 years ago

Comment by swightkin Friday Jun 21, 2019 at 20:17 GMT


@RavinderReddyF5

That appears to take care of the issue I am having! Does it track the route domain for just the node IP, or does it also include the virtual server destination/source?

Thanks,