F5Networks / terraform-provider-bigip

Terraform resources that can configure F5 BIG-IP products
https://registry.terraform.io/providers/F5Networks/bigip/latest/docs
Mozilla Public License 2.0

When will this provider support TMOS 15.0/15.1 #301

Closed. init4 closed this issue 4 years ago.

init4 commented 4 years ago

As per the title, the provider doesn't appear to work properly on those versions at the moment.

papineni87 commented 4 years ago

It works for me. Is any specific resource failing for you on BIG-IP 15?

init4 commented 4 years ago

My comment is based on what is stated in README.md, but maybe I am having a different problem:


bash# cat app1.tf
provider "bigip" {
  # address  = "${aws_instance.bigip1.public_ip}"
  address  = "3.21.46.240"
  username = "admin"
  password = "xxxxx"
}

resource "bigip_ltm_monitor" "mon_app1-http" { name = "/Common/mon_app1-http" parent = "/Common/http" send = "GET /\r\n" timeout = "16" interval = "5" }

resource "bigip_ltm_pool" "p_app1-http" { name = "/Common/p_app1-http" load_balancing_mode = "round-robin" monitors = ["/Common/mon_app1-http"] allow_snat = "yes" allow_nat = "yes" depends_on = [bigip_ltm_monitor.mon_app1-http] }

resource "bigip_ltm_pool_attachment" "attach_node1" { pool = bigip_ltm_pool.p_app1-http.name node = "/Common/${aws_instance.slag.private_ip}:80" depends_on = [bigip_ltm_pool.p_app1-http] }

resource "bigip_ltm_pool_attachment" "attach_node2" { pool = bigip_ltm_pool.p_app1-http.name node = "/Common/${aws_instance.sludge.private_ip}:80" depends_on = [bigip_ltm_pool.p_app1-http] }

resource "bigip_ltm_pool_attachment" "attach_node3" { pool = bigip_ltm_pool.p_app1-http.name node = "/Common/${aws_instance.snarl.private_ip}:80" depends_on = [bigip_ltm_pool.p_app1-http] }

resource "bigip_ltm_pool_attachment" "attach_node4" { pool = bigip_ltm_pool.p_app1-http.name node = "/Common/${aws_instance.swoop.private_ip}:80" depends_on = [bigip_ltm_pool.p_app1-http] }

resource "bigip_ltm_virtual_server" "vs_app1-http" { pool = bigip_ltm_pool.p_app1-http.name name = "/Common/vs_app1-http" destination = "10.0.1.100" port = 80 source_address_translation = "automap" profiles = ["/Common/f5-tcp-progressive","/Common/http"] depends_on = [bigip_ltm_pool.p_app1-http] } bash# bash# terraform plan -out ./demo Refreshing Terraform state in-memory prior to plan... The refreshed state will be used to calculate this plan, but will not be persisted to local or remote state storage.

bigip_ltm_monitor.mon_app1-http: Refreshing state... [id=/Common/mon_app1-http]
bigip_ltm_pool.p_app1-http: Refreshing state... [id=/Common/p_app1-http]
bigip_ltm_virtual_server.vs_app1-http: Refreshing state... [id=/Common/vs_app1-http]
data.aws_availability_zones.available: Refreshing state...
aws_vpc.dinobots: Refreshing state... [id=vpc-0c68582461cea010e]
aws_subnet.external: Refreshing state... [id=subnet-0738b9c8089d64883]
aws_subnet.management: Refreshing state... [id=subnet-09840a631ab434bdd]
aws_internet_gateway.default: Refreshing state... [id=igw-0d8beeb4ec8d72288]
aws_subnet.internal: Refreshing state... [id=subnet-0570848f3a9f72291]
aws_security_group.allow_all: Refreshing state... [id=sg-0f157689218a5cdea]
aws_route.internet_access: Refreshing state... [id=r-rtb-079fa873e6b4abdec1080289494]
aws_route_table_association.route_table_internal: Refreshing state... [id=rtbassoc-079dcdafdb3fa6da2]
aws_instance.sludge: Refreshing state... [id=i-0a744197aa907244d]
aws_instance.snarl: Refreshing state... [id=i-01f8d87172cf19b40]
aws_instance.slag: Refreshing state... [id=i-0196a1058116dfd04]
aws_instance.swoop: Refreshing state... [id=i-0d0e63195bb6c491a]
aws_route_table_association.route_table_external: Refreshing state... [id=rtbassoc-0ec0b372cb1c307db]
aws_instance.bigip1: Refreshing state... [id=i-0b2bdd96a8db5e033]
aws_network_interface.external: Refreshing state... [id=eni-07c78568ef9a32761]
aws_network_interface.internal: Refreshing state... [id=eni-096479abb3e0ca93b]
aws_eip.eip_vip: Refreshing state... [id=eipalloc-0d582ed36a426cec4]


An execution plan has been generated and is shown below. Resource actions are indicated with the following symbols:

Terraform will perform the following actions:

bigip_ltm_monitor.mon_app1-http will be created

Plan: 7 to add, 0 to change, 0 to destroy.


This plan was saved to: ./demo

To perform exactly these actions, run the following command to apply: terraform apply "./demo"

bash#
bash# terraform apply "./demo"
bigip_ltm_monitor.mon_app1-http: Creating...
bigip_ltm_monitor.mon_app1-http: Creation complete after 6s [id=/Common/mon_app1-http]
bigip_ltm_pool.p_app1-http: Creating...
bigip_ltm_pool.p_app1-http: Creation complete after 1s [id=/Common/p_app1-http]
bigip_ltm_pool_attachment.attach_node2: Creating...
bigip_ltm_pool_attachment.attach_node3: Creating...
bigip_ltm_pool_attachment.attach_node4: Creating...
bigip_ltm_pool_attachment.attach_node1: Creating...
bigip_ltm_virtual_server.vs_app1-http: Creating...
bigip_ltm_virtual_server.vs_app1-http: Creation complete after 4s [id=/Common/vs_app1-http]

Error: Provider produced inconsistent result after apply

When applying changes to bigip_ltm_pool_attachment.attach_node2, provider "registry.terraform.io/-/bigip" produced an unexpected new value for was present, but now absent.

This is a bug in the provider, which should be reported in the provider's own issue tracker.

Error: Provider produced inconsistent result after apply

When applying changes to bigip_ltm_pool_attachment.attach_node1, provider "registry.terraform.io/-/bigip" produced an unexpected new value for was present, but now absent.

This is a bug in the provider, which should be reported in the provider's own issue tracker.

Error: Provider produced inconsistent result after apply

When applying changes to bigip_ltm_pool_attachment.attach_node4, provider "registry.terraform.io/-/bigip" produced an unexpected new value for was present, but now absent.

This is a bug in the provider, which should be reported in the provider's own issue tracker.

Error: Provider produced inconsistent result after apply

When applying changes to bigip_ltm_pool_attachment.attach_node3, provider "registry.terraform.io/-/bigip" produced an unexpected new value for was present, but now absent.

This is a bug in the provider, which should be reported in the provider's own issue tracker.

bash#

papineni87 commented 4 years ago

With terraform-provider version 1.2.1, we have to create node resources first and then reference them in the pool attachment (see https://www.terraform.io/docs/providers/bigip/r/bigip_ltm_pool_attachment.html).

I tried the config below and it works for me.

BIG-IP version: 15.0, terraform-provider version: 1.2.1

terraform-provider-bigip papineni$ cat test.tf
provider "bigip" {
  address  = "x.x.x.x"
  username = "xxxx"
  password = "xxxx"
}

resource "bigip_ltm_monitor" "mon_app1-http" { name = "/Common/mon_app1-http" parent = "/Common/http" send = "GET /\r\n" timeout = "16" interval = "5" }

resource "bigip_ltm_pool" "p_app1-http" { name = "/Common/p_app1-http" load_balancing_mode = "round-robin" monitors = ["/Common/mon_app1-http"] allow_snat = "yes" allow_nat = "yes" depends_on = [bigip_ltm_monitor.mon_app1-http] }

resource "bigip_ltm_node" "node1" { name = "/Common/terraform_node1" address = "192.168.30.1" }

resource "bigip_ltm_node" "node2" { name = "/Common/terraform_node2" address = "192.168.40.1" }

resource "bigip_ltm_pool_attachment" "attach_node1" { pool = bigip_ltm_pool.p_app1-http.name node = "${bigip_ltm_node.node1.name}:80" depends_on = [bigip_ltm_pool.p_app1-http] }

resource "bigip_ltm_pool_attachment" "attach_node2" { pool = bigip_ltm_pool.p_app1-http.name node = "${bigip_ltm_node.node2.name}:80" depends_on = [bigip_ltm_pool.p_app1-http] }

resource "bigip_ltm_virtual_server" "vs_app1-http" { pool = bigip_ltm_pool.p_app1-http.name name = "/Common/vs_app1-http" destination = "10.0.1.100" port = 80 source_address_translation = "automap" depends_on = [bigip_ltm_pool.p_app1-http] }

terraform-provider-bigip papineni$ terraform apply
2020/06/22 22:51:44 [WARN] Log levels other than TRACE are currently unreliable, and are supported only for backward compatibility. Use TF_LOG=TRACE to see Terraform's internal logs.

An execution plan has been generated and is shown below. Resource actions are indicated with the following symbols:

Terraform will perform the following actions:

bigip_ltm_monitor.mon_app1-http will be created

Plan: 7 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

Enter a value: yes

bigip_ltm_node.node1: Creating...
bigip_ltm_monitor.mon_app1-http: Creating...
bigip_ltm_node.node2: Creating...
bigip_ltm_node.node2: Creation complete after 2s [id=/Common/terraform_node2]
bigip_ltm_node.node1: Creation complete after 2s [id=/Common/terraform_node1]
bigip_ltm_monitor.mon_app1-http: Still creating... [10s elapsed]
bigip_ltm_monitor.mon_app1-http: Creation complete after 11s [id=/Common/mon_app1-http]
bigip_ltm_pool.p_app1-http: Creating...
bigip_ltm_pool.p_app1-http: Creation complete after 2s [id=/Common/p_app1-http]
bigip_ltm_pool_attachment.attach_node2: Creating...
bigip_ltm_pool_attachment.attach_node1: Creating...
bigip_ltm_virtual_server.vs_app1-http: Creating...
bigip_ltm_pool_attachment.attach_node1: Creation complete after 1s [id=/Common/p_app1-http-/Common/terraform_node1:80]
bigip_ltm_pool_attachment.attach_node2: Creation complete after 1s [id=/Common/p_app1-http-/Common/terraform_node2:80]
bigip_ltm_virtual_server.vs_app1-http: Creation complete after 5s [id=/Common/vs_app1-http]

Apply complete! Resources: 7 added, 0 changed, 0 destroyed.

init4 commented 4 years ago

Agreed, the problem was that I was trying to attach a node without defining it first. Requiring a node object confused me, as it's not required when using the iControl API: there I can define a pool object and add the pool members directly by IP:port.
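For reference, a minimal sketch of how the original AWS-based config could be adapted to the pattern papineni87 describes. This is only an illustration: the node resource name node_slag is hypothetical, and it assumes the same aws_instance.slag and bigip_ltm_pool.p_app1-http resources from the first config; the other attachments would follow the same shape.

# Hypothetical adaptation: create the node from the instance IP first,
# then attach it to the pool by the node resource's name.
resource "bigip_ltm_node" "node_slag" {
  name    = "/Common/node_slag"          # hypothetical object name
  address = aws_instance.slag.private_ip # node address taken from the instance
}

resource "bigip_ltm_pool_attachment" "attach_node1" {
  pool = bigip_ltm_pool.p_app1-http.name
  # Reference the bigip_ltm_node resource plus a port,
  # instead of a raw "/Common/<ip>:80" string.
  node       = "${bigip_ltm_node.node_slag.name}:80"
  depends_on = [bigip_ltm_pool.p_app1-http]
}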