wninobla closed this issue 8 years ago.
I believe this was just fixed in #17, but it won't be available until the end of the month. Can you use an explicit IP address for the listener instead for now?
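Something along these lines should work for now (a sketch only, using the resource names that appear later in this thread; the listener address is just an example):

resource "ddcloud_virtual_listener" "www_virtual_listener" {
  name          = "www_virtual_listener"
  protocol      = "HTTP"
  ipv4          = "192.168.18.10" # Explicit listener address instead of a computed one.
  pool          = "${ddcloud_vip_pool.www_pool.id}"
  networkdomain = "${ddcloud_networkdomain.ciscodemo-domain.id}"
}

resource "ddcloud_firewall_rule" "vip-www-in" {
  name                = "www_virtual_listener.WWW.Inbound"
  placement           = "first"
  action              = "accept"
  enabled             = true
  ip_version          = "ipv4"
  protocol            = "tcp"
  destination_address = "192.168.18.10" # Same literal address, so no interpolation of the listener is needed.
  destination_port    = "80"
  networkdomain       = "${ddcloud_networkdomain.ciscodemo-domain.id}"
}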
I'll look at #17 out of my own curiosity. But yeah, for now I can do it explicitly and use an internal range since I'm just testing. Once the code is committed for use, I'll change my stuff. Thanks!
I'll push out a hotfix later today.
Ok, @wninobla, can you download the latest hotfix and let me know if that fixes your problem?
The property is called "ipv4".
Ok let me give it a go after I eat something. 😊
OK, I put the hotfix into the terraform folder and re-ran the code I'm using, and I still get the same error. I tried it with both public_ipv4 and ipv4 as the attribute:
[root@192-168-104-251 cisco_demo]# terraform plan -out deploydemo-lb.plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.
Error running plan: 1 error(s) occurred:
* Resource 'ddcloud_vip_pool.www_pool' does not have attribute 'public_ipv4' for variable 'ddcloud_vip_pool.www_pool.public_ipv4'
[root@192-168-104-251 cisco_demo]#
[root@192-168-104-251 cisco_demo]# terraform plan -out deploydemo-lb.plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.
Error running plan: 1 error(s) occurred:
* Resource 'ddcloud_vip_pool.www_pool' does not have attribute 'ipv4' for variable 'ddcloud_vip_pool.www_pool.ipv4'
[root@192-168-104-251 cisco_demo]#
Maybe it's called something else? I am not sure what the property reference is, but it's not working for me. The line I edited was this one:
destination_address = "${ddcloud_vip_pool.www_pool.public_ipv4}"
AND
destination_address = "${ddcloud_vip_pool.www_pool.ipv4}"
Isn't the IPv4 address on the listener, not the pool?
Yep, that'd be it. Pools don't have an address; the virtual listener that uses the pool has the address.
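So the firewall rule should reference the listener rather than the pool, i.e. something like:

destination_address = "${ddcloud_virtual_listener.www_virtual_listener.ipv4}"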
Never mind - I was referencing the wrong resource type - it's ddcloud_virtual_listener and not ddcloud_vip_pool. That's what happens when I don't focus and eat first...
Ah - answered at the same time. Anyhow, it appears to work now. I am going to run it and see what happens. If it completes I will close this one out. Thanks!
Here's a working configuration I've been testing with:
provider "ddcloud" {
region = "AU"
}
resource "ddcloud_networkdomain" "test_net_domain" {
name = "af_terraform_domain"
description = "Adam's Terraform test domain (do not delete)."
plan = "ADVANCED"
datacenter = "AU9"
}
resource "ddcloud_vlan" "test_vlan" {
name = "af_terraform-vlan"
description = "Adam's Terraform test VLAN (do not delete)."
networkdomain = "${ddcloud_networkdomain.test_net_domain.id}"
ipv4_base_address = "192.168.17.0"
ipv4_prefix_size = 24
}
resource "ddcloud_vip_node" "test_node" {
name = "af_terraform_node"
description = "Adam's Terraform test VIP node (do not delete)."
ipv4_address = "192.168.17.10"
status = "ENABLED"
networkdomain = "${ddcloud_networkdomain.test_net_domain.id}"
depends_on = ["ddcloud_vlan.test_vlan"]
}
resource "ddcloud_vip_pool" "test_pool" {
name = "af_terraform_pool"
description = "Adam's Terraform test VIP pool (do not delete)."
load_balance_method = "ROUND_ROBIN"
service_down_action = "NONE",
slow_ramp_time = 5,
networkdomain = "${ddcloud_networkdomain.test_net_domain.id}"
depends_on = ["ddcloud_vlan.test_vlan"]
}
resource "ddcloud_vip_pool_member" "test_pool_test_node" {
pool = "${ddcloud_vip_pool.test_pool.id}"
node = "${ddcloud_vip_node.test_node.id}"
status = "ENABLED"
}
resource "ddcloud_virtual_listener" "test_virtual_listener" {
name = "af_terraform_listener"
protocol = "HTTP"
optimization_profiles = ["TCP"]
pool = "${ddcloud_vip_pool.test_pool.id}"
ipv4 = "192.168.18.10"
networkdomain = "${ddcloud_networkdomain.test_net_domain.id}"
depends_on = ["ddcloud_vip_pool_member.test_pool_test_node"]
}
Oops! :)
For completeness' sake, here is where I'm currently at in the .tf file:
/*
* This configuration will create the following demo infrastructure for the Cisco demos:
*
* - A default network domain assigned as Advanced
* - (3) VLANs called DMZ Network, TRUST Network, and Utility Network
* - (2) Web servers in the DMZ Network, (2) App and (2) Database servers in the TRUST Network, and (1) Utility
* Server in the Utility Network
* - A NAT rule assigned to the Utility server, opening 3389 from the Internet inbound to the server
* - Load balancing configuration for the (2) Web servers in a Round Robin pool on port 80 (Pending)
*
*/
provider "ddcloud" {
# User name and password can also be specified via DD_COMPUTE_USER and DD_COMPUTE_PASSWORD environment variables.
"username" = "clouddemo_api"
"password" = "(Hidden)" # Watch out for escaping if your password contains characters such as "$".
"region" = "NA" # The DD compute region code (e.g. "AU", "NA", "EU")
}
resource "ddcloud_networkdomain" "ciscodemo-domain" {
name = "Cisco Demo via Terraform"
description = "This is an automated Terraform demo network domain."
datacenter = "NA12" # The ID of the data centre in which to create your network domain.
}
resource "ddcloud_vlan" "dmz-vlan" {
name = "Cisco DMZ Network"
description = "This is an automated Terraform VLAN designated for DMZ hosts."
networkdomain = "${ddcloud_networkdomain.ciscodemo-domain.id}"
# VLAN's default network: 192.168.1.1 -> 192.168.1.254 (netmask = 255.255.255.0)
ipv4_base_address = "192.168.1.0"
ipv4_prefix_size = 24
depends_on = ["ddcloud_networkdomain.ciscodemo-domain"]
}
resource "ddcloud_vlan" "trust-vlan" {
name = "Cisco TRUST Network"
description = "This is an automated Terraform VLAN designated for TRUST hosts."
networkdomain = "${ddcloud_networkdomain.ciscodemo-domain.id}"
# VLAN's default network: 192.168.2.1 -> 192.168.2.254 (netmask = 255.255.255.0)
ipv4_base_address = "192.168.2.0"
ipv4_prefix_size = 24
depends_on = ["ddcloud_networkdomain.ciscodemo-domain"]
}
resource "ddcloud_vlan" "utility-vlan" {
name = "Cisco Utility Network"
description = "This is an automated Terraform VLAN designated for utility hosts."
networkdomain = "${ddcloud_networkdomain.ciscodemo-domain.id}"
# VLAN's default network: 192.168.3.1 -> 192.168.3.254 (netmask = 255.255.255.0)
ipv4_base_address = "192.168.3.0"
ipv4_prefix_size = 24
depends_on = ["ddcloud_networkdomain.ciscodemo-domain"]
}
resource "ddcloud_server" "web01-server" {
name = "WEB01"
admin_password = "password"
memory_gb = 4
cpu_count = 2
networkdomain = "${ddcloud_networkdomain.ciscodemo-domain.id}"
primary_adapter_ipv4 = "192.168.1.11"
dns_primary = "8.8.8.8"
dns_secondary = "8.8.4.4"
os_image_name = "CentOS 7 64-bit 2 CPU"
# The image disk (part of the original server image). If size_gb is larger than the image disk's original
# size, it will be expanded (specifying a smaller size is not supported). You don't have to specify this
# but, if you don't, then Terraform will keep treating the ddcloud_server resource as modified.
disk {
scsi_unit_id = 0
size_gb = 10
}
# Added tagging to label the image by its function as it's deployed. The tag key is REQUIRED to be created
# BEFORE executing this script
tag {
name = "Application"
value = "Web"
}
auto_start = "TRUE"
depends_on = ["ddcloud_vlan.dmz-vlan"]
}
resource "ddcloud_server" "web02-server" {
name = "WEB02"
admin_password = "password"
memory_gb = 4
cpu_count = 2
networkdomain = "${ddcloud_networkdomain.ciscodemo-domain.id}"
primary_adapter_ipv4 = "192.168.1.12"
dns_primary = "8.8.8.8"
dns_secondary = "8.8.4.4"
os_image_name = "CentOS 7 64-bit 2 CPU"
# The image disk (part of the original server image). If size_gb is larger than the image disk's original
# size, it will be expanded (specifying a smaller size is not supported). You don't have to specify this
# but, if you don't, then Terraform will keep treating the ddcloud_server resource as modified.
disk {
scsi_unit_id = 0
size_gb = 10
}
# Added tagging to label the image by its function as it's deployed. The tag key is REQUIRED to be created
# BEFORE executing this script
tag {
name = "Application"
value = "Web"
}
auto_start = "TRUE"
depends_on = ["ddcloud_vlan.dmz-vlan"]
}
resource "ddcloud_server" "app01-server" {
name = "APP01"
admin_password = "password"
memory_gb = 4
cpu_count = 2
networkdomain = "${ddcloud_networkdomain.ciscodemo-domain.id}"
primary_adapter_ipv4 = "192.168.2.11"
dns_primary = "8.8.8.8"
dns_secondary = "8.8.4.4"
os_image_name = "CentOS 7 64-bit 2 CPU"
# The image disk (part of the original server image). If size_gb is larger than the image disk's original
# size, it will be expanded (specifying a smaller size is not supported). You don't have to specify this
# but, if you don't, then Terraform will keep treating the ddcloud_server resource as modified.
disk {
scsi_unit_id = 0
size_gb = 10
}
# Added tagging to label the image by its function as it's deployed. The tag key is REQUIRED to be created
# BEFORE executing this script
tag {
name = "Application"
value = "App"
}
auto_start = "TRUE"
depends_on = ["ddcloud_vlan.trust-vlan"]
}
resource "ddcloud_server" "app02-server" {
name = "APP02"
admin_password = "password"
memory_gb = 4
cpu_count = 2
networkdomain = "${ddcloud_networkdomain.ciscodemo-domain.id}"
primary_adapter_ipv4 = "192.168.2.12"
dns_primary = "8.8.8.8"
dns_secondary = "8.8.4.4"
os_image_name = "CentOS 7 64-bit 2 CPU"
# The image disk (part of the original server image). If size_gb is larger than the image disk's original
# size, it will be expanded (specifying a smaller size is not supported). You don't have to specify this
# but, if you don't, then Terraform will keep treating the ddcloud_server resource as modified.
disk {
scsi_unit_id = 0
size_gb = 10
}
# Added tagging to label the image by its function as it's deployed. The tag key is REQUIRED to be created
# BEFORE executing this script
tag {
name = "Application"
value = "App"
}
auto_start = "TRUE"
depends_on = ["ddcloud_vlan.trust-vlan"]
}
resource "ddcloud_server" "db01-server" {
name = "DB01"
admin_password = "password"
memory_gb = 4
cpu_count = 2
networkdomain = "${ddcloud_networkdomain.ciscodemo-domain.id}"
primary_adapter_ipv4 = "192.168.2.13"
dns_primary = "8.8.8.8"
dns_secondary = "8.8.4.4"
os_image_name = "CentOS 7 64-bit 2 CPU"
# The image disk (part of the original server image). If size_gb is larger than the image disk's original
# size, it will be expanded (specifying a smaller size is not supported). You don't have to specify this
# but, if you don't, then Terraform will keep treating the ddcloud_server resource as modified.
disk {
scsi_unit_id = 0
size_gb = 10
}
# Added tagging to label the image by its function as it's deployed. The tag key is REQUIRED to be created
# BEFORE executing this script
tag {
name = "Application"
value = "Database"
}
auto_start = "TRUE"
depends_on = ["ddcloud_vlan.trust-vlan"]
}
resource "ddcloud_server" "db02-server" {
name = "DB02"
admin_password = "password"
memory_gb = 4
cpu_count = 2
networkdomain = "${ddcloud_networkdomain.ciscodemo-domain.id}"
primary_adapter_ipv4 = "192.168.2.14"
dns_primary = "8.8.8.8"
dns_secondary = "8.8.4.4"
os_image_name = "CentOS 7 64-bit 2 CPU"
# The image disk (part of the original server image). If size_gb is larger than the image disk's original
# size, it will be expanded (specifying a smaller size is not supported). You don't have to specify this
# but, if you don't, then Terraform will keep treating the ddcloud_server resource as modified.
disk {
scsi_unit_id = 0
size_gb = 10
}
# Added tagging to label the image by its function as it's deployed. The tag key is REQUIRED to be created
# BEFORE executing this script
tag {
name = "Application"
value = "Database"
}
auto_start = "TRUE"
depends_on = ["ddcloud_vlan.trust-vlan"]
}
resource "ddcloud_server" "util01-server" {
name = "UTIL01"
admin_password = "password"
memory_gb = 4
cpu_count = 2
networkdomain = "${ddcloud_networkdomain.ciscodemo-domain.id}"
primary_adapter_ipv4 = "192.168.3.11"
dns_primary = "8.8.8.8"
dns_secondary = "8.8.4.4"
os_image_name = "Win2012 R2 Std 2 CPU"
# The image disk (part of the original server image). If size_gb is larger than the image disk's original
# size, it will be expanded (specifying a smaller size is not supported). You don't have to specify this
# but, if you don't, then Terraform will keep treating the ddcloud_server resource as modified.
disk {
scsi_unit_id = 0
size_gb = 10
}
# Added tagging to label the image by its function as it's deployed. The tag key is REQUIRED to be created
# BEFORE executing this script
tag {
name = "Application"
value = "Utility"
}
auto_start = "TRUE"
depends_on = ["ddcloud_vlan.utility-vlan"]
}
resource "ddcloud_nat" "util01-server-nat" {
networkdomain = "${ddcloud_networkdomain.ciscodemo-domain.id}"
private_ipv4 = "${ddcloud_server.util01-server.primary_adapter_ipv4}"
# public_ipv4 is computed at deploy time.
depends_on = ["ddcloud_vlan.utility-vlan"]
}
resource "ddcloud_vip_node" "web01-www-site" {
name = "WEB01_NODE"
description = "Web Server assigned to act as a load balanced node"
networkdomain = "${ddcloud_networkdomain.ciscodemo-domain.id}"
ipv4_address = "192.168.1.11"
status = "ENABLED"
depends_on = ["ddcloud_server.web01-server"]
}
resource "ddcloud_vip_node" "web02-www-site" {
name = "WEB02_NODE"
description = "Web Server assigned to act as a load balanced node"
networkdomain = "${ddcloud_networkdomain.ciscodemo-domain.id}"
ipv4_address = "192.168.1.12"
status = "ENABLED"
depends_on = ["ddcloud_server.web02-server"]
}
resource "ddcloud_vip_pool" "www_pool" {
name = "www_pool"
description = "Test pool for providing WWW services"
load_balance_method = "ROUND_ROBIN"
service_down_action = "NONE",
slow_ramp_time = 5,
networkdomain = "${ddcloud_networkdomain.ciscodemo-domain.id}"
depends_on = ["ddcloud_vip_node.web01-www-site"]
}
resource "ddcloud_vip_pool_member" "www_pool_web01_node" {
pool = "${ddcloud_vip_pool.www_pool.id}"
node = "${ddcloud_vip_node.web01-www-site.id}"
port = 80
status = "ENABLED"
depends_on = ["ddcloud_vip_pool.www_pool"]
}
resource "ddcloud_vip_pool_member" "www_pool_web02_node" {
pool = "${ddcloud_vip_pool.www_pool.id}"
node = "${ddcloud_vip_node.web02-www-site.id}"
port = 80
status = "ENABLED"
depends_on = ["ddcloud_vip_pool.www_pool"]
}
resource "ddcloud_virtual_listener" "www_virtual_listener" {
name = "www_virtual_listener"
protocol = "HTTP"
optimization_profiles = ["TCP"]
pool = "${ddcloud_vip_pool.www_pool.id}"
networkdomain = "${ddcloud_networkdomain.ciscodemo-domain.id}"
depends_on = ["ddcloud_vip_pool.www_pool"]
}
resource "ddcloud_firewall_rule" "util01-rdp-in" {
name = "util01_server.RDP.Inbound"
placement = "first"
action = "accept" # Valid values are "accept" or "drop."
enabled = true
ip_version = "ipv4"
protocol = "tcp"
destination_address = "${ddcloud_nat.util01-server-nat.public_ipv4}"
destination_port = "3389"
networkdomain = "${ddcloud_networkdomain.ciscodemo-domain.id}"
}
resource "ddcloud_firewall_rule" "vip-www-in" {
name = "www_virtual_listener.WWW.Inbound"
placement = "first"
action = "accept" # Valid values are "accept" or "drop."
enabled = true
ip_version = "ipv4"
protocol = "tcp"
destination_address = "${ddcloud_virtual_listener.www_virtual_listener.ipv4}"
destination_port = "80"
networkdomain = "${ddcloud_networkdomain.ciscodemo-domain.id}"
depends_on = ["ddcloud_virtual_listener.www_virtual_listener"]
}
...and yes, I know my password is embedded in there. Call me lazy, but I wanted to make this work first. It'll change again once I'm done; I still need to work out how to pass it in as a variable, either on the command line or by setting it as a variable (I thought I saw an example of that in here somewhere). In any case, I've just ripped it out here. =)
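(For reference, a minimal sketch of the variable approach; the variable name here is just an illustration, and the provider comment above also mentions the DD_COMPUTE_USER / DD_COMPUTE_PASSWORD environment variables as an alternative.)

# Declare the credential as an input variable instead of hard-coding it.
variable "dd_password" {
  description = "CloudControl API password (supply via -var or the TF_VAR_dd_password environment variable)."
}

provider "ddcloud" {
  username = "clouddemo_api"
  password = "${var.dd_password}"
  region   = "NA"
}

Then run, for example: terraform plan -var 'dd_password=...'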
OK, last update for the evening so I can rest my eyes. The code looks like it works. I had a minor issue with inserting the second node, but I think it was just a fat-finger of the pool reference. The one last thing I'll report is that on 2 different occasions, when spinning up a Windows host, the plan check picks up what look to be changes to the VM, which causes it to be deleted and then recreated.
<snip>
-/+ ddcloud_server.util01-server
admin_password: "<sensitive>" => "<sensitive>" (attribute changed)
auto_start: "" => "true"
cpu_count: "2" => "2"
customer_image_id: "" => "<computed>"
customer_image_name: "" => "<computed>"
disk.#: "1" => "1"
disk.3184022418.disk_id: "9d572849-9fa9-4ae3-822a-57c361db78e1" => "<computed>"
disk.3184022418.scsi_unit_id: "0" => "0"
disk.3184022418.size_gb: "50" => "50"
disk.3184022418.speed: "STANDARD" => "STANDARD"
dns_primary: "" => "8.8.8.8" (forces new resource)
dns_secondary: "" => "8.8.4.4" (forces new resource)
memory_gb: "4" => "4"
name: "UTIL01" => "UTIL01"
networkdomain: "0e94d76e-9f21-4731-ba83-6473cf66c613" => "0e94d76e-9f21-4731-ba83-6473cf66c613"
os_image_id: "c509d9db-81ce-466b-8ab8-277cee5c964c" => "<computed>"
os_image_name: "" => "Win2012 R2 Std 2 CPU" (forces new resource)
primary_adapter_ipv4: "192.168.3.11" => "192.168.3.11"
primary_adapter_ipv6: "2607:f480:211:1287:1dcd:32d3:eeb3:7175" => "<computed>"
primary_adapter_vlan: "de4f6344-b0d5-4469-82ec-8dd53479eb43" => "<computed>"
public_ipv4: "" => "<computed>"
tag.#: "1" => "1"
tag.3818412570.name: "Application" => "Application"
tag.3818412570.value: "Utility" => "Utility"
</snip>
I pulled the resource attributes out of Terraform state and they were not the same as for the Linux hosts. Not sure if something was just not updated fast enough, but the server looked live to me.
[root@192-168-104-251 cisco_demo]# terraform state show ddcloud_server.util01-server
id = 69199b03-9ddf-433b-9d6a-ee50f31e832a
public_ipv4 =
tag.# = 1
tag.3818412570.name = Application
tag.3818412570.value = Utility
[root@192-168-104-251 cisco_demo]# terraform state show ddcloud_server.db01-server
id = 2532ee1b-61c0-4e9f-9f0e-647e5d6822ba
admin_password = password
auto_start = true
cpu_count = 2
disk.# = 1
disk.219226128.disk_id =
disk.219226128.scsi_unit_id = 0
disk.219226128.size_gb = 10
disk.219226128.speed = STANDARD
dns_primary = 8.8.8.8
dns_secondary = 8.8.4.4
memory_gb = 4
name = DB01
networkdomain = 0e94d76e-9f21-4731-ba83-6473cf66c613
os_image_id = 03176f26-353e-45b7-90fc-7d68dca5123a
os_image_name = CentOS 7 64-bit 2 CPU
primary_adapter_ipv4 = 192.168.2.13
primary_adapter_ipv6 = 2607:f480:211:1263:5183:68c4:6626:d820
primary_adapter_vlan = a520ccae-257c-4512-bede-e60344cc494a
public_ipv4 =
tag.# = 1
tag.3883352642.name = Application
tag.3883352642.value = Database
[root@192-168-104-251 cisco_demo]# terraform state show ddcloud_server.util01-server
id = 69199b03-9ddf-433b-9d6a-ee50f31e832a
public_ipv4 =
tag.# = 1
tag.3818412570.name = Application
tag.3818412570.value = Utility
Anyway, I can file that under a different case if you like. Just let me know. Thanks!
Interesting - it looks like the DNS settings for the server have changed (either in config or state).
Would you mind creating a separate issue for that one? It'll just make it easier if we have one item per issue (or I might lose track of things).
I'm closing this issue then, but feel free to reopen it if I've misunderstood.
Yeah - it just finished, and the error on the LB add was this:
Error applying plan:
1 error(s) occurred:
* ddcloud_vip_pool_member.www_pool_web02_node: Request to add VIP node '5f6f8f94-639c-4ecd-a521-08e065268ad6' as a member of pool '42ee90c0-b6e7-4ff6-8a5d-2270b79978b3' failed with status code 400 (UNEXPECTED_ERROR): Unexpected error.
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Re-running it afterwards allowed it to join just fine. Makes me think it fell over when trying to add both items at the same time. I may change the dependency on it, though I am unsure how to tell it to check that another node is already in the pool before adding the next one. Rather weird...
By the way, @wninobla, did you know you can use Terraform's "count" property on resources to make multiple instances of the same resource?
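A rough sketch of what that could look like for the two web servers (untested, just to illustrate count and count.index; the interpolation math assumes a Terraform version that supports it):

resource "ddcloud_server" "web-server" {
  count                = 2
  name                 = "WEB0${count.index + 1}"          # WEB01, WEB02
  admin_password       = "password"
  memory_gb            = 4
  cpu_count            = 2
  networkdomain        = "${ddcloud_networkdomain.ciscodemo-domain.id}"
  primary_adapter_ipv4 = "192.168.1.${count.index + 11}"   # .11, .12
  dns_primary          = "8.8.8.8"
  dns_secondary        = "8.8.4.4"
  os_image_name        = "CentOS 7 64-bit 2 CPU"

  disk {
    scsi_unit_id = 0
    size_gb      = 10
  }

  tag {
    name  = "Application"
    value = "Web"
  }

  auto_start = "TRUE"
  depends_on = ["ddcloud_vlan.dmz-vlan"]
}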
Ah, yes, sorry I have seen that one before.
It's because you can't have 2 operations modifying pool memberships in a network domain at the same time (a CloudControl limitation). The problem is that CloudControl returns UNEXPECTED_ERROR when they really mean RESOURCE_BUSY. We will be modifying the ddcloud provider to only process 1 membership (per network domain) at a time, sometime in the next month or so, but for now, as a workaround, you can simply use the depends_on attribute on the various ddcloud_vip_pool_member instances to serialise their execution (e.g. ddcloud_vip_pool_member.member3 depends on ddcloud_vip_pool_member.member2, which depends on ddcloud_vip_pool_member.member1).
(and you don't need to care if another node is already in the pool)
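With the pool members from the config above, that would look roughly like this (only the depends_on line on the second member changes):

resource "ddcloud_vip_pool_member" "www_pool_web01_node" {
  pool       = "${ddcloud_vip_pool.www_pool.id}"
  node       = "${ddcloud_vip_node.web01-www-site.id}"
  port       = 80
  status     = "ENABLED"
  depends_on = ["ddcloud_vip_pool.www_pool"]
}

resource "ddcloud_vip_pool_member" "www_pool_web02_node" {
  pool       = "${ddcloud_vip_pool.www_pool.id}"
  node       = "${ddcloud_vip_node.web02-www-site.id}"
  port       = 80
  status     = "ENABLED"

  # Depend on the first member (not just the pool) so the two
  # membership operations never run concurrently.
  depends_on = ["ddcloud_vip_pool_member.www_pool_web01_node"]
}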
Cool on the counts for Terraform. I am still learning, as I just picked it up today with no real reading of the documentation. Not bad, eh?
The VIP pool adds make sense. I can make them serial as you say; I didn't think about that, even though that's what I am doing for the other ones... duh!
I had to figure it out the hard way; the docs do not tell you that :)
I am writing code similar to the NAT section, in that I want to grab the assigned IP address (public in this case, though it can be private if I specify it as an option) and assign it to a firewall rule. The bottom section of my code is as follows:
It errors out with this in my shell session:
The documentation doesn't specify what the attribute is, so is there one, or should I call something else?