Hello OCI - terraform team
We are using the code below to provision an NLB. It is a strange situation, because the error appears and then, after one or two retries, it works.
For some time, while creating resources associated with network load balancers (for example backend sets or listeners), we have sometimes been facing the following error:

Error Message: Invalid State Transition of NLB lifeCycle state from Updating to Updating

This error happens randomly, and after a retry it usually goes away. So, even though it is not critical for our current development, it is preventing us from building an automated solution with a TeamCity pipeline, or at least making it more complicated. Here is an example of what such a message looks like:

Error: 409-Conflict
│ Provider version: 4.44.0, released on 2021-09-15. This provider is 2 update(s) behind to current.
│ Service: Network Load Balancer Backend Set
│ Error Message: Invalid State Transition of NLB lifeCycle state from Updating to Updating
│ OPC request ID: e8a067439a9b8ec1e3c1f037c953b812/339752101C26FA69ECAA7F5D5FF54C1A/83D7D16C2A5A1777978EE67FFC75C115
│ Suggestion: The resource is in a conflicted state. Please retry again or contact support for help with service: Network Load Balancer Backend Set
I wonder if it could be in any way related to the limit on the number of allowed load balancer attachments that we were dealing with earlier, or perhaps to the OCI Terraform provider itself?
One additional piece of information: we are currently using network load balancers. When I tried to replace them with L7 load balancers, the issue did not appear.
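Since the error usually clears after one or two retries, a retry wrapper in the pipeline could be a stopgap until the root cause is found. This is only a sketch on my side (the `retry_cmd` helper name and the attempt/delay values are my own, not from any OCI or Terraform tooling):

```shell
# retry_cmd: run a command up to a maximum number of attempts, sleeping
# between attempts. Sketch for wrapping "terraform apply" in the TeamCity
# pipeline, since the 409 usually clears after one or two retries.
retry_cmd() {
  max_attempts=$1; delay=$2; shift 2
  attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "command still failing after $max_attempts attempts" >&2
      return 1
    fi
    attempt=$((attempt + 1))
    echo "command failed, retrying (attempt $attempt of $max_attempts)..." >&2
    sleep "$delay"
  done
}
```

In the TeamCity build step this could wrap the apply, e.g. `retry_cmd 3 30 terraform apply -auto-approve`.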
--------
This code was causing it:

resource "oci_network_load_balancer_backend_set" "backend_set" {
  for_each = toset(var.service_list)

  depends_on = [
    oci_network_load_balancer_network_load_balancer.network_load_balancer
  ]

  health_checker {
    protocol          = "TCP"
    port              = module.services[each.key].port
    retries           = 9999
    timeout_in_millis = 9999
  }

  name                     = "bc-${module.services[each.key].name}"
  network_load_balancer_id = oci_network_load_balancer_network_load_balancer.network_load_balancer[0].id
  policy                   = "FIVE_TUPLE"
  is_preserve_source       = false
}
And this is the load balancer:
resource "oci_network_load_balancer_network_load_balancer" "network_load_balancer" {
  # Create one LB when the service list has more than 0 items
  count = length(var.service_list) > 0 ? 1 : 0

  compartment_id = var.compartment_ocid
  display_name   = "${var.env_name}-loadbalancer-${var.group_name}"
  subnet_id      = var.subnet_id
  is_private     = true

  network_security_group_ids = [
    var.network_security_group_id
  ]

  is_preserve_source_destination = false
}
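One observation that might help narrow it down (an assumption on my side, not something I have confirmed): `for_each` creates all the backend sets in parallel against the same NLB, and each creation appears to move the NLB through an Updating state, so the 409 could come from that concurrency. Terraform's `-parallelism` flag limits how many operations run at once, so a quick experiment would be:

```shell
# Diagnostic sketch: force Terraform to run one operation at a time, so the
# backend sets are created sequentially instead of concurrently. If the
# "Invalid State Transition" 409 disappears, concurrent updates to the same
# NLB are the likely trigger.
terraform apply -parallelism=1 -auto-approve
```

The default parallelism is 10, so this noticeably slows the apply and is only useful as a test, not a fix.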
I hope you can help me see what is going on.
Thanks a lot for your support.
Regards
Vicente Paniagua