eaglejack85 opened this issue 4 years ago
It looks more like a Terraform issue where the subnet is being deleted before the LB has been deleted completely. Can you verify this with newer versions of oci-cloud-controller-manager and confirm whether the issue still persists?
We create two OCI load balancers with the following lines in one of our Helm charts:
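A rough, illustrative sketch of the kind of Service template involved (the names, shape value, and subnet OCID below are placeholders, not the actual chart contents):

```yaml
# Hypothetical sketch of a chart-rendered public LB Service; all values are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: example-public-lb
  annotations:
    # oci-cloud-controller-manager annotations; values shown are examples only
    service.beta.kubernetes.io/oci-load-balancer-shape: "100Mbps"
    service.beta.kubernetes.io/oci-load-balancer-subnet1: "ocid1.subnet.oc1..example"
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
    - port: 443
      targetPort: 8443
```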
and one internal load balancer with these lines:
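A similar hypothetical sketch for the internal one, where the main difference is the oci-load-balancer-internal annotation:

```yaml
# Hypothetical sketch of the internal LB Service; all values are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: example-internal-lb
  annotations:
    service.beta.kubernetes.io/oci-load-balancer-internal: "true"
    service.beta.kubernetes.io/oci-load-balancer-shape: "100Mbps"
    service.beta.kubernetes.io/oci-load-balancer-subnet1: "ocid1.subnet.oc1..example"
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
    - port: 8200
      targetPort: 8200
```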
Environments are provisioned with Terraform 0.12.19 and the following Terraform providers:

- oci 3.74.0
- helm 0.10.2
- kubernetes 1.9.0
- tls 2.1.1
There are no issues with creation; everything works well in both the dev and production OCI tenancies. The problem arises randomly during destruction of the environment, where one of the three load balancers is not destroyed, which in turn prevents the load balancer subnet from being destroyed. This happens randomly and only in the production OCI tenancy.
BUG REPORT
Versions
CCM Version:
Environment:
- Kubernetes version (use `kubectl version`): v1.15.7
- OS (e.g. from /etc/os-release): ORACLE_BUGZILLA_PRODUCT="Oracle Linux 7" ORACLE_BUGZILLA_PRODUCT_VERSION=7.6 ORACLE_SUPPORT_PRODUCT="Oracle Linux" ORACLE_SUPPORT_PRODUCT_VERSION=7.6
- Kernel (e.g. `uname -a`): Linux vault01 4.14.35-1844.5.3.el7uek.x86_64 #2 SMP Wed May 8 21:50:52 PDT 2019 x86_64 x86_64 x86_64 GNU/Linux

What happened?
One of the OCI load balancers created by oci-cloud-controller-manager in the production tenancy randomly does not receive the work request to be terminated when the Helm release is removed by terraform destroy.
What you expected to happen?
All OCI load balancers to be consistently terminated
How to reproduce it (as minimally and precisely as possible)?
Create a Terraform module that deploys a Helm release which creates an OCI load balancer by rendering a Kubernetes Service with service.beta.kubernetes.io/oci-load-balancer-* annotations set, then try destroying the module with terraform destroy. A minimal sketch of such a module is shown below.
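A minimal Terraform sketch of such a module, assuming a local chart whose template renders the annotated Service shown above; the chart path, release name, and values file are illustrative placeholders, not the actual configuration:

```hcl
# Hypothetical reproduction module; chart path, names, and values are placeholders.
resource "helm_release" "lb_app" {
  name      = "lb-app"
  namespace = "default"
  chart     = "./charts/lb-app" # chart whose template renders a type: LoadBalancer Service

  # The chart's values carry the service.beta.kubernetes.io/oci-load-balancer-*
  # annotations, so no extra `set` blocks are needed here.
  values = [file("${path.module}/values.yaml")]
}
```

Destroying this module removes the Helm release, which asks the cloud controller manager to tear down the load balancer; the load balancer subnet managed in the same state can only be destroyed once that work request completes.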
Anything else we need to know?