utianayuba closed this issue 2 years ago.
This issue is not happening on OpenStack Wallaby.
Any suggestions?
I am not sure it's caused by the Wallaby/Xena difference; if so, it might be an OpenStack Octavia and Barbican issue. It looks like we have this logic:
if c.osClient.Barbican != nil && ing.Spec.TLS != nil {
    nameFilter := fmt.Sprintf("kube_ingress_%s_%s_%s", c.config.ClusterName, ing.Namespace, ing.Name)
    if err := openstackutil.DeleteSecrets(c.osClient.Barbican, nameFilter); err != nil {
        return fmt.Errorf("failed to remove Barbican secrets: %v", err)
    }
    logger.Info("Barbican secrets deleted")
}
So, from your octavia-worker log, it looks like Octavia also tries to delete the secret? I guess we need to check the LB and secret handling logic in CPO and OpenStack to figure out which component should be the one doing the delete, and whether the second one needs to ignore a 404 Not Found error.
I guess the Octavia worker fails to delete the LB because Barbican no longer has the needed secret.
If so, that means that when the "failed to remove Barbican secrets: %v" error from the code above shows up in the log, we should add a tolerance: check whether the error is a 404 Not Found and, if it is, continue with the delete action.
The original issue report doesn't seem to contain that error, though? Anyway, this might be an enhancement point.
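A minimal sketch of that tolerance, assuming the gophercloud key-manager (Barbican) v1 client; deleteSecretIgnoringNotFound is a hypothetical helper for illustration, not the existing openstackutil.DeleteSecrets:

// Sketch only: treat an already-deleted Barbican secret as success so it
// does not abort the rest of the Ingress cleanup.
package example

import (
    "errors"
    "fmt"

    "github.com/gophercloud/gophercloud"
    "github.com/gophercloud/gophercloud/openstack/keymanager/v1/secrets"
)

func deleteSecretIgnoringNotFound(barbican *gophercloud.ServiceClient, secretID string) error {
    err := secrets.Delete(barbican, secretID).ExtractErr()
    if err == nil {
        return nil
    }
    // If Barbican says the secret is already gone, keep going with the
    // remaining delete actions instead of failing the whole cleanup.
    var notFound gophercloud.ErrDefault404
    if errors.As(err, &notFound) {
        return nil
    }
    return fmt.Errorf("failed to remove Barbican secret %s: %v", secretID, err)
}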
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
Still happening on octavia_ingress_controller_tag=v1.24.6.
/kind bug
What happened:
What you expected to happen: the OpenStack LB is deleted automatically right after the TLS Octavia Ingress on Kubernetes is deleted.
How to reproduce it:
Anything else we need to know?: Error messages appear in /var/log/kolla/octavia/octavia-worker.log.
It looks like the OpenStack secret was deleted before the LB was deleted.
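If that ordering is indeed the root cause, a hedged sketch of the opposite order, assuming the gophercloud Octavia (load-balancer v2) and Barbican (key-manager v1) clients, could look like the following. teardownIngressLB, waitForLBDeleted, the cascade delete, and the five-second/five-minute polling budget are illustrative assumptions, not the octavia-ingress-controller's actual implementation:

// Sketch only: delete the load balancer first, wait until Octavia has
// actually removed it, and only then delete the Barbican secrets it
// referenced, so the Octavia worker never looks up a secret that is gone.
package example

import (
    "errors"
    "fmt"
    "time"

    "github.com/gophercloud/gophercloud"
    "github.com/gophercloud/gophercloud/openstack/keymanager/v1/secrets"
    "github.com/gophercloud/gophercloud/openstack/loadbalancer/v2/loadbalancers"
)

func teardownIngressLB(octavia, barbican *gophercloud.ServiceClient, lbID string, secretIDs []string) error {
    // Cascade delete removes the LB together with its listeners and pools,
    // which are what reference the Barbican TLS containers.
    err := loadbalancers.Delete(octavia, lbID, loadbalancers.DeleteOpts{Cascade: true}).ExtractErr()
    if err != nil {
        var notFound gophercloud.ErrDefault404
        if !errors.As(err, &notFound) {
            return fmt.Errorf("failed to delete load balancer %s: %v", lbID, err)
        }
    }
    if err := waitForLBDeleted(octavia, lbID); err != nil {
        return err
    }
    // The LB is gone, so nothing references the secrets any more and they
    // can be removed without breaking the Octavia worker.
    for _, id := range secretIDs {
        if err := secrets.Delete(barbican, id).ExtractErr(); err != nil {
            var notFound gophercloud.ErrDefault404
            if !errors.As(err, &notFound) {
                return fmt.Errorf("failed to remove Barbican secret %s: %v", id, err)
            }
        }
    }
    return nil
}

// waitForLBDeleted polls Octavia until the LB returns 404, i.e. the worker
// has finished tearing it down, or until roughly five minutes have passed.
func waitForLBDeleted(octavia *gophercloud.ServiceClient, lbID string) error {
    for i := 0; i < 60; i++ {
        _, err := loadbalancers.Get(octavia, lbID).Extract()
        if err != nil {
            var notFound gophercloud.ErrDefault404
            if errors.As(err, &notFound) {
                return nil
            }
            return fmt.Errorf("failed to poll load balancer %s: %v", lbID, err)
        }
        time.Sleep(5 * time.Second)
    }
    return fmt.Errorf("timed out waiting for load balancer %s to be deleted", lbID)
}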
Environment: