berghaus opened this issue 1 year ago
I'd like to figure out where this problem is coming from, but could use some help on where to look :confused:
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
I have the same issue, can this please be reopened?
There are similar issues that were also closed with no solution: #1995, #1250.
INFO [2024-04-10T16:06:38Z] ensuring security group ingress=REDACTED
INFO [2024-04-10T16:06:38Z] ensured security group ingress=REDACTED
INFO [2024-04-10T16:06:38Z] secret created in Barbican ingress=REDACTED secretName=REDACTED secretRef=REDACTED
INFO [2024-04-10T16:06:38Z] creating listener lbID=REDACTED listenerName=REDACTED
E0410 16:06:41.106992 12 controller.go:548] failed to create openstack resources for ingress REDACTED: error creating listener: Bad request with: [POST https://REDACTED/v2.0/lbaas/listeners], error message: {"faultcode": "Client", "faultstring": "Could not retrieve certificate: ['https://REDACTED/v1/secrets/REDACTED', 'https://REDACTED/v1/secrets/REDACTED']", "debuginfo": null}
I0410 16:06:41.107218 12 event.go:364] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"REDACTED", Name:"REDACTED", UID:"REDACTED", APIVersion:"networking.k8s.io/v1", ResourceVersion:"12066708", FieldPath:""}): type: 'Warning' reason: 'Failed' Failed to create openstack resources for ingress REDACTED: error creating listener: Bad request with: [POST https://REDACTED/v2.0/lbaas/listeners], error message: {"faultcode": "Client", "faultstring": "Could not retrieve certificate: ['https://REDACTED/v1/secrets/REDACTED', 'https://REDACTED/v1/secrets/REDACTED']", "debuginfo": null}
Sure. This is an error on the Octavia side; can you provide the octavia-api logs for this POST request so we can see the root cause?
Here are the related logs.
Listener creation is failing with a "Not Found: Secrets container not found" error on the Octavia side.
As you can see in the following Octavia and Barbican logs, there is a request for secret creation and the secret is created correctly in Barbican, but there is no request to create a secret container, and no such container exists in Barbican.
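(For anyone wanting to confirm the same thing from the CLI, a rough sketch follows; it assumes you run it with credentials for the same project the ingress controller uses, and the container href is a placeholder that only illustrates the lookup that fails.)
$ openstack secret list
$ openstack secret container list
# The secret shows up in the first list, but no corresponding container exists,
# so fetching it by href is expected to fail:
$ openstack secret container get https://<barbican-endpoint>/v1/containers/<secret-uuid>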
Hi @dulek did you have a chance to check the logs I provided?
I'm currently on holiday, will reply when I'm back.
@dulek did you have a chance to check?
@okozachenko1203
According to your logs, the secrets were created using the internal barbican endpoint and they're requested by OCCM using a public endpoint:
http://barbican-api.openstack.svc.cluster.local:9311/v1/secrets/bfc96ae2-2f11-4d98-9094-8a89d0fcdcf5
vs
https://key-manager.openstack.vistex.local/v1/secrets/bfc96ae2-2f11-4d98-9094-8a89d0fcdcf5
Can you try to use the keystone user configured in OCCM and check whether it can access http://barbican-api.openstack.svc.cluster.local:9311/v1/secrets/bfc96ae2-2f11-4d98-9094-8a89d0fcdcf5 and/or https://key-manager.openstack.vistex.local/v1/secrets/bfc96ae2-2f11-4d98-9094-8a89d0fcdcf5?
$ openstack secret get %URL%
If there is a permission issue, Barbican would return 403. In your case you clearly get 404, which may indicate the reverse proxy or Barbican webserver is configured with the wrong location/endpoint.
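(Concretely, something like the following; this is only a sketch that assumes your openstackclient environment is loaded with the same credentials OCCM uses, and it reuses the secret hrefs from the logs above.)
$ openstack secret get http://barbican-api.openstack.svc.cluster.local:9311/v1/secrets/bfc96ae2-2f11-4d98-9094-8a89d0fcdcf5
$ openstack secret get https://key-manager.openstack.vistex.local/v1/secrets/bfc96ae2-2f11-4d98-9094-8a89d0fcdcf5
# Or hit the endpoint directly to see the raw HTTP status (403 vs 404):
$ TOKEN=$(openstack token issue -f value -c id)
$ curl -s -o /dev/null -w "%{http_code}\n" -H "X-Auth-Token: $TOKEN" https://key-manager.openstack.vistex.local/v1/secrets/bfc96ae2-2f11-4d98-9094-8a89d0fcdcf5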
@kayrus
So the problem is that the secret container was not created by the octavia-ingress-controller.
I see, so the problem is that modern Octavia requires a container, not a secret? And the container is not created by the ingress controller, right? Looks related to #2461.
Yeah, it is right that the secret container is not created by the ingress controller, but I don't think issue #2461 is related to this.
It is failing at this line: https://github.com/kubernetes/cloud-provider-openstack/blob/a59b8a28d23b1f265eb066e760b56d72ad29e91f/pkg/ingress/controller/controller.go#L776 https://github.com/kubernetes/cloud-provider-openstack/blob/64b813046f25b41aa4295a0a51726bcf25e92bc7/pkg/ingress/controller/openstack/octavia.go#L352-L364
The secret is created by the ingress controller before EnsureListener is called: https://github.com/kubernetes/cloud-provider-openstack/blob/a59b8a28d23b1f265eb066e760b56d72ad29e91f/pkg/ingress/controller/controller.go#L751-L762
INFO [2024-04-25T17:23:26Z] secret created in Barbican ingress=elkstack/logstash-logstash lbID=d37899b3-f0df-4c80-bdf8-2386f89875fa secretName=kube_ingress_vstx1-useg-k8s-1_elkstack_logstash-logstash_logstash-tls secretRef="https://key-manager.openstack.vistex.local/v1/secrets/bfc96ae2-2f11-4d98-9094-8a89d0fcdcf5"
As you can see, the secret ref is /v1/secrets/bfc96ae2-2f11-4d98-9094-8a89d0fcdcf5 and it is used in the listener createOpts. On the Octavia side, it then derives a container ref of containers/bfc96ae2-2f11-4d98-9094-8a89d0fcdcf5, which does not exist, tries to fetch it, and finally fails.
So I wonder whether the secret container creation should be triggered by the ingress controller explicitly.
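(To illustrate what that would mean, the container the lookup expects could in principle be created manually with the Barbican CLI, roughly as below. This is only a sketch: the container name is hypothetical, the exact flags depend on your python-barbicanclient version, and whether legacy Octavia would accept a single PKCS12 secret under the "certificate" key, rather than separate certificate/private_key secrets, would still need checking.)
$ openstack secret container create --type certificate \
    --name kube_ingress_logstash-tls \
    --secret "certificate=https://key-manager.openstack.vistex.local/v1/secrets/bfc96ae2-2f11-4d98-9094-8a89d0fcdcf5"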
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
/remove-lifecycle rotten
Hi folks, any update or thoughts here?
I figured out what the problem is. For my case, it turned out that the issue was not related to CCM. To summarize, Octavia has exception logic for the PKCS12 load which falls back to the legacy Barbican cert manager instead of the default modern Barbican cert manager implementation. The legacy one expects a secret container, which is not created in the modern Barbican service. I had to backport some upstream patches to log that fallback, and then fix the root cause of the PKCS12 load failure. Thanks.
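(For anyone debugging the same fallback, one quick way to check whether the PKCS12 bundle the ingress controller stored in Barbican actually loads is sketched below; it fetches the payload through Barbican's raw secrets API and assumes the bundle has an empty passphrase, which is an assumption worth verifying for your deployment.)
$ TOKEN=$(openstack token issue -f value -c id)
# Download the raw PKCS12 payload (Barbican's GET /v1/secrets/{uuid}/payload endpoint):
$ curl -s -H "X-Auth-Token: $TOKEN" -H "Accept: application/octet-stream" \
    https://key-manager.openstack.vistex.local/v1/secrets/bfc96ae2-2f11-4d98-9094-8a89d0fcdcf5/payload -o /tmp/ingress.p12
# If this fails to parse, Octavia's PKCS12 load would fail too and trigger the legacy fallback
# (-passin pass: assumes an empty passphrase):
$ openssl pkcs12 -in /tmp/ingress.p12 -passin pass: -info -noout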
can we close the issue?
Well, I'm not sure, because this issue was originally raised by others.
Let me attempt the steps @okozachenko1203 roughly outlined; more details would be helpful though :-)
@okozachenko1203 were you able to bypass the bug? If so, I would really appreciate a short explanation of how you managed to do it. I cannot
Is this a BUG REPORT or FEATURE REQUEST?: /kind bug
What happened: The octavia-ingress-controller fails to create the listener when trying to enable tls encryption.
What you expected to happen: A listener with TLS termination to be created.
How to reproduce it: Follow the documentation on setting up the ingress controller. Here is my configuration for octavia-ingress-controller:
Then follow the documentation to enable TLS encryption.
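(The TLS step there boils down to storing the certificate and key in a standard Kubernetes TLS secret that the Ingress references in its spec.tls section; a minimal sketch with hypothetical file and secret names:)
$ kubectl create secret tls tls-secret --cert=tls.crt --key=tls.key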
Anything else we need to know?: HTTP ingress in this setup worked as expected.
Logs of the octavia-ingress-controller:
Note that I can retrieve the secret using the application credential, that is with
I can do this:
Environment: