- '--oidc-username-prefix=oidc:'
- '--oidc-groups-prefix=oidc:'

I'd remove these. Does kubectl work?
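After editing the static pod manifest, one way to confirm which OIDC flags the running apiserver actually picked up (a minimal sketch; the `component=kube-apiserver` label assumes a kubeadm-style install):

```
# print the command-line flags of the running kube-apiserver and filter for oidc
kubectl -n kube-system get pod -l component=kube-apiserver \
  -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep oidc
```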
I removed those options and the kube-apiserver pods were restarted, but I still have the same issue. I had the CLI working at one point, but now I get the following:
```
kubectl get all -n openunison
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
```
Logs from the orchestra container show the following around that time:
```
[2021-08-02 16:11:33,748][Thread-14] WARN OpenShiftTarget - Unexpected result calling 'https://10.233.0.1:443/apis/openunison.tremolo.io/v1/namespaces/openunison/oidc-sessions/xab88e020-51d2-41e6-a8ed-39f91db7630dx' - 404 / {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"oidc-sessions.openunison.tremolo.io \"xab88e020-51d2-41e6-a8ed-39f91db7630dx\" not found","reason":"NotFound","details":{"name":"xab88e020-51d2-41e6-a8ed-39f91db7630dx","group":"openunison.tremolo.io","kind":"oidc-sessions"},"code":404}
[2021-08-02 16:11:33,780][Thread-14] WARN OpenShiftTarget - Unexpected result calling 'https://10.233.0.1:443/apis/openunison.tremolo.io/v1/namespaces/openunison/oidc-sessions/xf8ae0fe5-5dcf-4b44-8068-3c2ad4adb3bax' - 404 / {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"oidc-sessions.openunison.tremolo.io \"xf8ae0fe5-5dcf-4b44-8068-3c2ad4adb3bax\" not found","reason":"NotFound","details":{"name":"xf8ae0fe5-5dcf-4b44-8068-3c2ad4adb3bax","group":"openunison.tremolo.io","kind":"oidc-sessions"},"code":404}
[2021-08-02 16:11:33,811][Thread-14] WARN OpenShiftTarget - Unexpected result calling 'https://10.233.0.1:443/apis/openunison.tremolo.io/v1/namespaces/openunison/oidc-sessions/x10bd21fd-bad3-48ba-913f-e558e79df184x' - 404 / {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"oidc-sessions.openunison.tremolo.io \"x10bd21fd-bad3-48ba-913f-e558e79df184x\" not found","reason":"NotFound","details":{"name":"x10bd21fd-bad3-48ba-913f-e558e79df184x","group":"openunison.tremolo.io","kind":"oidc-sessions"},"code":404}
[2021-08-02 16:11:33,825][Thread-14] WARN OpenShiftTarget - Unexpected result calling 'https://10.233.0.1:443/apis/openunison.tremolo.io/v1/namespaces/openunison/oidc-sessions/xac4f503b-d8f1-4327-95a1-e288ad04bd24x' - 404 / {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"oidc-sessions.openunison.tremolo.io \"xac4f503b-d8f1-4327-95a1-e288ad04bd24x\" not found","reason":"NotFound","details":{"name":"xac4f503b-d8f1-4327-95a1-e288ad04bd24x","group":"openunison.tremolo.io","kind":"oidc-sessions"},"code":404}
```
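Those 404s show OpenUnison looking up oidc-sessions objects that no longer exist. As a quick sanity check that the CRD is registered and to see which session objects are actually present (standard kubectl; the plural resource name is taken from the log lines above):

```
# confirm the CRD backing the session objects exists
kubectl get crd oidc-sessions.openunison.tremolo.io

# list the session objects currently in the namespace
kubectl get oidc-sessions.openunison.tremolo.io -n openunison
```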
I have an assumption that it may be cert related. I worked through the original cert issue with your help here: https://github.com/OpenUnison/openunison-k8s-login-oidc/issues/43, but it definitely seems like it could still be something with the cert.
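If the cert is the suspect, one quick check is to inspect what the endpoint actually serves (the hostname below is a placeholder for whatever the kubectl context points at):

```
# inspect the certificate chain presented by the API endpoint (placeholder host/port)
openssl s_client -connect k8s-api.example.com:443 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```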
Can you run kubectl get all -n openunison --v=11? I'd like to see which URL is failing.
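For anyone following along: at --v=11, kubectl prints every HTTP request it makes along with the response headers and bodies, so the failing URL shows up directly in the output:

```
kubectl get all -n openunison --v=11
# scan the output for the request that produces the InternalError, e.g. lines like:
#   GET https://<api-endpoint>/api/v1/namespaces/openunison/pods?limit=500
#   Response Status: 500 Internal Server Error ...
# (the URL shown here is illustrative, not output from this cluster)
```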
Running that command led me to a problem with my F5 virtual server setup for the cluster. I remedied that, and now the CLI works as it should. The problem with the dashboard being unauthorized still persists, though.
Try changing `image` in your values.yaml file to docker.io/tremolosecurity/betas:oidc-1.0.23-1 and, once it's redeployed, access the dashboard again.
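A rough sketch of what that change could look like; the top-level `image` key is assumed from the comment above, and the release/chart names in the redeploy command are placeholders:

```
# values.yaml (excerpt, illustrative)
image: docker.io/tremolosecurity/betas:oidc-1.0.23-1
```

Then redeploy with something like `helm upgrade orchestra tremolo/orchestra -n openunison -f values.yaml` before retrying the dashboard.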
Closing due to inactivity.
Working through this setup, I am able to log in via Okta and access the cluster via the CLI. However, when trying to access the dashboard, I only see the alarm bell with the Unauthorized error. I have gone through the steps in the documentation for adding the OIDC vars to /etc/kubernetes/manifests/kube-apiserver.yaml here.
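For context, the vars in question are the standard kube-apiserver OIDC flags; a sketch with placeholder values (not the ones from my cluster):

```
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt, values are placeholders)
- --oidc-issuer-url=https://k8sou.example.com/auth/idp/k8sIdp
- --oidc-client-id=kubernetes
- --oidc-username-claim=sub
- --oidc-groups-claim=groups
- --oidc-ca-file=/etc/kubernetes/pki/ou-ca.pem
```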
Here is what I see in the logs of the Dashboard pod (just a snippet; there is a lot more of the same):
And here are the corresponding logs from the kube-apiserver pod:
Any ideas on where to start looking to figure out the problem with the invalid bearer token?
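One place to start, as a sketch: decode the id-token being sent and compare its iss and aud claims against the apiserver's --oidc-issuer-url and --oidc-client-id (the kubeconfig user name below is a placeholder; JWT payloads are base64url-encoded, so padding may need adjusting):

```
# extract the id-token from the kubeconfig (user name is a placeholder)
TOKEN=$(kubectl config view --raw \
  -o jsonpath='{.users[?(@.name=="oidc-user")].user.auth-provider.config.id-token}')

# decode the JWT payload and check iss / aud against the apiserver flags
echo "$TOKEN" | cut -d. -f2 | tr '_-' '/+' | base64 -d 2>/dev/null; echo
```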