skupperproject / skupper

Skupper is an implementation of a Virtual Application Network, enabling rich hybrid cloud communication.
http://skupper.io
Apache License 2.0

--console-auth=openshift fails without explanation on non-openshift clusters #1370

Open hash-d opened 10 months ago

hash-d commented 10 months ago

Describe the bug

Running skupper init with --console-auth=openshift on a non-OpenShift cluster fails to bring Skupper up, and nothing in the output points to the reason for the failure.

How To Reproduce

Execute on a non-OpenShift cluster:

$ skupper init --enable-console --enable-flow-collector --console-auth=openshift
Waiting for LoadBalancer IP or hostname...
Waiting for status...
Skupper status is not loaded yet.
Skupper is now installed in namespace 'dh-oauth'.  Use 'skupper status' to get more information.
$ skupper status
Status pending...
$ k get pod
NAME                                          READY   STATUS              RESTARTS   AGE
skupper-prometheus-5df7c7f66f-hh6cj           1/1     Running             0          72s
skupper-router-8655964fb-x6jdp                2/2     Running             0          75s
skupper-service-controller-64849cc6f9-tqjbk   0/3     ContainerCreating   0          73s
$ k get event | grep Warn
2m18s       Warning   FailedMount         pod/skupper-router-8655964fb-x6jdp                 MountVolume.SetUp failed for volume "claims-cert" : secret "skupper-site-server" not found
2m19s       Warning   FailedMount         pod/skupper-router-8655964fb-x6jdp                 MountVolume.SetUp failed for volume "router-config" : configmap "skupper-internal" not found
2m18s       Warning   FailedMount         pod/skupper-router-8655964fb-x6jdp                 MountVolume.SetUp failed for volume "skupper-site-server" : secret "skupper-site-server" not found
2m10s       Warning   Unhealthy           pod/skupper-router-8655964fb-x6jdp                 Readiness probe failed: Get "http://172.17.0.6:9090/healthz": dial tcp 172.17.0.6:9090: connect: connection refused
2m7s        Warning   Unhealthy           pod/skupper-router-8655964fb-x6jdp                 Readiness probe failed: Get "http://172.17.0.6:9191/healthz": dial tcp 172.17.0.6:9191: connect: connection refused
10s         Warning   FailedMount         pod/skupper-service-controller-64849cc6f9-tqjbk    MountVolume.SetUp failed for volume "skupper-console-certs" : secret "skupper-console-certs" not found
15s         Warning   FailedMount         pod/skupper-service-controller-64849cc6f9-tqjbk    Unable to attach or mount volumes: unmounted volumes=[skupper-console-certs], unattached volumes=[skupper-local-client kube-api-access-96gng skupper-console-certs]: timed out waiting for the condition
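The FailedMount warnings above show the service-controller pod waiting on a secret that is never created. A quick way to confirm (a diagnostic sketch, not part of the original report; the pod name is taken from the `k get pod` output above and will differ per install):

```shell
# The secret the service-controller volume mount is waiting on;
# with --console-auth=openshift on a non-OpenShift cluster it never appears.
kubectl get secret skupper-console-certs -n dh-oauth

# The pod events repeat the FailedMount reason, confirming the pod is
# stuck in ContainerCreating rather than crashing.
kubectl describe pod skupper-service-controller-64849cc6f9-tqjbk -n dh-oauth
```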

Expected behavior

I'm not sure. One of:

- skupper init fails immediately with a clear error that openshift console auth is not available on this cluster, or
- the install completes and reports why the console could not come up.

Environment details

Additional context

hash-d commented 2 months ago

This is still true for 1.8.1:

$ skupper init --enable-flow-collector --enable-console --console-auth openshift
Waiting for LoadBalancer IP or hostname...
Waiting for status...
Skupper status is not loaded yet.
Skupper is now installed in namespace 'default'.  Use 'skupper status' to get more information.
$ k get pod 
NAME                                          READY   STATUS              RESTARTS   AGE
skupper-prometheus-5956497974-5j4md           1/1     Running             0          82s
skupper-router-5cdc76b5c5-2vzrr               2/2     Running             0          86s
skupper-service-controller-65d8bbdbcd-mrmmg   0/3     ContainerCreating   0          83s
$ skupper version
client version                 1.8.1-rh-1
transport version              x/y/service-interconnect-skupper-router-rhel9:2.7.1-1 (sha256:3e3cc571cfd9)
controller version             not-found
config-sync version            x/y/service-interconnect-config-sync-rhel9:1.8.1-1 (sha256:65fc88e0d018)
flow-collector version         not-found
$ skupper status
Status pending...
$

maaft commented 1 week ago

Have you found any solution to this?

Experiencing this in one of my k3s clusters. In other similar clusters it works fine, though.

hash-d commented 1 week ago

@maaft, since you're using k3s, you won't have OpenShift authentication available in the first place: it depends on OpenShift components that are only deployed on an OpenShift cluster. Just use another console auth option (such as internal) and you should be good.
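A minimal sketch of that workaround (the skupper-console-users secret name is an assumption based on skupper's internal-auth behavior; verify it against your install):

```shell
# On a non-OpenShift cluster, use internal console auth instead of openshift.
skupper init --enable-console --enable-flow-collector --console-auth internal

# The generated console credentials are stored in a secret
# (assumed name: skupper-console-users; keys are usernames).
kubectl get secret skupper-console-users -o yaml
```

skupper init also accepts --console-user and --console-password if you prefer to set the credentials explicitly rather than have them generated.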