openshift / cluster-authentication-operator

OpenShift operator for the top level Authentication and OAuth configs.
Apache License 2.0

4.2 Authentication Operator Fails on vSphere Install #223

MattPOlson closed this issue 3 years ago

MattPOlson commented 4 years ago

I'm setting up OpenShift 4.2 on vSphere. I have the cluster running and all the operators are healthy except the Authentication Operator, which is throwing the error below. I have 3 workers and two masters, and I'm using HAProxy for my load balancer. I've verified its configuration, all nodes show as healthy and available, and I've verified connectivity between all nodes. Details are below; please let me know if anything more is needed.

  Conditions:
    Last Transition Time:  2019-12-05T15:21:50Z
    Reason:                AsExpected
    Status:                False
    Type:                  Degraded
    Last Transition Time:  2019-12-05T00:23:36Z
    Message:               Progressing: got '404 Not Found' status while trying to GET the OAuth well-known https://10.6.202.40:6443/.well-known/oauth-authorization-server endpoint data
    Reason:                ProgressingWellKnownNotReady
    Status:                True
    Type:                  Progressing
    Last Transition Time:  2019-12-05T00:23:36Z
    Reason:                Available
    Status:                False
    Type:                  Available
    Last Transition Time:  2019-12-05T00:23:36Z
    Reason:                AsExpected
    Status:                True
    Type:                  Upgradeable

I'm running this command to complete the install

openshift-install wait-for install-complete
INFO Cluster operator authentication Progressing is True with ProgressingWellKnownNotReady: Progressing: got '404 Not Found' status while trying to GET the OAuth well-known https://10.6.202.40:6443/.well-known/oauth-authorization-server endpoint data
INFO Cluster operator authentication Available is False with Available:
INFO Cluster operator insights Disabled is False with :
FATAL failed to initialize the cluster: Cluster operator authentication is still updating
[root@JX2LUTL01 ~]# oc get co
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                                       False       True          False      4d23h
cloud-credential                           4.2.2     True        False         False      5d6h
cluster-autoscaler                         4.2.2     True        False         False      5d6h
console                                    4.2.2     True        False         False      3d8h
dns                                        4.2.2     True        False         False      3d8h
image-registry                             4.2.2     True        False         False      3d8h
ingress                                    4.2.2     True        False         False      3d8h
insights                                   4.2.2     True        False         False      5d6h
kube-apiserver                             4.2.2     True        False         False      5d6h
kube-controller-manager                    4.2.2     True        False         False      5d6h
kube-scheduler                             4.2.2     True        False         False      5d6h
machine-api                                4.2.2     True        False         False      5d6h
machine-config                             4.2.2     True        False         False      5d5h
marketplace                                4.2.2     True        False         False      4d8h
monitoring                                 4.2.2     True        False         False      3d8h
network                                    4.2.2     True        False         False      5d6h
node-tuning                                4.2.2     True        False         False      3d8h
openshift-apiserver                        4.2.2     True        False         False      3d8h
openshift-controller-manager               4.2.2     True        False         False      3d8h
openshift-samples                          4.2.2     True        False         False      5d6h
operator-lifecycle-manager                 4.2.2     True        False         False      5d6h
operator-lifecycle-manager-catalog         4.2.2     True        False         False      5d6h
operator-lifecycle-manager-packageserver   4.2.2     True        False         False      3d8h
service-ca                                 4.2.2     True        False         False      5d6h
service-catalog-apiserver                  4.2.2     True        False         False      5d6h
service-catalog-controller-manager         4.2.2     True        False         False      5d6h
storage                                    4.2.2     True        False         False      5d6h
[root@JX2LUTL01 ~]# oc get pods -n=openshift-authentication-operator
NAME                                       READY   STATUS    RESTARTS   AGE
authentication-operator-75ffd7fb6c-w85qx   1/1     Running   0          4h8m
[root@JX2LUTL01 ~]# oc get pods -n=openshift-authentication
NAME                               READY   STATUS    RESTARTS   AGE
oauth-openshift-56d9f65fd7-2nr4r   1/1     Running   0          4h8m
oauth-openshift-56d9f65fd7-dk2mh   1/1     Running   0          4h7m
oc describe co authentication
Name:         authentication
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  config.openshift.io/v1
Kind:         ClusterOperator
Metadata:
  Creation Timestamp:  2019-12-05T00:23:36Z
  Generation:          1
  Resource Version:    2075821
  Self Link:           /apis/config.openshift.io/v1/clusteroperators/authentication
  UID:                 7a6cb015-16f5-11ea-8114-005056a3b687
Spec:
Status:
  Conditions:
    Last Transition Time:  2019-12-09T19:36:33Z
    Reason:                AsExpected
    Status:                False
    Type:                  Degraded
    Last Transition Time:  2019-12-05T00:23:36Z
    Message:               Progressing: got '404 Not Found' status while trying to GET the OAuth well-known https://10.6.202.40:6443/.well-known/oauth-authorization-server endpoint data
    Reason:                ProgressingWellKnownNotReady
    Status:                True
    Type:                  Progressing
    Last Transition Time:  2019-12-05T00:23:36Z
    Reason:                Available
    Status:                False
    Type:                  Available
    Last Transition Time:  2019-12-05T00:23:36Z
    Reason:                AsExpected
    Status:                True
    Type:                  Upgradeable
  Extension:               <nil>
  Related Objects:
    Group:     operator.openshift.io
    Name:      cluster
    Resource:  authentications
    Group:     config.openshift.io
    Name:      cluster
    Resource:  authentications
    Group:     config.openshift.io
    Name:      cluster
    Resource:  infrastructures
    Group:     config.openshift.io
    Name:      cluster
    Resource:  oauths
    Group:
    Name:      openshift-config
    Resource:  namespaces
    Group:
    Name:      openshift-config-managed
    Resource:  namespaces
    Group:
    Name:      openshift-authentication
    Resource:  namespaces
    Group:
    Name:      openshift-authentication-operator
    Resource:  namespaces
Events:        <none>

This command works from all masters and workers

curl https://10.6.202.40:6443/.well-known/oauth-authorization-server -k
{
  "paths": [
    "/apis",
    "/metrics",
    "/version"
  ]
}
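Note that the body above is the API server's generic path listing, not OAuth authorization-server metadata, which is consistent with the 404 the operator reports. As a rough sanity check (a sketch; the `check_wellknown` helper is an illustrative name, not part of any tooling), valid OAuth authorization-server metadata per RFC 8414 must at least carry an `issuer` field:

```shell
# Hypothetical helper: does a response body look like OAuth
# authorization-server metadata (RFC 8414 requires an "issuer" field)
# rather than the API server's generic {"paths": [...]} listing?
check_wellknown() {
  printf '%s' "$1" | grep -q '"issuer"'
}

# The body captured above fails the check:
check_wellknown '{"paths":["/apis","/metrics","/version"]}' \
  && echo "OAuth metadata is published" \
  || echo "endpoint is not serving OAuth metadata"
# prints "endpoint is not serving OAuth metadata"
```

Against a healthy cluster, `curl -sk https://<api>:6443/.well-known/oauth-authorization-server | grep '"issuer"'` should match.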
[root@JX2LUTL01 ~]# oc logs oauth-openshift-56d9f65fd7-2nr4r -n=openshift-authentication
Copying system trust bundle
I1209 19:38:36.928941       1 secure_serving.go:65] Forcing use of http/1.1 only
I1209 19:38:36.929030       1 secure_serving.go:127] Serving securely on 0.0.0.0:6443
[root@JX2LUTL01 ~]# oc logs authentication-operator-75ffd7fb6c-w85qx -n=openshift-authentication-operator
Copying system trust bundle
I1209 19:37:24.376262       1 observer_polling.go:116] Starting file observer
I1209 19:37:24.376264       1 cmd.go:188] Using service-serving-cert provided certificates
I1209 19:37:24.377067       1 observer_polling.go:116] Starting file observer
I1209 19:37:24.884709       1 secure_serving.go:116] Serving securely on 0.0.0.0:8443
I1209 19:37:24.885329       1 leaderelection.go:217] attempting to acquire leader lease  openshift-authentication-operator/cluster-authentication-operator-lock...
I1209 19:38:25.477171       1 leaderelection.go:227] successfully acquired lease openshift-authentication-operator/cluster-authentication-operator-lock
I1209 19:38:25.477300       1 event.go:209] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-authentication-operator", Name:"cluster-authentication-operator-lock", UID:"faa2572c-16bb-11ea-b2f3-005056a3e087", APIVersion:"v1", ResourceVersion:"2075606", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 53698669-1abb-11ea-968e-0a580afe0254 became leader
I1209 19:38:25.479815       1 remove_stale_conditions.go:71] Starting RemoveStaleConditions
I1209 19:38:25.479910       1 status_controller.go:188] Starting StatusSyncer-authentication
I1209 19:38:25.480105       1 unsupportedconfigoverrides_controller.go:151] Starting UnsupportedConfigOverridesController
I1209 19:38:25.480117       1 logging_controller.go:82] Starting LogLevelController
I1209 19:38:25.480124       1 controller.go:204] Starting RouterCertsDomainValidationController
I1209 19:38:25.480130       1 management_state_controller.go:101] Starting management-state-controller-authentication
I1209 19:38:25.480261       1 controller.go:53] Starting AuthenticationOperator2
I1209 19:38:25.480921       1 resourcesync_controller.go:217] Starting ResourceSyncController
I1209 19:38:28.299980       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"309b7dbf-16bb-11ea-88fe-005056a3055e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed
E1209 19:38:54.527028       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
I1209 19:38:54.527633       1 status_controller.go:165] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2019-12-09T19:36:33Z","message":"RouteHealthDegraded: failed to GET route: net/http: TLS handshake timeout","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2019-12-05T00:23:36Z","message":"Progressing: got '404 Not Found' status while trying to GET the OAuth well-known https://10.6.202.40:6443/.well-known/oauth-authorization-server endpoint data","reason":"ProgressingWellKnownNotReady","status":"True","type":"Progressing"},{"lastTransitionTime":"2019-12-05T00:23:36Z","reason":"Available","status":"False","type":"Available"},{"lastTransitionTime":"2019-12-05T00:23:36Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}
I1209 19:38:54.532774       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"309b7dbf-16bb-11ea-88fe-005056a3055e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "" to "RouteHealthDegraded: failed to GET route: net/http: TLS handshake timeout"
I1209 19:38:55.951891       1 status_controller.go:165] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2019-12-09T19:36:33Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2019-12-05T00:23:36Z","message":"Progressing: got '404 Not Found' status while trying to GET the OAuth well-known https://10.6.202.40:6443/.well-known/oauth-authorization-server endpoint data","reason":"ProgressingWellKnownNotReady","status":"True","type":"Progressing"},{"lastTransitionTime":"2019-12-05T00:23:36Z","reason":"Available","status":"False","type":"Available"},{"lastTransitionTime":"2019-12-05T00:23:36Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}
I1209 19:38:55.956704       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"309b7dbf-16bb-11ea-88fe-005056a3055e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "RouteHealthDegraded: failed to GET route: net/http: TLS handshake timeout" to ""
W1209 19:44:15.503371       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2075606 (2077071)
W1209 19:46:22.589372       1 reflector.go:289] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.
W1209 19:46:42.500749       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2075606 (2077744)
W1209 19:46:58.500759       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2075606 (2077814)
W1209 19:47:37.499418       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2075606 (2078011)
W1209 19:48:07.508717       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Deployment ended with: too old resource version: 2075843 (2077090)
W1209 19:49:37.508003       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2077244 (2078512)
W1209 19:52:52.506131       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2077904 (2079390)
W1209 19:53:03.503265       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2078164 (2079439)
W1209 19:53:04.504946       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2077981 (2079441)
W1209 19:55:03.515775       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2078651 (2079949)
W1209 19:58:15.511349       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2079535 (2080770)
W1209 20:01:26.507378       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2079591 (2081614)
W1209 20:01:41.509589       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2079598 (2081689)
W1209 20:01:54.520611       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2080089 (2081752)
W1209 20:01:55.514431       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Deployment ended with: too old resource version: 2078063 (2080300)
W1209 20:04:04.671306       1 reflector.go:289] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.
W1209 20:05:22.516274       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2080936 (2082640)
W1209 20:08:14.511160       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2081774 (2083411)
W1209 20:10:30.517731       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2081836 (2083962)
W1209 20:11:46.525473       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2081901 (2084304)
W1209 20:12:28.521079       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2082779 (2084483)
W1209 20:16:13.514966       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2083551 (2085413)
W1209 20:17:04.538829       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2084099 (2085655)
W1209 20:17:16.721612       1 reflector.go:289] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.
W1209 20:17:48.525928       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2084623 (2085851)
W1209 20:19:33.519985       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Deployment ended with: too old resource version: 2081900 (2083208)
W1209 20:20:10.530486       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2084455 (2086434)
W1209 20:24:02.530452       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2085995 (2087464)
W1209 20:24:13.518678       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2085580 (2087511)
W1209 20:25:48.544517       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2085817 (2087921)
W1209 20:26:55.535049       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2086577 (2088235)
W1209 20:27:58.523566       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Deployment ended with: too old resource version: 2085764 (2086828)
W1209 20:30:17.536340       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2087607 (2089082)
W1209 20:31:44.803184       1 reflector.go:289] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.
[root@JX2LUTL01 ~]# oc describe pod authentication-operator-75ffd7fb6c-w85qx -n=openshift-authentication-operator
 State:          Running
      Started:      Mon, 09 Dec 2019 14:37:24 -0500
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     10m
      memory:  50Mi
    Environment:
      IMAGE:                   quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3e00a6a92d9443240151e5eaab44cc24a25d8fcb833c29404552494064923cb
      OPERATOR_IMAGE_VERSION:  4.2.2
      OPERAND_IMAGE_VERSION:   4.2.2_openshift
      POD_NAME:                authentication-operator-75ffd7fb6c-w85qx (v1:metadata.name)
    Mounts:
      /var/run/configmaps/config from config (rw)
      /var/run/configmaps/trusted-ca-bundle from trusted-ca-bundle (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from authentication-operator-token-7d6z8 (ro)
      /var/run/secrets/serving-cert from serving-cert (rw)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      authentication-operator-config
    Optional:  false
  trusted-ca-bundle:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      trusted-ca-bundle
    Optional:  true
  serving-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  serving-cert
    Optional:    true
  authentication-operator-token-7d6z8:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  authentication-operator-token-7d6z8
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  node-role.kubernetes.io/master=
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 120s
                 node.kubernetes.io/unreachable:NoExecute for 120s
Events:          <none>
[root@JX2LUTL01 ~]# oc describe pods oauth-openshift-56d9f65fd7-2nr4r -n=openshift-authentication
Name:                 oauth-openshift-56d9f65fd7-2nr4r
Namespace:            openshift-authentication
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 master2.openshift.bdlab.local/10.6.202.38
Start Time:           Mon, 09 Dec 2019 14:38:28 -0500
Labels:               app=oauth-openshift
                      pod-template-hash=56d9f65fd7
Annotations:          k8s.v1.cni.cncf.io/networks-status:
                        [{
                            "name": "openshift-sdn",
                            "interface": "eth0",
                            "ips": [
                                "10.254.1.216"
                            ],
                            "default": true,
                            "dns": {}
                        }]
                      openshift.io/scc: anyuid
                      operator.openshift.io/pull-spec:
                        quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3e00a6a92d9443240151e5eaab44cc24a25d8fcb833c29404552494064923cb
                      operator.openshift.io/rvs-hash: XxhPdZ7p7TZEWwxM-1X9YUaDpKJP34ykFlYB0NaBw3yySP7D1Up7d9YE2XUk7jccIPtaiRiyd2vcq76iUyR96g
Status:               Running
IP:                   10.254.1.216
IPs:                  <none>
Controlled By:        ReplicaSet/oauth-openshift-56d9f65fd7
Containers:
  oauth-openshift:
    Container ID:  cri-o://7aee822a5e2855b017559ba74f8221556e78aa7d242adab2b573dfc3a2cb20cd
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3e00a6a92d9443240151e5eaab44cc24a25d8fcb833c29404552494064923cb
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3e00a6a92d9443240151e5eaab44cc24a25d8fcb833c29404552494064923cb
    Port:          6443/TCP
    Host Port:     0/TCP
    Command:
      /bin/bash
      -ec
    Args:

      if [ -s /var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt ]; then
          echo "Copying system trust bundle"
          cp -f /var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
      fi
      exec oauth-server osinserver --config=/var/config/system/configmaps/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig --v=2

    State:          Running
      Started:      Mon, 09 Dec 2019 14:38:36 -0500
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        10m
      memory:     50Mi
    Liveness:     http-get https://:6443/healthz delay=30s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get https://:6443/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/config/system/configmaps/v4-0-config-system-cliconfig from v4-0-config-system-cliconfig (ro)
      /var/config/system/configmaps/v4-0-config-system-service-ca from v4-0-config-system-service-ca (ro)
      /var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle from v4-0-config-system-trusted-ca-bundle (ro)
      /var/config/system/secrets/v4-0-config-system-ocp-branding-template from v4-0-config-system-ocp-branding-template (ro)
      /var/config/system/secrets/v4-0-config-system-router-certs from v4-0-config-system-router-certs (ro)
      /var/config/system/secrets/v4-0-config-system-serving-cert from v4-0-config-system-serving-cert (ro)
      /var/config/system/secrets/v4-0-config-system-session from v4-0-config-system-session (ro)
      /var/config/user/idp/0/secret/v4-0-config-user-idp-0-file-data from v4-0-config-user-idp-0-file-data (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from oauth-openshift-token-r7dmh (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  v4-0-config-system-session:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  v4-0-config-system-session
    Optional:    true
  v4-0-config-system-cliconfig:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      v4-0-config-system-cliconfig
    Optional:  true
  v4-0-config-system-serving-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  v4-0-config-system-serving-cert
    Optional:    true
  v4-0-config-system-service-ca:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      v4-0-config-system-service-ca
    Optional:  true
  v4-0-config-system-router-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  v4-0-config-system-router-certs
    Optional:    true
  v4-0-config-system-ocp-branding-template:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  v4-0-config-system-ocp-branding-template
    Optional:    true
  v4-0-config-system-trusted-ca-bundle:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      v4-0-config-system-trusted-ca-bundle
    Optional:  true
  v4-0-config-user-idp-0-file-data:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  v4-0-config-user-idp-0-file-data
    Optional:    false
  oauth-openshift-token-r7dmh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  oauth-openshift-token-r7dmh
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  node-role.kubernetes.io/master=
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 120s
                 node.kubernetes.io/unreachable:NoExecute for 120s
Events:          <none>
maraku516 commented 4 years ago

I have the same issue, any update on this?

gfvirga commented 4 years ago

@maraku516, have you tried setting up an authentication method such as htpasswd or LDAP? The clusters I have installed only mark AVAILABLE as true after I set up a form of authentication other than the initial kubeadmin. https://docs.openshift.com/container-platform/4.3/authentication/identity_providers/configuring-htpasswd-identity-provider.html
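For reference, the htpasswd flow from the linked docs looks roughly like this; the file, secret, and provider names (`users.htpasswd`, `htpass-secret`, `my_htpasswd_provider`) are illustrative placeholders, not values from this cluster:

```shell
# Sketch of the htpasswd identity-provider setup from the linked docs.
# Create a local htpasswd file with one user (names are illustrative):
htpasswd -c -B -b users.htpasswd myuser mypassword

# Store it as a secret in the openshift-config namespace:
oc create secret generic htpass-secret \
  --from-file=htpasswd=users.htpasswd -n openshift-config

# Point the cluster OAuth config at that secret:
oc apply -f - <<'EOF'
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_htpasswd_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
EOF
```

Whether this flips Available to true depends on the cluster actually serving the well-known endpoint, which is the underlying problem in this issue.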

stlaz commented 4 years ago

https://10.6.202.40:6443/.well-known/oauth-authorization-server is served by the kube-apiserver; it looks like there might be trouble rolling out the latest kube-apiserver (KAS) config.
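If that is the case, inspecting the kube-apiserver rollout would be the natural next step. Something along these lines (standard `oc` commands; untested against this particular cluster):

```shell
# Is the kube-apiserver operator itself healthy?
oc get clusteroperator kube-apiserver

# Are the static kube-apiserver pods on all masters running the same revision?
oc get pods -n openshift-kube-apiserver -l apiserver

# Per-node revision status; a node stuck behind targetRevision indicates
# a stalled rollout of the latest KAS config:
oc get kubeapiserver cluster -o jsonpath='{range .status.nodeStatuses[*]}{.nodeName}{": current="}{.currentRevision}{" target="}{.targetRevision}{"\n"}{end}'
```

A master whose `current` revision lags its `target` would explain why the well-known endpoint still serves stale data.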

openshift-bot commented 4 years ago

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-bot commented 3 years ago

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten /remove-lifecycle stale

stlaz commented 3 years ago

/close

openshift-ci-robot commented 3 years ago

@stlaz: Closing this issue.

In response to [this](https://github.com/openshift/cluster-authentication-operator/issues/223#issuecomment-739787740):

> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.