openshift / hypershift

Hyperscale OpenShift - clusters with hosted control planes
https://hypershift-docs.netlify.app
Apache License 2.0

Not all created cluster components have access to the pull secret #589

Closed: rmohr closed this issue 2 years ago

rmohr commented 3 years ago

When executing HyperShift on Kubernetes (with #584 and #585 applied), the cluster deploys successfully, but not all components have access to the provided pull secret, so some images can't be pulled:

$ kubectl.sh get pods --all-namespaces | grep ImagePull
selecting docker as container runtime
clusters-example   certified-operators-catalog-f7cf6b5bd-bvcn4             0/1     ImagePullBackOff   0          14m
clusters-example   community-operators-catalog-65c777fdf8-wskhq            0/1     ImagePullBackOff   0          14m
clusters-example   redhat-marketplace-operators-catalog-85fd8b4fcd-qhbx4   0/1     ImagePullBackOff   0          14m

Applying this patch resolves that:

kubectl patch serviceaccount default -n clusters-example -p '{"imagePullSecrets": [{"name": "pull-secret"}]}'
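For anyone reproducing this, whether the patch took effect can be checked with something like the following (assuming the same namespace and secret name as above):

kubectl get serviceaccount default -n clusters-example \
  -o jsonpath='{.imagePullSecrets[*].name}'
# should print: pull-secret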

I guess that in CI the components come up because the default service accounts in those namespaces can always pull the images in question.
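If any components were to use non-default service accounts, the same patch could be looped over every service account in the namespace; a rough sketch (untested, reusing the names from above):

NS=clusters-example
for sa in $(kubectl get serviceaccounts -n "$NS" -o name); do
  # kubectl patch accepts the serviceaccount/<name> form returned by -o name
  kubectl patch "$sa" -n "$NS" -p '{"imagePullSecrets": [{"name": "pull-secret"}]}'
done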

However, once the images can be pulled, I get:

  Warning  Failed     37s (x2 over 54s)  kubelet            Failed to pull image "registry.redhat.io/redhat/community-operator-index:v4.9": rpc error: code = Unknown desc = Source image rejected: Invalid GPG signature: gpgme.Signature{Summary:128, Fingerprint:"1AC4971355A34A82", Status:gpgme.Error{err:0x9}, Timestamp:time.Time{wall:0x0, ext:63770486629, loc:(*time.Location)(0x55aa7ca05be0)}, ExpTimestamp:time.Time{wall:0x0, ext:62135596800, loc:(*time.Location)(0x55aa7ca05be0)}, WrongKeyUsage:false, PKATrust:0x0, ChainModel:false, Validity:0, ValidityReason:error(nil), PubkeyAlgo:1, HashAlgo:8}
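For context: gpgme's Summary:128 is the KEY_MISSING flag (0x80), i.e. the node's signature policy requires images from registry.redhat.io to be signed, but the key file the policy points at doesn't contain the signing key. This check is driven by /etc/containers/policy.json on the pulling host. A minimal sketch of what such a policy entry looks like, with the key path being an assumption (the usual Red Hat release key location on RHEL):

# NOT a drop-in policy: in practice, merge this entry into the existing
# /etc/containers/policy.json rather than overwriting the whole file.
sudo tee /etc/containers/policy.json >/dev/null <<'EOF'
{
  "default": [{"type": "insecureAcceptAnything"}],
  "transports": {
    "docker": {
      "registry.redhat.io": [
        {
          "type": "signedBy",
          "keyType": "GPGKeys",
          "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release"
        }
      ]
    }
  }
}
EOF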

All other images pull fine (only the three catalog pods remain in ImagePullBackOff):

clusters-example   catalog-operator-5cbb57c78c-wsx62                       2/2     Running            0          19m
clusters-example   certified-operators-catalog-f7cf6b5bd-t4jms             0/1     ImagePullBackOff   0          113s
clusters-example   cluster-api-fd66f7969-xtcl9                             1/1     Running            0          20m
clusters-example   cluster-autoscaler-fcdddc464-hrjdj                      1/1     Running            0          19m
clusters-example   cluster-policy-controller-65d5774c98-jl5dt              1/1     Running            0          19m
clusters-example   cluster-version-operator-5867df6bb4-jck9w               1/1     Running            0          19m
clusters-example   community-operators-catalog-65c777fdf8-hq5sz            0/1     ImagePullBackOff   0          113s
clusters-example   control-plane-operator-6f77887f9c-78zks                 1/1     Running            0          20m
clusters-example   etcd-kvzhkbqxnw                                         1/1     Running            0          18m
clusters-example   etcd-operator-5449d4dc78-6bzsp                          1/1     Running            0          19m
clusters-example   hosted-cluster-config-operator-6b9fc4b469-nmptv         1/1     Running            0          19m
clusters-example   ignition-server-9844bf67c-c77dx                         1/1     Running            0          20m
clusters-example   konnectivity-agent-78f9b65c96-n6lmp                     1/1     Running            0          19m
clusters-example   konnectivity-server-dc7d7fd8d-gvpld                     1/1     Running            0          19m
clusters-example   kube-apiserver-78579459f4-wvx5p                         2/2     Running            0          19m
clusters-example   kube-controller-manager-78f9c977f5-xctgg                1/1     Running            0          19m
clusters-example   kube-scheduler-78c4589d5f-scfvn                         1/1     Running            0          19m
clusters-example   machine-approver-657654b658-pxpzz                       1/1     Running            0          19m
clusters-example   manifests-bootstrapper                                  0/1     Completed          6          19m
clusters-example   oauth-openshift-69c8b769b6-n9pjn                        1/1     Running            0          9m13s
clusters-example   olm-operator-f8859749d-j4n8p                            2/2     Running            0          19m
clusters-example   openshift-apiserver-5ff56d95cc-8678t                    2/2     Running            0          9m12s
clusters-example   openshift-controller-manager-76cbf5fbf5-gxq64           1/1     Running            0          19m
clusters-example   openshift-oauth-apiserver-789565c889-h4bpr              1/1     Running            0          19m
clusters-example   packageserver-ff9fc7f4f-9w47j                           2/2     Running            0          19m
clusters-example   redhat-marketplace-operators-catalog-85fd8b4fcd-jhsqz   0/1     ImagePullBackOff   0          113s
openshift-bot commented 2 years ago

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-bot commented 2 years ago

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

openshift-bot commented 2 years ago

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen. Mark the issue as fresh by commenting /remove-lifecycle rotten. Exclude this issue from closing again by commenting /lifecycle frozen.

/close

openshift-ci[bot] commented 2 years ago

@openshift-bot: Closing this issue.

In response to [this](https://github.com/openshift/hypershift/issues/589#issuecomment-1074193542):

> Rotten issues close after 30d of inactivity.
>
> Reopen the issue by commenting `/reopen`. Mark the issue as fresh by commenting `/remove-lifecycle rotten`. Exclude this issue from closing again by commenting `/lifecycle frozen`.
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.