Closed: karmab closed this issue 5 years ago
I am seeing the same issue in my environment (set up with install-scripts):
[cloud-user@rhhi-node-worker-0 ~]$ oc -n openshift-storage describe pods/local-storage-operator-5c64497db9-27428
Name: local-storage-operator-5c64497db9-27428
Namespace: openshift-storage
Priority: 0
PriorityClassName: <none>
Node: rhhi-node-master-2/192.168.123.106
Start Time: Tue, 10 Sep 2019 16:36:35 +0000
Labels: name=local-storage-operator
pod-template-hash=5c64497db9
Annotations: alm-examples:
[
{
"apiVersion": "local.storage.openshift.io/v1",
"kind": "LocalVolume",
"metadata": {
"name": "example"
},
"spec": {
"storageClassDevices": [
{
"devicePaths": [
"/dev/vde",
"/dev/vdf"
],
"fsType": "ext4",
"storageClassName": "foobar",
"volumeMode": "Filesystem"
}
]
}
}
]
capabilities: Full Lifecycle
categories: Storage
containerImage: quay.io/openshift/origin-local-storage-operator:4.2.0
createdAt: 2019-08-14T00:00:00Z
description: Configure and use local storage volumes in kubernetes and Openshift
olm.operatorGroup: openshift-storage-operatorgroup
olm.operatorNamespace: openshift-storage
olm.targetNamespaces: openshift-storage
openshift.io/scc: restricted
repository: https://github.com/openshift/local-storage-operator
support: Red Hat
Status: Pending
IP: 10.129.0.91
Controlled By: ReplicaSet/local-storage-operator-5c64497db9
Containers:
local-storage-operator:
Container ID:
Image: image-registry.openshift-image-registry.svc:5000/openshift/ose-local-storage-operator:v4.2.0-201909091819
Image ID:
Port: 60000/TCP
Host Port: 0/TCP
Command:
local-storage-operator
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
WATCH_NAMESPACE: openshift-storage (v1:metadata.namespace)
OPERATOR_NAME: local-storage-operator
PROVISIONER_IMAGE: image-registry.openshift-image-registry.svc:5000/openshift/ose-local-storage-static-provisioner:v4.2.0-201909081401
DISKMAKER_IMAGE: image-registry.openshift-image-registry.svc:5000/openshift/ose-local-storage-diskmaker:v4.2.0-201909091819
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from local-storage-operator-token-2vmlm (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
local-storage-operator-token-2vmlm:
Type: Secret (a volume populated by a Secret)
SecretName: local-storage-operator-token-2vmlm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m44s default-scheduler Successfully assigned openshift-storage/local-storage-operator-5c64497db9-27428 to rhhi-node-master-2
Normal Pulling 8m13s (x4 over 9m43s) kubelet, rhhi-node-master-2 Pulling image "image-registry.openshift-image-registry.svc:5000/openshift/ose-local-storage-operator:v4.2.0-201909091819"
Warning Failed 8m13s (x4 over 9m43s) kubelet, rhhi-node-master-2 Failed to pull image "image-registry.openshift-image-registry.svc:5000/openshift/ose-local-storage-operator:v4.2.0-201909091819": rpc error: code = Unknown desc = Error reading manifest v4.2.0-201909091819 in image-registry.openshift-image-registry.svc:5000/openshift/ose-local-storage-operator: name unknown
Warning Failed 8m13s (x4 over 9m43s) kubelet, rhhi-node-master-2 Error: ErrImagePull
Warning Failed 7m47s (x7 over 9m42s) kubelet, rhhi-node-master-2 Error: ImagePullBackOff
Normal BackOff 4m41s (x20 over 9m42s) kubelet, rhhi-node-master-2 Back-off pulling image "image-registry.openshift-image-registry.svc:5000/openshift/ose-local-storage-operator:v4.2.0-201909091819"
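The "name unknown" error above means the internal registry has no manifest for that tag. A quick way to confirm (a diagnostic sketch; the namespace and grep pattern are assumptions, adjust to your cluster) is to list the imagestream tags the registry actually serves and look for the local-storage images:

```shell
# List imagestream tags in the "openshift" namespace (where the pod's
# image reference points) and filter for the local-storage images.
# If nothing matches, the internal registry cannot serve the pull,
# which is consistent with the ImagePullBackOff events above.
oc -n openshift get imagestreamtags | grep local-storage
```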
This happens because local-storage-operator is not maintained by the ocs-operator team, and we are using their dev branch to unblock us.
The image references look fine in the current upstream/master.
When deploying the ocs operator, the local storage operator does not get properly deployed. The deployment for the local storage operator contains several references to image-registry.openshift-image-registry.svc:5000/, which prevents the deployment from succeeding. Editing the deployment to remove those references makes the deployment succeed.
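One way to apply that edit without opening an editor is to repoint the deployment at the public images. This is only a workaround sketch: the quay.io operator image comes from the pod's own containerImage annotation, but the provisioner and diskmaker image names/tags are assumptions and should be checked against what is published for your release.

```shell
# Replace the operator container image (internal-registry reference
# from the pasted describe output) with the public quay.io image
# named in the pod's containerImage annotation.
oc -n openshift-storage set image deployment/local-storage-operator \
  local-storage-operator=quay.io/openshift/origin-local-storage-operator:4.2.0

# PROVISIONER_IMAGE and DISKMAKER_IMAGE are also internal-registry
# references (see the Environment section above); repoint them too.
# NOTE: these quay.io names/tags are assumptions, verify before use.
oc -n openshift-storage set env deployment/local-storage-operator \
  PROVISIONER_IMAGE=quay.io/openshift/origin-local-storage-static-provisioner:4.2.0 \
  DISKMAKER_IMAGE=quay.io/openshift/origin-local-storage-diskmaker:4.2.0
```

Updating the deployment spec triggers a new rollout, so the operator pod is recreated with the overridden images.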