IBM / ibm-storage-odf-console

ibm-storage-odf-console provides the IBM storage-specific console page, which is loaded by the ODF console when end users access IBM storage. It is designed to display IBM-specific storage attributes to customers. The current scope includes IBM FlashSystem only.
Apache License 2.0

ODF install failed: deployment odf-operator-controller-manager not ready before timeout #42

Closed: TymoT closed this issue 2 years ago

TymoT commented 2 years ago

The ODF installation fails with the following error: install failed: deployment odf-operator-controller-manager not ready before timeout: deployment "odf-operator-controller-manager" exceeded its progress deadline
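
For reference, the events shown below are the kind of output produced by describing the failing pod (a sketch; the pod name comes from the scheduler events that follow):

oc -n openshift-storage describe pod odf-operator-controller-manager-78f6c8549c-82vkx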

Events:
  Type     Reason          Age                  From               Message
  ----     ------          ----                 ----               -------
  Normal   Scheduled       70m                  default-scheduler  Successfully assigned openshift-storage/odf-operator-controller-manager-78f6c8549c-82vkx to mc165
  Normal   AddedInterface  70m                  multus             Add eth0 [10.128.2.48/23] from openshift-sdn
  Normal   Pulled          70m                  kubelet            Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.7.0" already present on machine
  Normal   Created         70m                  kubelet            Created container kube-rbac-proxy
  Normal   Started         70m                  kubelet            Started container kube-rbac-proxy
  Normal   Pulled          70m                  kubelet            Successfully pulled image "quay.io/ocs-dev/odf-operator:latest" in 3.143084568s
  Normal   Pulled          70m                  kubelet            Successfully pulled image "quay.io/ocs-dev/odf-operator:latest" in 3.04331338s
  Normal   Pulled          70m                  kubelet            Successfully pulled image "quay.io/ocs-dev/odf-operator:latest" in 3.132081795s
  Normal   Pulling         69m (x4 over 70m)    kubelet            Pulling image "quay.io/ocs-dev/odf-operator:latest"
  Normal   Created         69m (x4 over 70m)    kubelet            Created container manager
  Normal   Started         69m (x4 over 70m)    kubelet            Started container manager
  Normal   Pulled          69m                  kubelet            Successfully pulled image "quay.io/ocs-dev/odf-operator:latest" in 3.034333266s
  Warning  BackOff         15s (x347 over 70m)  kubelet            Back-off restarting failed container
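
The repeated BackOff warnings mean the manager container is crash-looping; the restart count is visible directly from the pod listing (a sketch):

oc -n openshift-storage get pod odf-operator-controller-manager-78f6c8549c-82vkx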

Logs:

sh-4.4# oc logs -n openshift-storage odf-operator-controller-manager-78f6c8549c-82vkx -c manager
flag provided but not defined: -ibm-console-port
Usage of /manager:
  -health-probe-bind-address string
        The address the probe endpoint binds to. (default ":8082")
  -kubeconfig string
        Paths to a kubeconfig. Only required if out-of-cluster.
  -leader-elect
        Enable leader election for controller manager. Enabling this will ensure there is only one active controller manager.
  -metrics-bind-address string
        The address the metric endpoint binds to. (default ":8085")
  -odf-console-port int
        The port where the ODF console server will be serving it's payload (default 9001)
  -zap-devel
        Development Mode defaults(encoder=consoleEncoder,logLevel=Debug,stackTraceLevel=Warn). Production Mode defaults(encoder=jsonEncoder,logLevel=Info,stackTraceLevel=Error) (default true)
  -zap-encoder value
        Zap log encoding (one of 'json' or 'console')
  -zap-log-level value
        Zap Level to configure the verbosity of logging. Can be one of 'debug', 'info', 'error', or any integer value > 0 which corresponds to custom debug levels of increasing verbosity
  -zap-stacktrace-level value
        Zap Level at and above which stacktraces are captured (one of 'info', 'error', 'panic').

The pod, however, does have -ibm-console-port defined:

oc get pod odf-operator-controller-manager-78f6c8549c-82vkx -o yaml | grep ibm-console-port
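
What the error means: the Deployment passes -ibm-console-port as an argument to the manager container, but the manager binary in this (older) image never registers that flag, so Go's flag parsing exits immediately and the container crash-loops. A sketch for comparing the two sides, assuming the container is named "manager" as in the oc logs invocation above:

# Arguments the Deployment passes to the manager container
oc -n openshift-storage get deployment odf-operator-controller-manager \
  -o jsonpath='{.spec.template.spec.containers[?(@.name=="manager")].args}'

# Image the manager container runs
oc -n openshift-storage get deployment odf-operator-controller-manager \
  -o jsonpath='{.spec.template.spec.containers[?(@.name=="manager")].image}'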

shdn-ibm commented 2 years ago

Checking with the odf-operator developers about this issue.

iamniting commented 2 years ago

@TymoT I think you are using a very old build of odf-operator. Can you describe your deployment steps? Only then will I be able to help you.
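
With a :latest tag, "newest version" is easy to misjudge, since the tag says nothing about which build is behind it. One way to see what is actually running is to check the resolved image digest (a sketch, using the pod name from the events above):

oc -n openshift-storage get pod odf-operator-controller-manager-78f6c8549c-82vkx \
  -o jsonpath='{.status.containerStatuses[?(@.name=="manager")].imageID}'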

TymoT commented 2 years ago

Installed the newest version, but the result is still the same:

Events:
  Type     Reason          Age                  From               Message
  ----     ------          ----                 ----               -------
  Normal   Scheduled       26m                  default-scheduler  Successfully assigned openshift-storage/odf-operator-controller-manager-78f6c8549c-jkbl8 to mc165
  Normal   AddedInterface  26m                  multus             Add eth0 [10.128.2.139/23] from openshift-sdn
  Normal   Pulled          26m                  kubelet            Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.7.0" already present on machine
  Normal   Created         26m                  kubelet            Created container kube-rbac-proxy
  Normal   Started         26m                  kubelet            Started container kube-rbac-proxy
  Normal   Pulled          26m                  kubelet            Successfully pulled image "quay.io/ocs-dev/odf-operator:latest" in 2.893158555s
  Normal   Pulled          26m                  kubelet            Successfully pulled image "quay.io/ocs-dev/odf-operator:latest" in 2.934727865s
  Normal   Started         25m (x3 over 26m)    kubelet            Started container manager
  Normal   Pulled          25m                  kubelet            Successfully pulled image "quay.io/ocs-dev/odf-operator:latest" in 2.954911913s
  Normal   Pulling         25m (x4 over 26m)    kubelet            Pulling image "quay.io/ocs-dev/odf-operator:latest"
  Normal   Created         25m (x4 over 26m)    kubelet            Created container manager
  Normal   Pulled          25m                  kubelet            Successfully pulled image "quay.io/ocs-dev/odf-operator:latest" in 3.067412583s
  Warning  BackOff         66s (x126 over 25m)  kubelet            Back-off restarting failed container
oc logs odf-operator-controller-manager-78f6c8549c-jkbl8 -c manager
flag provided but not defined: -ibm-console-port
Usage of /manager:
  -health-probe-bind-address string
        The address the probe endpoint binds to. (default ":8082")
  -kubeconfig string
        Paths to a kubeconfig. Only required if out-of-cluster.
  -leader-elect
        Enable leader election for controller manager. Enabling this will ensure there is only one active controller manager.
  -metrics-bind-address string
        The address the metric endpoint binds to. (default ":8085")
  -odf-console-port int
        The port where the ODF console server will be serving it's payload (default 9001)
  -zap-devel
        Development Mode defaults(encoder=consoleEncoder,logLevel=Debug,stackTraceLevel=Warn). Production Mode defaults(encoder=jsonEncoder,logLevel=Info,stackTraceLevel=Error) (default true)
  -zap-encoder value
        Zap log encoding (one of 'json' or 'console')
  -zap-log-level value
        Zap Level to configure the verbosity of logging. Can be one of 'debug', 'info', 'error', or any integer value > 0 which corresponds to custom debug levels of increasing verbosity
  -zap-stacktrace-level value
        Zap Level at and above which stacktraces are captured (one of 'info', 'error', 'panic').

TymoT commented 2 years ago

Solved

As @iamniting suggested, the odf-operator build was too old.

Created a fresh clone of https://github.com/red-hat-storage/odf-operator and followed the instructions in:

  1. https://github.com/red-hat-storage/odf-operator#deploying-development-builds
  2. https://github.com/red-hat-storage/odf-operator#installation

odf-operator-controller-manager started without problems.
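
A quick way to confirm the fix after redeploying (a sketch):

# Deployment should report its replica as ready
oc -n openshift-storage get deployment odf-operator-controller-manager

# Manager logs should show normal controller startup instead of a flag error
oc -n openshift-storage logs deployment/odf-operator-controller-manager -c manager | head -n 20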