Closed · TymoT closed this issue 2 years ago
Checking with the odf-operator developers about this issue.
@TymoT I think you are using a very old build of odf-operator. Can you describe the deployment steps? Only then will I be able to help you.
Installed the newest version, but the result is still the same:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 26m default-scheduler Successfully assigned openshift-storage/odf-operator-controller-manager-78f6c8549c-jkbl8 to mc165
Normal AddedInterface 26m multus Add eth0 [10.128.2.139/23] from openshift-sdn
Normal Pulled 26m kubelet Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.7.0" already present on machine
Normal Created 26m kubelet Created container kube-rbac-proxy
Normal Started 26m kubelet Started container kube-rbac-proxy
Normal Pulled 26m kubelet Successfully pulled image "quay.io/ocs-dev/odf-operator:latest" in 2.893158555s
Normal Pulled 26m kubelet Successfully pulled image "quay.io/ocs-dev/odf-operator:latest" in 2.934727865s
Normal Started 25m (x3 over 26m) kubelet Started container manager
Normal Pulled 25m kubelet Successfully pulled image "quay.io/ocs-dev/odf-operator:latest" in 2.954911913s
Normal Pulling 25m (x4 over 26m) kubelet Pulling image "quay.io/ocs-dev/odf-operator:latest"
Normal Created 25m (x4 over 26m) kubelet Created container manager
Normal Pulled 25m kubelet Successfully pulled image "quay.io/ocs-dev/odf-operator:latest" in 3.067412583s
Warning BackOff 66s (x126 over 25m) kubelet Back-off restarting failed container
oc logs odf-operator-controller-manager-78f6c8549c-jkbl8 -c manager
flag provided but not defined: -ibm-console-port
Usage of /manager:
-health-probe-bind-address string
The address the probe endpoint binds to. (default ":8082")
-kubeconfig string
Paths to a kubeconfig. Only required if out-of-cluster.
-leader-elect
Enable leader election for controller manager. Enabling this will ensure there is only one active controller manager.
-metrics-bind-address string
The address the metric endpoint binds to. (default ":8085")
-odf-console-port int
The port where the ODF console server will be serving it's payload (default 9001)
-zap-devel
Development Mode defaults(encoder=consoleEncoder,logLevel=Debug,stackTraceLevel=Warn). Production Mode defaults(encoder=jsonEncoder,logLevel=Info,stackTraceLevel=Error) (default true)
-zap-encoder value
Zap log encoding (one of 'json' or 'console')
-zap-log-level value
Zap Level to configure the verbosity of logging. Can be one of 'debug', 'info', 'error', or any integer value > 0 which corresponds to custom debug levels of increasing verbosity
-zap-stacktrace-level value
Zap Level at and above which stacktraces are captured (one of 'info', 'error', 'panic').
Solved
As @iamniting suggested, the odf-operator build was too old.
Created a fresh clone and followed the instructions at:
https://github.com/red-hat-storage/odf-operator#deploying-development-builds
After deploying that build, odf-operator-controller-manager started without problems.
The ODF installation fails with the following error: install failed: deployment odf-operator-controller-manager not ready before timeout: deployment "odf-operator-controller-manager" exceeded its progress deadline
The pod has -ibm-console-port defined:
oc get pod odf-operator-controller-manager-78f6c8549c-82vkx -o yaml | grep ibm-console-port