OT-CONTAINER-KIT / redis-operator

A Golang-based Redis operator that creates and manages Redis standalone/cluster/replication/sentinel mode setups on top of Kubernetes.
https://ot-redis-operator.netlify.app/
Apache License 2.0

Creating cluster stuck with error #628

Open diptripa opened 1 year ago

diptripa commented 1 year ago

What version of redis operator are you using? 0.15

{"level":"info","ts":1694771678.3012888,"logger":"controllers.RedisCluster","msg":"Reconciling opstree redis Cluster controller","Request.Namespace":"client-admin-rbac","Request.Name":"redis-cluster"}
{"level":"error","ts":1694771678.3082063,"logger":"controller_redis","msg":"Error in getting redis pod IP","Request.RedisManager.Namespace":"client-admin-rbac","Request.RedisManager.Name":"redis-cluster-leader-0","error":"pods \"redis-cluster-leader-0\" not found","stacktrace":"github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.configureRedisClient\n\t/workspace/k8sutils/redis.go:345\ngithub.com/OT-CONTAINER-KIT/redis-operator/k8sutils.checkRedisCluster\n\t/workspace/k8sutils/redis.go:204\ngithub.com/OT-CONTAINER-KIT/redis-operator/k8sutils.CheckRedisNodeCount\n\t/workspace/k8sutils/redis.go:283\ngithub.com/OT-CONTAINER-KIT/redis-operator/controllers.(*RedisClusterReconciler).Reconcile\n\t/workspace/controllers/rediscluster_controller.go:74\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:227"}
{"level":"info","ts":1694771678.3084183,"logger":"controller_redis","msg":"Successfully got the ip for redis","Request.RedisManager.Namespace":"client-admin-rbac","Request.RedisManager.Name":"redis-cluster-leader-0","ip":""}
{"level":"error","ts":1694771678.3087418,"logger":"controller_redis","msg":"Redis command failed with this error","Request.RedisManager.Namespace":"client-admin-rbac","Request.RedisManager.Name":"redis-cluster","error":"dial tcp :6379: connect: connection refused","stacktrace":"github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.CheckRedisNodeCount\n\t/workspace/k8sutils/redis.go:283\ngithub.com/OT-CONTAINER-KIT/redis-operator/controllers.(*RedisClusterReconciler).Reconcile\n\t/workspace/controllers/rediscluster_controller.go:74\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:227"}
{"level":"error","ts":1694771678.3088276,"logger":"controller_redis","msg":"Redis command failed with this error","Request.RedisManager.Namespace":"client-admin-rbac","Request.RedisManager.Name":"redis-cluster","error":"dial tcp :6379: connect: connection refused","stacktrace":"github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.CheckRedisNodeCount\n\t/workspace/k8sutils/redis.go:283\ngithub.com/OT-CONTAINER-KIT/redis-operator/controllers.(*RedisClusterReconciler).Reconcile\n\t/workspace/controllers/rediscluster_controller.go:74\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:227"}

redis-operator version: 0.15

Does this issue reproduce with the latest release? Yes, it occurs on the latest release.

What operating system and processor architecture are you using (kubectl version)?

kubectl version Output
$ kubectl version 1.25.11

What did you do?

  1. Created the Redis cluster with the manifest mentioned here.
  2. The first error while applying the manifest was: Error from server (BadRequest): error when creating "Documents/POC/redis-operator/redis-cluster.yml": RedisCluster in version "v1beta1" cannot be handled as a RedisCluster: strict decoding error: unknown field "spec.securityContext"
  3. Removed the securityContext section and ran again. This time the StatefulSet was created, but no pods showed up, with this error: create Claim node-conf-redis-cluster-leader-0 for Pod redis-cluster-leader-0 in StatefulSet redis-cluster-leader failed error: PersistentVolumeClaim "node-conf-redis-cluster-leader-0" is invalid: spec.resources[storage]: Required value
  4. Checked the PVC, and its manifest looks like this:
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: redis-cluster-leader-redis-cluster-leader-0
      namespace: whatever
      labels:
        app: redis-cluster-leader
        redis_setup_type: cluster
        role: leader
      annotations:
        pv.kubernetes.io/bind-completed: 'yes'
        pv.kubernetes.io/bound-by-controller: 'yes'
        redis.opstreelabs.in: 'true'
        redis.opstreelabs.instance: redis-cluster
        volume.beta.kubernetes.io/storage-provisioner: csi.trident.netapp.io
        volume.kubernetes.io/storage-provisioner: csi.trident.netapp.io
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      volumeName: pvc-whatever
      storageClassName: basic-nfs-rwx-delete
      volumeMode: Filesystem
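
Note that the failing claim is node-conf-redis-cluster-leader-0, not the data claim above, and its rejection (spec.resources[storage]: Required value) means the operator rendered that claim's template without a storage size. A sketch of what an explicit storage section in the RedisCluster CR might look like — the nodeConfVolume and nodeConfVolumeClaimTemplate field names here are assumptions inferred from the node-conf claim name, so verify them against your operator version's CRD:

```yaml
# Hypothetical RedisCluster storage section; field names for the
# node-conf volume are assumptions, check your CRD before use.
spec:
  storage:
    volumeClaimTemplate:            # data volume for each Redis pod
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
    nodeConfVolume: true            # assumed field name
    nodeConfVolumeClaimTemplate:    # assumed field name
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi            # an explicit size here is what the
                                    # "Required value" error asks for
```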

What did you expect to see?

Leader and follower pods to spin up.

What did you see instead? This error: Error from server (BadRequest): error when creating "Documents/POC/redis-operator/redis-cluster.yml": RedisCluster in version "v1beta1" cannot be handled as a RedisCluster: strict decoding error: unknown field "spec.securityContext"

Richard87 commented 1 year ago

Hi! I had the same error and had to replace securityContext with podSecurityContext.
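
In manifest terms, that rename is the only change — a sketch, where the runAsUser/fsGroup values are placeholders (the strict decoder rejects the old key but accepts the new one, per the comment above):

```yaml
# Before (rejected with: unknown field "spec.securityContext"):
# spec:
#   securityContext:
#     runAsUser: 1000

# After:
spec:
  podSecurityContext:   # accepted field name per this comment
    runAsUser: 1000     # example values only; keep your own settings
    fsGroup: 1000
```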

Liamb17 commented 12 months ago

I'm also facing the issue since updating the operator version:

Warning FailedCreate 3s (x12 over 13s) statefulset-controller create Claim node-conf-redis-leader-0 for Pod redis-leader-0 in StatefulSet redis-leader failed error: PersistentVolumeClaim "node-conf-redis-leader-0" is invalid: spec.resources[storage]: Required value
Warning FailedCreate 3s (x12 over 13s) statefulset-controller create Pod redis-leader-0 in StatefulSet redis-leader failed error: failed to create PVC node-conf-redis-leader-0: PersistentVolumeClaim "node-conf-redis-leader-0" is invalid: spec.resources[storage]: Required value

Any suggestions?