helm / charts

⚠️ (OBSOLETE) Curated applications for Kubernetes

redis-ha not working when Persistence is enabled #2539

Closed · esvirskiy closed this 6 years ago

esvirskiy commented 7 years ago

Is this a request for help?: Yes!

Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug Report

Version of Helm and Kubernetes:

Helm:
Client: &version.Version{SemVer:"v2.6.2", GitCommit:"be3ae4ea91b2960be98c07e8f73754e67e87963c", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.6.2", GitCommit:"be3ae4ea91b2960be98c07e8f73754e67e87963c", GitTreeState:"clean"}

Kubernetes:
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T09:14:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:30:51Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Which chart: stable/redis-ha

What happened: The release never becomes usable when the persistent volume is enabled: the PVC eventually binds, but the redis-ha Deployments stay at 0 available (see output below).

helm install --set replicas.master=1 --set replicas.slave=3 --set persistentVolume.enabled=true stable/redis-ha
NAME:   moldy-peacock
LAST DEPLOYED: Fri Oct 20 15:49:48 2017
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/PersistentVolumeClaim
NAME                        STATUS   VOLUME  CAPACITY  ACCESSMODES  STORAGECLASS  AGE
moldy-peacock-redis-ha-pvc  Pending                                 gp2           1s

==> v1/Service
NAME                    CLUSTER-IP      EXTERNAL-IP  PORT(S)    AGE
redis-sentinel          100.67.212.50   <none>       26379/TCP  1s
moldy-peacock-redis-ha  100.64.180.235  <none>       6379/TCP   1s

==> v1beta1/Deployment
NAME                             DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
moldy-peacock-redis-ha           3        3        3           0          1s
moldy-peacock-redis-ha-sentinel  3        3        3           0          1s

==> v1beta1/StatefulSet
NAME                           DESIRED  CURRENT  AGE
moldy-peacock-redis-ha-master  1        1        1s

NOTES:
Redis cluster can be accessed via port 6379 on the following DNS name from within your cluster:
moldy-peacock-redis-ha.default.svc.cluster.local

To connect to your Redis server:

1. Run a Redis pod that you can use as a client:

   kubectl exec -it moldy-peacock-redis-ha-master-0 bash

2. Connect using the Redis CLI:

  redis-cli -h moldy-peacock-redis-ha.default.svc.cluster.local
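(For reference: when the release is healthy, the connection check from the NOTES answers immediately; in the broken state reported here it hangs or refuses the connection. Standard redis-cli, nothing chart-specific:)

  redis-cli -h moldy-peacock-redis-ha.default.svc.cluster.local ping   # a healthy master replies PONG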

$ kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                                STORAGECLASS   REASON    AGE
pvc-d47b23dd-b5cf-11e7-839b-024f4b152bac   2Gi        RWO           Delete          Bound     default/moldy-peacock-redis-ha-pvc   gp2                      3m

$ kubectl get pvc
NAME                         STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
moldy-peacock-redis-ha-pvc   Bound     pvc-d47b23dd-b5cf-11e7-839b-024f4b152bac   2Gi        RWO           gp2            3m
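(For anyone debugging the same symptom: the claim shows Bound, yet the Deployments above report 0 available, so the pod events are the next place to look. A minimal check, reusing the pod name from the NOTES above; plain kubectl, nothing chart-specific:)

$ kubectl describe pod moldy-peacock-redis-ha-master-0        # Events list scheduling/mount failures
$ kubectl get events --sort-by=.metadata.creationTimestamp    # cluster-wide view of recent warnings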

What you expected to happen: For the release to come up cleanly with persistence enabled :)

How to reproduce it (as minimally and precisely as possible): Please see my commands above

Anything else we need to know: Please see attached:

[screenshot attached: screen shot 2017-10-20 at 3.51.05 pm]

tuananh commented 7 years ago

Is the PVC moldy-peacock-redis-ha-pvc stuck in the Pending state?
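(One way to answer that from the cluster; standard kubectl, and the gp2 class name comes from the output above:)

$ kubectl get storageclass                           # confirm gp2 exists and whether it is the default
$ kubectl describe pvc moldy-peacock-redis-ha-pvc    # Events section shows any provisioning errors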

smileisak commented 6 years ago

@esvirskiy & @tuananh there is a problem in the PVC template; it is fixed here: https://github.com/kubernetes/charts/pull/2543
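(Once that PR is merged, picking up the fix should just be a chart-cache refresh plus an upgrade with the same values. A sketch; the release name and flags are taken from the original report:)

$ helm repo update
$ helm upgrade moldy-peacock stable/redis-ha \
    --set replicas.master=1 --set replicas.slave=3 \
    --set persistentVolume.enabled=true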

fejta-bot commented 6 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.

/lifecycle stale

fejta-bot commented 6 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten
/remove-lifecycle stale

fejta-bot commented 6 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/close