bennybhlin opened this issue 5 years ago
Following this recipe to install redis via helm/tiller, but I got this error message:

Error: no available release name found

Is it still associated with the Kubernetes version? I'm already using helm 2.7.2.
No idea. @sebgoa maybe?
Can you upgrade helm to 2.11? Also try a `helm init` again.
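(For anyone following along, a minimal sketch of that upgrade path; `helm init --upgrade` re-deploys the in-cluster Tiller at the client's version:)

```bash
# After upgrading the helm client binary itself (package manager or
# release tarball), upgrade the in-cluster Tiller to match:
helm init --upgrade

# Confirm client and server report the same version:
helm version
```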
Still no luck after upgrading to helm 2.11 (with Tiller upgraded as well):
$ helm version
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
$ kubectl get pods -n kube-system | grep tiller
tiller-deploy-845cffcd48-wtf9t 1/1 Running 0 19m
$ helm repo list
NAME    URL
**stable  https://kubernetes-charts.storage.googleapis.com**
local   http://127.0.0.1:8879/charts
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
$ helm search redis
NAME                              CHART VERSION  APP VERSION  DESCRIPTION
stable/prometheus-redis-exporter  0.3.2          0.21.1       Prometheus exporter for Redis metrics
**stable/redis**                  4.2.1          4.0.11       Open source, advanced key-value store. It is often referr...
stable/redis-ha                   2.2.3          4.0.8-r0     Highly available Redis cluster with multiple sentinels an...
stable/sensu                      0.2.3          0.28         Sensu monitoring framework backed by the Redis transport
$ helm install stable/redis
**Error: no available release name found**
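(For reference, this error is the classic symptom of Tiller lacking permissions under RBAC, which matches the fix found further down. Per the Helm RBAC docs linked below, the fix is roughly:)

```bash
# Give Tiller an identity and (broad) permissions, then redeploy it:
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-rule \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade
```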
I think I got stable/redis installed:
$ helm install stable/redis
NAME: washing-kangaroo
LAST DEPLOYED: Fri Oct 19 14:55:31 2018
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1beta1/Deployment
NAME AGE
washing-kangaroo-redis-slave 1s
==> v1beta2/StatefulSet
washing-kangaroo-redis-master 1s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
washing-kangaroo-redis-slave-85d78594b6-vfrkd 0/1 ContainerCreating 0 1s
washing-kangaroo-redis-master-0 0/1 Pending 0 1s
==> v1/Secret
NAME AGE
washing-kangaroo-redis 1s
==> v1/ConfigMap
washing-kangaroo-redis-health 1s
==> v1/Service
washing-kangaroo-redis-master 1s
washing-kangaroo-redis-slave 1s
NOTES:
** Please be patient while the chart is being deployed **
Redis can be accessed via port 6379 on the following DNS names from within your cluster:
washing-kangaroo-redis-master.default.svc.cluster.local for read/write operations
washing-kangaroo-redis-slave.default.svc.cluster.local for read-only operations
To get your password run:
export REDIS_PASSWORD=$(kubectl get secret --namespace default washing-kangaroo-redis -o jsonpath="{.data.redis-password}" | base64 --decode)
To connect to your Redis server:
1. Run a Redis pod that you can use as a client:
kubectl run --namespace default washing-kangaroo-redis-client --rm --tty -i \
--env REDIS_PASSWORD=$REDIS_PASSWORD \
--image docker.io/bitnami/redis:4.0.11 -- bash
2. Connect using the Redis CLI:
redis-cli -h washing-kangaroo-redis-master -a $REDIS_PASSWORD
redis-cli -h washing-kangaroo-redis-slave -a $REDIS_PASSWORD
To connect to your database from outside the cluster execute the following commands:
kubectl port-forward --namespace default svc/washing-kangaroo-redis 6379:6379 &
redis-cli -h 127.0.0.1 -p 6379 -a $REDIS_PASSWORD
The answer came from here: https://docs.helm.sh/using_helm/#role-based-access-control
Can I say it's because RBAC is enabled in Kubernetes (1.12 in my case), so we need to run helm/tiller with a specific service account?
Since the service account 'tiller' is bound to the cluster role 'cluster-admin', is this secure? I remember reading somewhere that binding a service account to a built-in high-privilege role is not a secure practice.
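(That concern is reasonable. The same Helm RBAC guide also shows how to scope Tiller to a single namespace instead of granting cluster-admin; a rough sketch, where the `tiller-world` namespace and role names are illustrative:)

```bash
# Scope Tiller to a single namespace instead of cluster-admin.
kubectl create namespace tiller-world
kubectl create serviceaccount tiller --namespace tiller-world

# Grant the service account full rights inside tiller-world only:
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller-manager
  namespace: tiller-world
rules:
- apiGroups: ["", "batch", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-binding
  namespace: tiller-world
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: tiller-world
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tiller-manager
EOF

# Deploy Tiller into that namespace under the scoped account:
helm init --service-account tiller --tiller-namespace tiller-world
```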
OMG, why did helm install redis without any error while the redis pods aren't running? I don't want to troubleshoot helm charts anymore...
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default unsung-pronghorn-redis-master-0 0/1 **Pending** 0 12m
default unsung-pronghorn-redis-slave-6546c6bbc8-7dctj 0/1 **CrashLoopBackOff** 6 12m
kube-system coredns-576cbf47c7-245q7 1/1 Running 0 19m
kube-system coredns-576cbf47c7-zhjm6 1/1 Running 0 19m
kube-system etcd-benny-vm-master 1/1 Running 0 18m
kube-system kube-apiserver-benny-vm-master 1/1 Running 0 18m
kube-system kube-controller-manager-benny-vm-master 1/1 Running 0 18m
kube-system kube-proxy-bmmpm 1/1 Running 0 18m
kube-system kube-proxy-zcwhj 1/1 Running 0 19m
kube-system kube-scheduler-benny-vm-master 1/1 Running 0 18m
kube-system kubernetes-dashboard-77fd78f978-jn894 1/1 Running 0 16m
kube-system tiller-deploy-6f6fd74b68-zcmbp 1/1 Running 0 14m
kube-system weave-net-dn97k 2/2 Running 0 17m
kube-system weave-net-vhv47 2/2 Running 0 17m
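(A chart can install cleanly yet its pods still fail afterwards; the scheduler and kubelet record why. Pod names below are taken from the listing above:)

```bash
# Ask the scheduler and kubelet why each pod is stuck:
kubectl describe pod unsung-pronghorn-redis-master-0 --namespace default
kubectl describe pod unsung-pronghorn-redis-slave-6546c6bbc8-7dctj --namespace default
```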
I suspect there's still a DNS lookup issue inside pods/mangy-butterfly-redis-master-0, because it again uses a busybox:latest image for its init container:
// get pod status here:
$ kubectl get pods
NAME                                           READY  STATUS            RESTARTS  AGE
mangy-butterfly-redis-master-0                 0/1    Pending           0         9m11s
mangy-butterfly-redis-slave-66f687c955-p65sp   0/1    CrashLoopBackOff  5         9m11s
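(If pod-internal DNS is the suspicion, a quick check is to resolve a well-known service from a throwaway pod. busybox 1.28 is pinned deliberately: `nslookup` in newer busybox images is known to misbehave.)

```bash
# Resolve a well-known service from a throwaway pod:
kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never \
  -- nslookup kubernetes.default.svc.cluster.local
```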
// describe master redis pod (which is Pending)
$ kubectl describe pods/mangy-butterfly-redis-master-0
Name: mangy-butterfly-redis-master-0
Namespace: default
Priority: 0
PriorityClassName:  <none>
Warning FailedScheduling 2m43s (x34 over 7m51s) default-scheduler pod has unbound immediate PersistentVolumeClaims
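(That `pod has unbound immediate PersistentVolumeClaims` warning means the master's PVC cannot bind, which is typical on a bare kubeadm cluster with no default StorageClass. Assuming the stable/redis chart exposes a `master.persistence.enabled` value, check `helm inspect values stable/redis` for the exact key, a test install can skip persistence:)

```bash
# Is the PVC stuck, and is there a default StorageClass at all?
kubectl get pvc --namespace default
kubectl get storageclass

# For a throwaway test, skip persistence (emptyDir instead of a PVC):
helm install stable/redis --set master.persistence.enabled=false
```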
// describe slave redis pod (which is in CrashLoopBackOff)
$ kubectl describe pods/mangy-butterfly-redis-slave-66f687c955-p65sp
Name: mangy-butterfly-redis-slave-66f687c955-p65sp
Namespace: default
Priority: 0
PriorityClassName:  <none>
Normal   Scheduled  20m                  default-scheduler         Successfully assigned default/mangy-butterfly-redis-slave-66f687c955-p65sp to benny-vm-master
Warning  Unhealthy  19m                  kubelet, benny-vm-master  Readiness probe failed: rpc error: code = 2 desc = oci runtime error: exec failed: container "a96679fd0ad3ef03b741e19ecbbb7615cd8929f83a623b42189c5ef5c4998a53" does not exist
Warning  Unhealthy  18m                  kubelet, benny-vm-master  Readiness probe failed: rpc error: code = 2 desc = oci runtime error: exec failed: container "1b18d301d050d62f6f6d0683f17bd54318275d59a1b8533ad6924756f387bb33" does not exist
Warning  Unhealthy  18m                  kubelet, benny-vm-master  Readiness probe errored: rpc error: code = Unknown desc = container not running (1b18d301d050d62f6f6d0683f17bd54318275d59a1b8533ad6924756f387bb33)
Normal   Created    18m (x3 over 19m)    kubelet, benny-vm-master  Created container
Normal   Started    18m (x3 over 19m)    kubelet, benny-vm-master  Started container
Normal   Pulled     18m (x3 over 19m)    kubelet, benny-vm-master  Successfully pulled image "docker.io/bitnami/redis:4.0.11"
Warning  Unhealthy  18m                  kubelet, benny-vm-master  Readiness probe failed: rpc error: code = 2 desc = oci runtime error: exec failed: container "31eba0019038ac1771ad58d84feb6b6bde799878d682e34481bd09dd16503323" does not exist
Warning  Unhealthy  18m                  kubelet, benny-vm-master  Readiness probe errored: rpc error: code = Unknown desc = container not running (31eba0019038ac1771ad58d84feb6b6bde799878d682e34481bd09dd16503323)
Warning  Unhealthy  10m (x7 over 19m)    kubelet, benny-vm-master  Readiness probe failed: Warning: Using a password with '-a' option on the command line interface may not be safe. Could not connect to Redis at localhost:6379: Connection refused Warning: Using a password with '-a' option on the command line interface may not be safe.
Normal   Pulling    5m16s (x8 over 20m)  kubelet, benny-vm-master  pulling image "docker.io/bitnami/redis:4.0.11"
Warning  BackOff    23s (x73 over 18m)   kubelet, benny-vm-master  Back-off restarting failed container
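(The decisive event is the last readiness failure: `redis-cli` inside the container cannot reach localhost:6379, meaning `redis-server` itself is exiting. The container logs, especially from the previous attempt, should show why:)

```bash
# The current and previous container logs usually show why redis-server exits:
kubectl logs mangy-butterfly-redis-slave-66f687c955-p65sp --namespace default
kubectl logs mangy-butterfly-redis-slave-66f687c955-p65sp --namespace default --previous
```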