
Kubernetes Cookbook
http://k8s.cookbook.fyi
Apache License 2.0

Recipe 14.2 Using Helm to Install Applications #13

Open bennybhlin opened 5 years ago

bennybhlin commented 5 years ago

I followed this recipe to install Redis via helm/tiller but got this error message:

    Error: no available release name found

Is this related to the Kubernetes version? I'm already using helm 2.7.2.

mhausenblas commented 5 years ago

No idea. @sebgoa maybe?

sebgoa commented 5 years ago

Can you upgrade helm to 2.11?

Also try a `helm init` again.

bennybhlin commented 5 years ago

Still no luck after upgrading to helm 2.11 (with the matching tiller as well):

$ helm version
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
$ kubectl get pods -n kube-system | grep tiller
tiller-deploy-845cffcd48-wtf9t            1/1     Running   0          19m
$ helm repo list
NAME    URL
stable  https://kubernetes-charts.storage.googleapis.com
local   http://127.0.0.1:8879/charts
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
$ helm search redis
NAME                                    CHART VERSION   APP VERSION     DESCRIPTION
stable/prometheus-redis-exporter        0.3.2           0.21.1          Prometheus exporter for Redis metrics
stable/redis                            4.2.1           4.0.11          Open source, advanced key-value store. It is often referr...
stable/redis-ha                         2.2.3           4.0.8-r0        Highly available Redis cluster with multiple sentinels an...
stable/sensu                            0.2.3           0.28            Sensu monitoring framework backed by the Redis transport
$ helm install stable/redis
Error: no available release name found

bennybhlin commented 5 years ago

I think I got stable/redis installed:

$ helm install stable/redis
NAME:   washing-kangaroo
LAST DEPLOYED: Fri Oct 19 14:55:31 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/Deployment
NAME                          AGE
washing-kangaroo-redis-slave  1s

==> v1beta2/StatefulSet
washing-kangaroo-redis-master  1s

==> v1/Pod(related)

NAME                                           READY  STATUS             RESTARTS  AGE
washing-kangaroo-redis-slave-85d78594b6-vfrkd  0/1    ContainerCreating  0         1s
washing-kangaroo-redis-master-0                0/1    Pending            0         1s

==> v1/Secret

NAME                    AGE
washing-kangaroo-redis  1s

==> v1/ConfigMap
washing-kangaroo-redis-health  1s

==> v1/Service
washing-kangaroo-redis-master  1s
washing-kangaroo-redis-slave   1s

NOTES:
** Please be patient while the chart is being deployed **
Redis can be accessed via port 6379 on the following DNS names from within your cluster:

washing-kangaroo-redis-master.default.svc.cluster.local for read/write operations
washing-kangaroo-redis-slave.default.svc.cluster.local for read-only operations

To get your password run:

    export REDIS_PASSWORD=$(kubectl get secret --namespace default washing-kangaroo-redis -o jsonpath="{.data.redis-password}" | base64 --decode)

To connect to your Redis server:

1. Run a Redis pod that you can use as a client:

   kubectl run --namespace default washing-kangaroo-redis-client --rm --tty -i \
     --env REDIS_PASSWORD=$REDIS_PASSWORD \
     --image docker.io/bitnami/redis:4.0.11 -- bash

2. Connect using the Redis CLI:
   redis-cli -h washing-kangaroo-redis-master -a $REDIS_PASSWORD
   redis-cli -h washing-kangaroo-redis-slave -a $REDIS_PASSWORD

To connect to your database from outside the cluster execute the following commands:

    kubectl port-forward --namespace default svc/washing-kangaroo-redis 6379:6379 &
    redis-cli -h 127.0.0.1 -p 6379 -a $REDIS_PASSWORD
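The password lookup in the NOTES above is just a jsonpath read piped through `base64 --decode`; the decode step can be checked locally without a cluster (the encoded value below is a made-up stand-in for illustration, not the chart's real secret):

```shell
# Decode a redis-password value the way the chart NOTES do.
# "c2VjcmV0cGFzcw==" is a hypothetical secret data field, not from the cluster.
encoded="c2VjcmV0cGFzcw=="
REDIS_PASSWORD=$(printf '%s' "$encoded" | base64 --decode)
echo "$REDIS_PASSWORD"   # prints: secretpass
```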

bennybhlin commented 5 years ago

The answer is here: https://docs.helm.sh/using_helm/#role-based-access-control

Can I say it's because RBAC is enabled in Kubernetes (1.12 in my case), so we need to run helm/tiller with a specific service account?

Since the service account 'tiller' is bound to the cluster role 'cluster-admin', is this secure? I remember reading somewhere that binding a service account to a built-in high-privilege role is not a secure practice.
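The fix from the linked Helm docs can be captured as a manifest. This is a sketch following that guide's names (a 'tiller' ServiceAccount in kube-system bound to cluster-admin), not something defined in this recipe:

```yaml
# ServiceAccount + ClusterRoleBinding for tiller, per the Helm RBAC docs.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
```

After `kubectl apply -f`-ing this, redeploy tiller with `helm init --service-account tiller --upgrade`. On the security question: the same docs also describe binding Tiller to a Role scoped to a single namespace, which avoids handing out cluster-admin.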

bennybhlin commented 5 years ago

OMG, why did helm install redis without error while the redis pods are not running? I don't want to troubleshoot helm charts anymore......

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                            READY   STATUS             RESTARTS   AGE
default       unsung-pronghorn-redis-master-0                 0/1     Pending            0          12m
default       unsung-pronghorn-redis-slave-6546c6bbc8-7dctj   0/1     CrashLoopBackOff   6          12m
kube-system   coredns-576cbf47c7-245q7                        1/1     Running            0          19m
kube-system   coredns-576cbf47c7-zhjm6                        1/1     Running            0          19m
kube-system   etcd-benny-vm-master                            1/1     Running            0          18m
kube-system   kube-apiserver-benny-vm-master                  1/1     Running            0          18m
kube-system   kube-controller-manager-benny-vm-master         1/1     Running            0          18m
kube-system   kube-proxy-bmmpm                                1/1     Running            0          18m
kube-system   kube-proxy-zcwhj                                1/1     Running            0          19m
kube-system   kube-scheduler-benny-vm-master                  1/1     Running            0          18m
kube-system   kubernetes-dashboard-77fd78f978-jn894           1/1     Running            0          16m
kube-system   tiller-deploy-6f6fd74b68-zcmbp                  1/1     Running            0          14m
kube-system   weave-net-dn97k                                 2/2     Running            0          17m
kube-system   weave-net-vhv47                                 2/2     Running            0          17m

bennybhlin commented 5 years ago

I suspect there's still a DNS lookup issue inside pods/mangy-butterfly-redis-master-0, because it again uses a busybox:latest image when defining the init container:

// get pod status here:

$ kubectl get pods
NAME                                           READY   STATUS             RESTARTS   AGE
mangy-butterfly-redis-master-0                 0/1     Pending            0          9m11s
mangy-butterfly-redis-slave-66f687c955-p65sp   0/1     CrashLoopBackOff   5          9m11s

// describe the master redis pod (which was Pending)

$ kubectl describe pods/mangy-butterfly-redis-master-0
Name:               mangy-butterfly-redis-master-0
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             app=redis
                    chart=redis-4.2.1
                    controller-revision-hash=mangy-butterfly-redis-master-65bb7f84d7
                    release=mangy-butterfly
                    role=master
                    statefulset.kubernetes.io/pod-name=mangy-butterfly-redis-master-0
Annotations:        checksum/configmap: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
                    checksum/health: 06a9a2fd8a4d658a727ad5506454a26a3a733f6850f5970097992d2384adb7fd
                    checksum/secret: b1d4be70e582e8ab0e0b5fb47bfdd4802ec06d3274ef2b5585f7ef2e300fbef1
Status:             Pending
IP:
Controlled By:      StatefulSet/mangy-butterfly-redis-master
Init Containers:
  volume-permissions:
    Image:      docker.io/busybox:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/chown
      -R
      1001:1001
      /bitnami/redis/data
    Environment:  <none>
    Mounts:
      /bitnami/redis/data from redis-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-j7qd2 (ro)
Containers:
  mangy-butterfly-redis:
    Image:      docker.io/bitnami/redis:4.0.11
    Port:       6379/TCP
    Host Port:  0/TCP
    Liveness:   exec [sh -c /health/ping_local.sh] delay=30s timeout=5s period=10s #success=1 #failure=5
    Readiness:  exec [sh -c /health/ping_local.sh] delay=5s timeout=1s period=10s #success=1 #failure=5
    Environment:
      REDIS_REPLICATION_MODE:  master
      REDIS_PASSWORD:          <set to the key 'redis-password' in secret 'mangy-butterfly-redis'>  Optional: false
      REDIS_PORT:              6379
      REDIS_DISABLE_COMMANDS:  FLUSHDB,FLUSHALL
    Mounts:
      /bitnami/redis/data from redis-data (rw)
      /health from health (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-j7qd2 (ro)
Conditions:
  Type          Status
  PodScheduled  False
Volumes:
  redis-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  redis-data-mangy-butterfly-redis-master-0
    ReadOnly:   false
  health:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      mangy-butterfly-redis-health
    Optional:  false
  default-token-j7qd2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-j7qd2
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                     From               Message
  ----     ------            ----                    ----               -------
  Warning  FailedScheduling  2m43s (x34 over 7m51s)  default-scheduler  pod has unbound immediate PersistentVolumeClaims

// describe the slave redis pod (which was CrashLoopBackOff)

$ kubectl describe pods/mangy-butterfly-redis-slave-66f687c955-p65sp
Name:               mangy-butterfly-redis-slave-66f687c955-p65sp
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               benny-vm-master/192.168.0.200
Start Time:         Sun, 21 Oct 2018 16:58:59 +0800
Labels:             app=redis
                    chart=redis-4.2.1
                    pod-template-hash=66f687c955
                    release=mangy-butterfly
                    role=slave
Annotations:        checksum/configmap: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
                    checksum/health: 06a9a2fd8a4d658a727ad5506454a26a3a733f6850f5970097992d2384adb7fd
                    checksum/secret: bea53a1ed5c9e71e682f1f5890a424401bc2a44b3c6f6b277b60add291f74974
Status:             Running
IP:                 10.32.0.6
Controlled By:      ReplicaSet/mangy-butterfly-redis-slave-66f687c955
Containers:
  mangy-butterfly-redis:
    Container ID:   docker://00b0314f82e44d7fc7bd4b44a457fa47eab7cc2bb0a43a20a633a813c7d09b54
    Image:          docker.io/bitnami/redis:4.0.11
    Image ID:       docker-pullable://bitnami/redis@sha256:bafa247f093b886f29be5db16b21943c5c659c90b7dd20a66ce9549b07b4a12f
    Port:           6379/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 21 Oct 2018 17:14:14 +0800
      Finished:     Sun, 21 Oct 2018 17:14:44 +0800
    Ready:          False
    Restart Count:  7
    Liveness:       exec [sh -c /health/ping_local_and_master.sh] delay=30s timeout=5s period=10s #success=1 #failure=5
    Readiness:      exec [sh -c /health/ping_local_and_master.sh] delay=5s timeout=1s period=10s #success=1 #failure=5
    Environment:
      REDIS_REPLICATION_MODE:    slave
      REDIS_MASTER_HOST:         mangy-butterfly-redis-master
      REDIS_PORT:                6379
      REDIS_MASTER_PORT_NUMBER:  6379
      REDIS_PASSWORD:            <set to the key 'redis-password' in secret 'mangy-butterfly-redis'>  Optional: false
      REDIS_MASTER_PASSWORD:     <set to the key 'redis-password' in secret 'mangy-butterfly-redis'>  Optional: false
      REDIS_DISABLE_COMMANDS:    FLUSHDB,FLUSHALL
    Mounts:
      /health from health (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-j7qd2 (ro)
Conditions:
  Type             Status
  Initialized      True
  Ready            False
  ContainersReady  False
  PodScheduled     True
Volumes:
  health:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      mangy-butterfly-redis-health
    Optional:  false
  default-token-j7qd2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-j7qd2
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                  From                      Message
  ----     ------     ----                 ----                      -------
  Normal   Scheduled  20m                  default-scheduler         Successfully assigned default/mangy-butterfly-redis-slave-66f687c955-p65sp to benny-vm-master
  Warning  Unhealthy  19m                  kubelet, benny-vm-master  Readiness probe failed: rpc error: code = 2 desc = oci runtime error: exec failed: container "a96679fd0ad3ef03b741e19ecbbb7615cd8929f83a623b42189c5ef5c4998a53" does not exist
  Warning  Unhealthy  18m                  kubelet, benny-vm-master  Readiness probe failed: rpc error: code = 2 desc = oci runtime error: exec failed: container "1b18d301d050d62f6f6d0683f17bd54318275d59a1b8533ad6924756f387bb33" does not exist
  Warning  Unhealthy  18m                  kubelet, benny-vm-master  Readiness probe errored: rpc error: code = Unknown desc = container not running (1b18d301d050d62f6f6d0683f17bd54318275d59a1b8533ad6924756f387bb33)
  Normal   Created    18m (x3 over 19m)    kubelet, benny-vm-master  Created container
  Normal   Started    18m (x3 over 19m)    kubelet, benny-vm-master  Started container
  Normal   Pulled     18m (x3 over 19m)    kubelet, benny-vm-master  Successfully pulled image "docker.io/bitnami/redis:4.0.11"
  Warning  Unhealthy  18m                  kubelet, benny-vm-master  Readiness probe failed: rpc error: code = 2 desc = oci runtime error: exec failed: container "31eba0019038ac1771ad58d84feb6b6bde799878d682e34481bd09dd16503323" does not exist
  Warning  Unhealthy  18m                  kubelet, benny-vm-master  Readiness probe errored: rpc error: code = Unknown desc = container not running (31eba0019038ac1771ad58d84feb6b6bde799878d682e34481bd09dd16503323)
  Warning  Unhealthy  10m (x7 over 19m)    kubelet, benny-vm-master  Readiness probe failed: Warning: Using a password with '-a' option on the command line interface may not be safe. Could not connect to Redis at localhost:6379: Connection refused
  Normal   Pulling    5m16s (x8 over 20m)  kubelet, benny-vm-master  pulling image "docker.io/bitnami/redis:4.0.11"
  Warning  BackOff    23s (x73 over 18m)   kubelet, benny-vm-master  Back-off restarting failed container
$
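The scheduler event on the master pod ("pod has unbound immediate PersistentVolumeClaims") points at storage rather than DNS: the chart creates a PVC (redis-data-mangy-butterfly-redis-master-0) and this kubeadm cluster has no default StorageClass or matching PersistentVolume, so the master stays Pending; the slave then crash-loops because it cannot reach a running master. A minimal hostPath PV sketch that could satisfy the claim on a single-node dev cluster (the name, size, and path here are assumptions; the capacity must cover whatever the chart's PVC actually requests):

```yaml
# Hypothetical PV for the redis master's data claim (dev/single-node only).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-master-pv          # hypothetical name
spec:
  capacity:
    storage: 8Gi                 # must be >= the PVC's request; 8Gi is a guess
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/redis-master     # created on the node; not for production
```

Once the PVC binds and the master schedules, the slave's readiness failures ("Could not connect to Redis at localhost:6379") should clear on their own.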