helm / charts

⚠️(OBSOLETE) Curated applications for Kubernetes

changing ownership of '/usr/share/elasticsearch/data': Operation not permitted #11464

Closed. marcingorzynski77 closed this issue 5 years ago.

marcingorzynski77 commented 5 years ago

Is this a request for help?:

Yes

Version of Helm and Kubernetes:

Kubernetes 1.13, Helm 2.12

Which chart: elastic-stack

What happened: The master and data nodes fail to initialise.

What you expected to happen:

I expect them to be up and running.

NAMESPACE     NAME                                                   READY   STATUS                  RESTARTS   AGE     IP              NODE        NOMINATED NODE   READINESS GATES
default       busybox                                                1/1     Running                 3          3h12m   10.244.1.4      lab-ap-04
default       lame-fox-nfs-client-provisioner-78bcc4bb85-rxgbh       1/1     Running                 0          21h     10.244.2.20     lab-ap-05
kube-system   coredns-86c58d9df4-b68jb                               1/1     Running                 0          7h49m   10.244.2.24     lab-ap-05
kube-system   coredns-86c58d9df4-kfkrr                               1/1     Running                 0          7h49m   10.244.4.11     lab-ap-03
kube-system   etcd-lab-ap-01                                         1/1     Running                 0          41d     192.168.1.240   lab-ap-01
kube-system   kube-apiserver-lab-ap-01                               1/1     Running                 0          41d     192.168.1.240   lab-ap-01
kube-system   kube-controller-manager-lab-ap-01                      1/1     Running                 2          41d     192.168.1.240   lab-ap-01
kube-system   kube-flannel-ds-amd64-4gk7r                            1/1     Running                 0          41d     192.168.1.240   lab-ap-01
kube-system   kube-flannel-ds-amd64-8l6hf                            1/1     Running                 0          41d     192.168.1.243   lab-ap-04
kube-system   kube-flannel-ds-amd64-9bp4t                            1/1     Running                 0          41d     192.168.1.244   lab-ap-05
kube-system   kube-flannel-ds-amd64-kfvhj                            1/1     Running                 0          41d     192.168.1.241   lab-ap-02
kube-system   kube-flannel-ds-amd64-swczd                            1/1     Running                 0          41d     192.168.1.242   lab-ap-03
kube-system   kube-proxy-bqlc4                                       1/1     Running                 0          41d     192.168.1.240   lab-ap-01
kube-system   kube-proxy-gpltp                                       1/1     Running                 0          41d     192.168.1.242   lab-ap-03
kube-system   kube-proxy-k7jkj                                       1/1     Running                 0          41d     192.168.1.243   lab-ap-04
kube-system   kube-proxy-q24mj                                       1/1     Running                 0          41d     192.168.1.244   lab-ap-05
kube-system   kube-proxy-sff7d                                       1/1     Running                 0          41d     192.168.1.241   lab-ap-02
kube-system   kube-scheduler-lab-ap-01                               1/1     Running                 2          41d     192.168.1.240   lab-ap-01
kube-system   kubernetes-dashboard-57df4db6b-wlchv                   1/1     Running                 0          41d     10.244.0.103    lab-ap-01
kube-system   tiller-deploy-dbb85cb99-8vlv9                          1/1     Running                 0          10d     10.244.1.3      lab-ap-04
logging       elastic-stack-elasticsearch-client-77ff4559dd-7f6lb    0/1     Running                 0          3h7m    10.244.2.25     lab-ap-05
logging       elastic-stack-elasticsearch-client-77ff4559dd-tbsvf    0/1     Running                 0          3h7m    10.244.3.81     lab-ap-02
logging       elastic-stack-elasticsearch-data-0                     0/1     Init:CrashLoopBackOff   5          5m47s   10.244.3.84     lab-ap-02
logging       elastic-stack-elasticsearch-master-0                   0/1     Init:CrashLoopBackOff   5          5m31s   10.244.2.26     lab-ap-05
logging       elastic-stack-kibana-7dfd4b47dc-wk6lv                  1/1     Running                 0          3h7m    10.244.3.80     lab-ap-02
logging       elastic-stack-logstash-0                               1/1     Running                 3          3h7m    10.244.4.12     lab-ap-03

Unfortunately, I do not see anything useful in the logs:

kubectl logs elastic-stack-elasticsearch-master-0 --namespace logging
Error from server (BadRequest): container "elasticsearch" in pod "elastic-stack-elasticsearch-master-0" is waiting to start: PodInitializing
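Since the main elasticsearch container never gets past PodInitializing, the useful output is on the failing init container rather than on the pod's main container. A minimal sketch of how to get at it (the init container names depend on the chart version, so list them rather than guessing):

kubectl describe pod elastic-stack-elasticsearch-master-0 --namespace logging
# list the init containers the chart defines, then pull logs from the one that is crashing
kubectl get pod elastic-stack-elasticsearch-master-0 --namespace logging -o jsonpath='{.spec.initContainers[*].name}'
kubectl logs elastic-stack-elasticsearch-master-0 --namespace logging -c <failing-init-container>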

Update: I can see this in the logs:

chown: changing ownership of '/usr/share/elasticsearch/data': Operation not permitted

I can see that all PVCs have been provisioned and bound:

kubectl get pv,pvc --all-namespaces -o=wide

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                               STORAGECLASS   REASON   AGE
persistentvolume/pvc-a5220e4f-321a-11e9-adb6-0050568a6782   30Gi       RWO            Delete           Bound    logging/data-elastic-stack-elasticsearch-data-0     nfs-client              3h15m
persistentvolume/pvc-a55d5bfb-321a-11e9-adb6-0050568a6782   4Gi        RWO            Delete           Bound    logging/data-elastic-stack-elasticsearch-master-0   nfs-client              3h15m
persistentvolume/pvc-a581abf9-321a-11e9-adb6-0050568a6782   2Gi        RWO            Delete           Bound    logging/data-elastic-stack-logstash-0               nfs-client              3h15m

NAMESPACE   NAME                                                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
logging     persistentvolumeclaim/data-elastic-stack-elasticsearch-data-0     Bound    pvc-a5220e4f-321a-11e9-adb6-0050568a6782   30Gi       RWO            nfs-client     3h15m
logging     persistentvolumeclaim/data-elastic-stack-elasticsearch-master-0   Bound    pvc-a55d5bfb-321a-11e9-adb6-0050568a6782   4Gi        RWO            nfs-client     3h15m
logging     persistentvolumeclaim/data-elastic-stack-logstash-0               Bound    pvc-a581abf9-321a-11e9-adb6-0050568a6782   2Gi        RWO            nfs-client     3h15m

**I am not sure what else I can do to troubleshoot this problem. Any help will be greatly appreciated.**
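One avenue worth checking, since the failing path sits on a volume from the nfs-client provisioner: the init container runs a chown as root (that is what produces the error above), and an NFS export with root_squash enabled maps root to an anonymous user, which makes that chown fail with exactly this "Operation not permitted" message. A sketch of re-exporting the share without root squashing, to be run on the NFS server (the export path and client range below are placeholders for this lab, not values taken from the chart):

# /srv/nfs/kubedata and 192.168.1.0/24 are example values; match them to the real export
sudo sh -c 'echo "/srv/nfs/kubedata 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)" >> /etc/exports'
sudo exportfs -ra

Relaxing root_squash has security implications, so matching the ownership of the exported directory to the UID the elasticsearch container runs as is an alternative if squashing has to stay on.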

How to reproduce it (as minimally and precisely as possible):

helm install --name elastic-stack --namespace logging stable/elastic-stack
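If overrides are needed while reproducing this, the value keys that control persistence and init behaviour differ between versions of the bundled elasticsearch subchart, so it is safer to dump the defaults and install from an edited file than to guess --set keys. A sketch using the Helm 2 syntax that matches the versions above:

# dump the chart's default values to see which knobs the bundled subcharts expose
helm inspect values stable/elastic-stack > elastic-stack-values.yaml
# edit the file as needed, then install with the overrides
helm install --name elastic-stack --namespace logging -f elastic-stack-values.yaml stable/elastic-stack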

Anything else we need to know:

vsliouniaev commented 5 years ago

Apologies, I linked the wrong issue and it got closed. The original author should be able to get this reopened.

marcingorzynski77 commented 5 years ago

No problem at all


jimmiebtlr commented 5 years ago

Any hints on how you solved this? I think I'm seeing 1/2 pods in a production environment behaving this way at the moment, with the same error message.