gregbkr / kubernetes-kargo-logging-monitoring

Deploy kubernetes cluster with kargo
https://greg.satoshi.tech/k8s-your-base-setup-towards-container-orchestration/

Unable to mount volumes for pod logging/elasticsearch #5

Open gsaslis opened 7 years ago

gsaslis commented 7 years ago

Hey there,

Thanks for putting all this together!! Was exactly what I was looking for!

Originally, I used kubespray's efk_enabled flag (btw, you may want to do a "replace all" of kargo → kubespray here, after the recent rename), just as you suggest in section 2 of the readme. That worked fine, but: a. I had an issue with the KIBANA_BASE_URL, which I probably need to raise over there, and b. they're still using the old 2.4.x versions of ES/Kibana.

So, I wanted to give your kubectl apply -f logging approach a go, but I ran into an issue with the PVC you have there.

Here's the error message:

Unable to mount volumes for pod "elasticsearch-1832401789-f41vb_logging(5cddff81-75fa-11e7-ba5a-0019994e86b3)": timeout expired waiting for volumes to attach/mount for pod "logging"/"elasticsearch-1832401789-f41vb". list of unattached/unmounted volumes=[es-data]
Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "logging"/"elasticsearch-1832401789-f41vb". list of unattached/unmounted volumes=[es-data]

Was I supposed to have set up some dynamic volume provisioning for this to work?

gregbkr commented 7 years ago

Hi @gsaslis, you're welcome! It seems like your volume claim (the PVC) didn't get bound.
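To see what the claim is waiting on, you could check something like this (a rough sketch; the namespace and claim name may differ in your setup):

kubectl get pvc -n logging          # is the claim Pending or Bound?
kubectl describe pvc -n logging     # the Events section usually says why it won't bind
kubectl get pv                      # does the cluster have any PVs at all?
kubectl get storageclass            # is there a StorageClass for dynamic provisioning?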

Techs-Y commented 7 years ago

Same issue here. There is no PV created in the playbooks for the PVC to bind to.
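One workaround is to create a PV by hand so the claim can bind. A rough sketch only: the name is arbitrary, the capacity and access mode should match whatever the PVC in logging/ actually requests, and hostPath only makes sense for a single-node test:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-data-pv              # arbitrary example name
spec:
  capacity:
    storage: 4Gi                # assumption: match the size requested by the PVC
  accessModes:
    - ReadWriteOnce             # assumption: match the PVC's access mode
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/es-data          # example path on the node; use a real storage backend in production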

selvik commented 6 years ago

@gsaslis @Techs-Y Did the tip above help fix the PV/PVC issue for you?

gsaslis commented 6 years ago

@selvik I think my problem back then was that I didn't have dynamic provisioning set up, so I ended up having to manually add the StorageClass myself.
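Roughly something like this (a sketch only, shown with the in-tree AWS EBS provisioner as an example; other clouds need a different provisioner, and the class either has to be marked default or be referenced by the PVC):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard                        # example name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # make it the default class
provisioner: kubernetes.io/aws-ebs      # assumption: AWS; use your cloud's provisioner
parameters:
  type: gp2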

@gregbkr do you think it would make sense to add an example like this to your repo?

gregbkr commented 6 years ago

@gsaslis: sure, please make a pull request with the documentation addition and I will merge it. I don't have an environment to test on at the moment, so sorry I can't help much. Thank you for your help!

sahil-sharma commented 6 years ago

Hello, I ran into the same issue (the volume failed to mount), but as you suggested I commented out the volume part of the elasticsearch-deployment.yaml file. After that # kubectl apply -f logging worked fine, and I got access to the Kibana dashboard and to ES on :30200.

From the Kibana dashboard, however, I am unable to do this step (as you suggested): "Check logs coming in kibana, you just need to refresh, select Time-field name : @timestamps + create". Also, if my cluster is on the cloud, how would I load this file (management > Saved Object > Import > logging/dashboards/elk-v1.json)? Any hints on this?

I then moved on to the next step: monitoring. First, there are two folders in your repo, monitoring and monitoring2. What's the difference?

When I ran # kubectl apply -f monitoring I got an error about the node-exporter image: the image you were using is no longer available. I updated it to image: node-exporter:v0.15.2 and it worked.

I can now reach the Grafana page, but there are no logs, and even more surprising to me: the fluentd pods are not running and keep failing with CrashLoopBackOff (fluentd_failing). kubectl describe on a fluentd pod gives (fluentd_describe): ERROR: Back-off restarting failed container.

I don't know what is happening. Can anyone suggest something? Thanks in advance!
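For reference, the crashed container's own log usually says more than the describe output does; something like the following (the pod name is a placeholder, and I'm assuming the fluentd DaemonSet runs in the logging namespace):

kubectl get pods -n logging | grep fluentd           # find the exact fluentd pod name
kubectl logs -n logging fluentd-xxxxx --previous     # log of the container run that crashed
kubectl describe pod -n logging fluentd-xxxxx        # events, image, mounts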