carlosedp / cluster-monitoring

Cluster monitoring stack for clusters based on Prometheus Operator
MIT License

Ubuntu 18.04 issue #101

Closed Nahum48 closed 3 years ago

Nahum48 commented 4 years ago

Screenshot from 2020-11-18 06-28-25

Hi,

As you can see in my screenshot, I'm having trouble getting this working on my K3s cluster. I followed the installation instructions from Jeff Geerling's YouTube channel.

My environment runs on VMs, not Raspberry Pis. I have 3 workers and 1 master.

When I try to browse to these ingresses:

    prometheus-k8s      prometheus.192.168.1.39.nip.io     192.168.1.37   80, 443   89m
    alertmanager-main   alertmanager.192.168.1.39.nip.io   192.168.1.37   80, 443   89m
    grafana             grafana.192.168.1.39.nip.io        192.168.1.37   80, 443   89m

all of them fail; the screenshot I attached shows an example.

Does anyone know if there's an issue with running the cluster on Ubuntu?

magikmw commented 4 years ago

Your nip.io addresses point to 192.168.1.39, but the IP address on your ingresses is 192.168.1.37. Check the masterIP and suffixDomain vars in vars.jsonnet.
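For reference, the relevant part of vars.jsonnet looks roughly like this (a sketch: the exact field names and structure may differ between versions of the repo, and the IPs here are just the ones from this thread):

```jsonnet
{
  // k3s section: master_ip should be the address your ingresses answer on
  // (the ADDRESS column of `kubectl get ingress -n monitoring`).
  k3s: {
    enabled: true,
    master_ip: ['192.168.1.37'],
  },

  // Suffix used to build ingress hostnames like prometheus.<suffixDomain>.
  // With nip.io, the embedded IP must match the ingress address above:
  suffixDomain: '192.168.1.37.nip.io',
}
```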

Nahum48 commented 4 years ago

Hi Magi and thanks for the comment.

Two questions, please:

  1. Do the addresses have to be the same in the nip.io hostname and the ingress?
  2. I'm working in a home lab, not with an organization domain. What DNS do I need to use? (Is it necessary?)
magikmw commented 4 years ago
  1. If you're using the nip.io domain, the IP embedded in the hostname has to match the IP address of the ingress (so both the HOST and ADDRESS columns have to align in the kubectl get ingress -n monitoring output).
  2. nip.io is a special domain set up in a way that makes it feasible to use in this case: it points to whatever IP address you put in as the subdomain of *.nip.io. It's actually perfect if you don't have your own domain and DNS set up. Read more here: https://nip.io/
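To make the alignment check concrete, here is a small shell sketch (the hostname and ingress address are the values from this thread; in practice they come from the HOST and ADDRESS columns of kubectl get ingress -n monitoring):

```shell
# Hostname and ingress address as reported earlier in this thread:
host="prometheus.192.168.1.39.nip.io"
ingress_addr="192.168.1.37"

# nip.io resolves <anything>.<IP>.nip.io to <IP>, so the target IP can be
# read straight out of the hostname:
embedded_ip=$(echo "$host" | sed -E 's/.*\.([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)\.nip\.io$/\1/')

if [ "$embedded_ip" = "$ingress_addr" ]; then
  echo "aligned: $embedded_ip"
else
  echo "mismatch: hostname points at $embedded_ip, ingress is at $ingress_addr"
fi
```

With the values above this prints the mismatch branch, which is exactly the situation described in this issue.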
Nahum48 commented 4 years ago

Thank you magikmw for your quick reply.

I'll read more about it and apply the settings on my master host machine.

Nahum48 commented 4 years ago

My journey to get this cluster working continues...

New issue today: instead of the "page cannot be displayed" error, I'm now getting a "service is not available" error. And as you can see in the screenshot I added, I have a LOT of pods in CrashLoopBackOff. What is that, and how can I debug it?

As always, thank you for your answers, and remember, I'm a rookie in the Kubernetes world, so bear with me. :)

P.S. I also added a screenshot of my vars.jsonnet (maybe it'll help, although I think the settings are correct). Screenshot from 2020-11-19 02-53-07

magikmw commented 4 years ago

Your nip.io hostnames and IP are still misaligned. It seems like your ingress controller uses 192.168.1.17, but you're configuring cluster-monitoring to use 192.168.1.37 (it's in the output of kubectl get ingress -n monitoring). I can't help you much more without getting into your cluster networking (maybe the IP addresses change due to DHCP?).

As for the CrashLoopBackOff: try using the kubectl describe command to see the events generated while the pods were created. This article helped me a bunch before: LINK.
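For example, assuming a crashing pod named prometheus-k8s-0 (a hypothetical name; substitute one from your own cluster), the usual debugging steps look like this:

```shell
# List pods and their restart counts to find the crashing ones
kubectl get pods -n monitoring

# The Events section at the bottom usually says why the pod keeps restarting
kubectl describe pod prometheus-k8s-0 -n monitoring

# Logs from the previous (crashed) container instance
kubectl logs prometheus-k8s-0 -n monitoring --previous
```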

Nahum48 commented 4 years ago

Hi there,

Good news: it seems like we're on the right path to success. As you suggested, I changed the IP address to 192.168.1.17 in the vars file, and things started working. :+1:

As for the errors, does this give you any pointer to what the routing issue is? My firewall service is down, and I don't have iptables installed. Screenshot from 2020-11-19 08-42-22

magikmw commented 4 years ago

I thought you needed iptables for the internal networking magic? As for the errors, I'm not sure. It seems like there's trouble reaching the metrics service; maybe it's not installed, or there's a networking problem.

Nahum48 commented 4 years ago

Hi Magi,

I'll get back on the cluster on Sunday and see if there are networking issues (communication between the VMs). I don't use iptables (it's not installed on my VMs).

carlosedp commented 3 years ago

This appears to be a host configuration issue and not a monitoring stack issue.

Closing.