Closed: VoidViper closed this issue 1 year ago
Update: I found this https://githubmemory.com/repo/geerlingguy/internet-monitoring/issues/12
Unfortunately, changing resolvconf.conf and rebooting as suggested there did not fix the issue.
# Configuration for resolvconf(8)
# See resolvconf.conf(5) for details
resolv_conf=/etc/resolv.conf
# If you run a local name server, you should uncomment the below line and
# configure your subscribers configuration files below.
name_servers=127.0.0.1
# Mirror the Debian package defaults for the below resolvers
# so that resolvconf integrates seamlessly.
dnsmasq_resolv=/var/run/dnsmasq/resolv.conf
pdnsd_conf=/etc/pdnsd.conf
unbound_conf=/var/cache/unbound/resolvconf_resolvers.conf
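For anyone retrying that suggestion, a quick way to re-apply the resolvconf.conf change and check the result without a full reboot (a sketch, assuming the resolvconf(8) setup from the header above):
# regenerate /etc/resolv.conf from resolvconf.conf, then inspect it
sudo resolvconf -u
cat /etc/resolv.conf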
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
71073f6e98b3 grafana/grafana "/run.sh" 8 hours ago Up 2 seconds 0.0.0.0:3030->3000/tcp internet-monitoring_grafana_1
0b6b1689913a prom/prometheus:v2.25.2 "/bin/prometheus --c…" 8 hours ago Up 5 minutes 0.0.0.0:9090->9090/tcp internet-monitoring_prometheus_1
37a935e834c7 nginxproxy/nginx-proxy "/app/docker-entrypo…" 8 hours ago Up 5 minutes 0.0.0.0:80->80/tcp internet-monitoring_nginx-proxy_1
5c0080954d3b prom/node-exporter "/bin/node_exporter …" 18 hours ago Up 5 minutes 0.0.0.0:9100->9100/tcp internet-monitoring_nodeexp_1
208beb2fb9ad prom/blackbox-exporter "/bin/blackbox_expor…" 18 hours ago Up 5 minutes 0.0.0.0:9115->9115/tcp internet-monitoring_ping_1
0a148bf01a48 miguelndecarvalho/speedtest-exporter "python -u exporter.…" 18 hours ago Up 5 minutes (healthy) 0.0.0.0:9798->9798/tcp internet-monitoring_speedtest_1
tcp 0 0 0.0.0.0:53 0.0.0.0:* LISTEN 1323/docker-proxy
Found the resolution.
It turns out that if you use a custom network rather than the default bridge network for a container, it does not inherit the host's DNS; instead it uses Docker's embedded resolver at 127.0.0.11. To fix the issue, I just set a custom DNS server in the Jinja template:
{% endif %}

  grafana:
    image: grafana/grafana
    restart: always
    volumes:
      - grafana_data:/var/lib/grafana
      - ./grafana/provisioning/:/etc/grafana/provisioning/
    depends_on:
      - prometheus
    ports:
      - 3030:3000
    dns:
      - 1.1.1.1
    env_file:
      - ./grafana/config.monitoring
    networks:
      - back-tier
      - front-tier
{% if domain_name_enable and domain_name and domain_grafana %}
    depends_on:
      - nginx-proxy
    environment:
      - VIRTUAL_HOST={{ domain_grafana }}.{{ domain_name }}
      - VIRTUAL_PORT=3000
{% endif %}
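To confirm what the container actually ends up with, you can inspect it directly; a quick check using the container name from the docker ps output above:
# on a user-defined network the container's resolv.conf points at Docker's embedded resolver
docker exec internet-monitoring_grafana_1 cat /etc/resolv.conf    # shows nameserver 127.0.0.11
# verify the dns: override from the compose file was applied (should list 1.1.1.1)
docker inspect -f '{{ .HostConfig.Dns }}' internet-monitoring_grafana_1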
Thanks, I had the same issue and your DNS update here fixed it for me too!
Hello,
After running the playbook, I can see that the grafana container is constantly restarting due to this error:
Error: ✗ Get "https://grafana.com/api/plugins/repo/flant-statusmap-panel": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I'm running an RPi 4B, with Pi-hole already installed on the same host. DNS entries for grafana.home.local, pihole.home.local and prometheus.home.local have been added correctly. I'm guessing it's a DNS issue, but I cannot run ping or traceroute inside the container to check connectivity: they want root, and by the time I run any commands, the container has restarted.
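One thing I have not tried yet is testing DNS from a throwaway container attached to the same compose network instead of exec-ing into grafana; a rough sketch (the network name internet-monitoring_back-tier is just my guess from the project and network names):
docker run --rm --network internet-monitoring_back-tier busybox nslookup grafana.com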
uname -a
Linux raspberrypi 5.10.103-v8+ #1529 SMP PREEMPT Tue Mar 8 12:26:46 GMT 2022 aarch64 GNU/Linux
cat /etc/issue*
Raspbian GNU/Linux 10 \n \l
Raspbian GNU/Linux 10
main.yml
docker ps
The discrepancy in when some of the containers were created is due to my attempts to troubleshoot the issue.
Any help would be greatly appreciated.