Closed · rodriguestiago0 closed this 2 years ago
I'm running a k3s cluster with one master node and one agent.
Hardware: 1 Raspberry Pi 3 and 1 Raspberry Pi 4
Load balancer: MetalLB
values.yml
Logs:
Any ideas?
Sorry, what is the question? The logs look like everything is working.
I can't access the web page. I get an error saying "File not found: /etc/pihole/setupVars.conf".
Do you have any idea why? Did I do anything wrong?
Hi, could you go into a bit more detail about how you access the web UI and where the error is displayed?
Also, what is the status of your *-web service?
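For example, something like this would show it (the pihole namespace here is an assumption, substitute your own):

kubectl get svc -n pihole
# the web service should report TYPE LoadBalancer and have an EXTERNAL-IP assigned by MetalLB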
It looks like you've set the wrong service type. You've set serviceWeb.type to LoadBalancerc. It should be LoadBalancer.
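A minimal sketch of the fix, assuming a release named pihole installed from the mojo2600 Helm repository (both names are assumptions):

helm upgrade pihole mojo2600/pihole \
  --reuse-values \
  --set serviceWeb.type=LoadBalancer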
Hey,
Sorry for the delay.
These are all the services I'm running. When I go to http://192.168.1.201/ in the browser, the result is a white page with "[ERROR] File not found: /etc/pihole/setupVars.conf".
Thank you. Updated values file:
# Default values for pihole.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

# -- The number of replicas
replicaCount: 1

# -- The `spec.strategyType` for updates
strategyType: RollingUpdate

# -- The maximum number of Pods that can be created over the desired number of `ReplicaSet` during updating.
maxSurge: 1

# -- The maximum number of Pods that can be unavailable during updating
maxUnavailable: 1

image:
  # -- the repository to pull the image from
  repository: "pihole/pihole"
  # -- the docker tag
  tag: v5.8.1
  # -- the pull policy
  pullPolicy: IfNotPresent

dnsHostPort:
  # -- set this to true to enable dnsHostPort
  enabled: false
  # -- default port for this pod
  port: 53

# -- Configuration for the DNS service on port 53
serviceDns:
  # -- deploys a mixed (TCP + UDP) Service instead of separate ones
  mixedService: false
  # -- `spec.type` for the DNS Service
  type: LoadBalancer
  # -- The port of the DNS service
  port: 53
  # -- `spec.externalTrafficPolicy` for the DNS Service
  externalTrafficPolicy: Local
  # -- A fixed `spec.loadBalancerIP` for the DNS Service
  loadBalancerIP: ""
  # -- Annotations for the DNS service
  annotations:
    metallb.universe.tf/address-pool: default
    metallb.universe.tf/allow-shared-ip: pihole-svc

# -- Configuration for the DHCP service on port 67
serviceDhcp:
  # -- Generate a Service resource for DHCP traffic
  enabled: false
  # -- `spec.type` for the DHCP Service
  type: NodePort
  # -- `spec.externalTrafficPolicy` for the DHCP Service
  externalTrafficPolicy: Local
  # -- A fixed `spec.loadBalancerIP` for the DHCP Service
  loadBalancerIP: ""
  # -- Annotations for the DHCP service
  annotations: {}
  # metallb.universe.tf/address-pool: network-services
  # metallb.universe.tf/allow-shared-ip: pihole-svc

# -- Configuration for the web interface service
serviceWeb:
  # -- Configuration for the HTTP web interface listener
  http:
    # -- Generate a service for HTTP traffic
    enabled: true
    # -- The port of the web HTTP service
    port: 80
  # -- Configuration for the HTTPS web interface listener
  https:
    # -- Generate a service for HTTPS traffic
    enabled: true
    # -- The port of the web HTTPS service
    port: 443
  # -- `spec.type` for the web interface Service
  type: LoadBalancer
  # -- `spec.externalTrafficPolicy` for the web interface Service
  externalTrafficPolicy: Local
  # -- A fixed `spec.loadBalancerIP` for the web interface Service
  loadBalancerIP: ""
  # -- Annotations for the web interface service
  annotations:
    metallb.universe.tf/address-pool: default
    metallb.universe.tf/allow-shared-ip: pihole-svc

virtualHost: pi.hole

# -- Configuration for the Ingress
ingress:
  # -- Generate an Ingress resource
  enabled: false
  # -- Annotations for the ingress
  annotations: {}
  # kubernetes.io/ingress.class: nginx
  # kubernetes.io/tls-acme: "true"
  path: /
  hosts:
    # virtualHost (default value is pi.hole) will be appended to the hosts
    - chart-example.local
  tls: []
  # - secretName: chart-example-tls
  #   hosts:
  #     # virtualHost (default value is pi.hole) will be appended to the hosts
  #     - chart-example.local

# -- Probes configuration
probes:
  # -- Configure the healthcheck for the ingress controller
  liveness:
    # -- Generate a liveness probe
    enabled: true
    initialDelaySeconds: 60
    failureThreshold: 10
    timeoutSeconds: 5
  readiness:
    # -- Generate a readiness probe
    enabled: true
    initialDelaySeconds: 60
    failureThreshold: 3
    timeoutSeconds: 5

# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases the chances that charts run on environments with
# few resources, such as Minikube. If you do want to specify resources, uncomment the
# following lines, adjust them as necessary, and remove the curly braces after 'resources:'.
resources: {}
# limits:
#   cpu: 100m
#   memory: 128Mi
# requests:
#   cpu: 100m
#   memory: 128Mi

# -- `PersistentVolumeClaim` configuration
persistentVolumeClaim:
  # -- set to true to use a PVC
  enabled: true
  # -- specify an existing `PersistentVolumeClaim` to use
  existingClaim: "pihole"
  # -- Annotations for the `PersistentVolumeClaim`
  annotations: {}
  accessModes:
    - ReadWriteOnce
  size: "500Mi"
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner. (gp2 on AWS, standard on
  ## GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  ## If subPath is set, mount a subfolder of the volume instead of the root of the volume.
  ## This is especially handy for volume plugins that don't natively support sub mounting (like glusterfs).
  # subPath: "pihole"

nodeSelector: {}

tolerations: []

# Reference: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
topologySpreadConstraints: []
# - maxSkew: <integer>
#   topologyKey: <string>
#   whenUnsatisfiable: <string>
#   labelSelector: <object>

affinity: {}

# -- Administrator password when not using an existing secret (see below)
# adminPassword: "admin"

# -- Use an existing secret for the admin password.
admin:
  # -- Specify an existing secret to use as admin password
  existingSecret: "pihole-secret"
  # -- Specify the key inside the secret to use
  passwordKey: "password"

# -- extraEnvVars is a list of extra environment variables to set for pihole to use
extraEnvVars:
  TZ: "Europe/Lisbon"

# -- extraEnvVarsSecret is a list of secrets to load in as environment variables.
extraEnvVarsSecret: {}
# env_var:
#   name: secret-name
#   key: secret-key

# -- default upstream DNS 1 server to use
DNS1: "8.8.8.8"

# -- default upstream DNS 2 server to use
DNS2: "8.8.4.4"

antiaff:
  # -- set to true to enable antiaffinity (example: 2 pihole DNS in the same cluster)
  enabled: false
  # -- Here you can set the pihole release (you set in `helm install <releasename> ...`)
  # you want to avoid
  avoidRelease: pihole1
  # -- Here you can choose between preferred or required
  strict: true

doh:
  # -- set to true to enable DNS over HTTPS via cloudflared
  enabled: false
  name: "cloudflared"
  repository: "crazymax/cloudflared"
  tag: latest
  pullPolicy: IfNotPresent
  # -- Here you can pass environment variables to the DoH container, for example:
  envVars: {}
  # TUNNEL_DNS_UPSTREAM: "https://1.1.1.2/dns-query,https://1.0.0.2/dns-query"
  # -- Probes configuration
  probes:
    # -- Configure the healthcheck for the doh container
    liveness:
      # -- set to true to enable the liveness probe
      enabled: true
      # -- defines the initial delay for the liveness probe
      initialDelaySeconds: 60
      # -- defines the failure threshold for the liveness probe
      failureThreshold: 10
      # -- defines the timeout in seconds for the liveness probe
      timeoutSeconds: 5

dnsmasq:
  # -- Add upstream dns servers. All lines will be added to the pihole dnsmasq configuration
  upstreamServers: []
  # - server=/foo.bar/192.168.178.10
  # - server=/bar.foo/192.168.178.11

  # -- Add custom dns entries to override the dns resolution. All lines will be added to the pihole dnsmasq configuration.
  customDnsEntries: []
  # - address=/foo.bar/192.168.178.10
  # - address=/bar.foo/192.168.178.11

  # -- Dnsmasq reads the /etc/hosts file to resolve IPs. You can add additional entries if you like
  additionalHostsEntries: []
  # - 192.168.0.3 host4
  # - 192.168.0.4 host5

  # -- Static DHCP config
  staticDhcpEntries: []
  # - dhcp-host=MAC_ADDRESS,IP_ADDRESS,HOSTNAME

  # -- Other options
  customSettings:
  # - rebind-domain-ok=/plex.direct/

  # -- Here we specify custom cname entries that should point to `A` records or
  # elements in the customDnsEntries array.
  # The format should be:
  # - cname=cname.foo.bar,foo.bar
  # - cname=cname.bar.foo,bar.foo
  # - cname=cname record,dns record
  customCnameEntries: []

# -- list of adlists to import during initial start of the container
adlists: {}
# If you want to provide blocklists, add them here.
# - https://hosts-file.net/grm.txt
# - https://reddestdream.github.io/Projects/MinimalHosts/etc/MinimalHostsBlocker/minimalhosts

# -- list of whitelisted domains to import during initial start of the container
whitelist: {}
# If you want to provide whitelisted domains, add them here.
# - clients4.google.com

# -- list of blacklisted domains to import during initial start of the container
blacklist: {}
# If you want to have special domains blacklisted, add them here
# - *.blacklist.com

# -- list of blacklisted regex expressions to import during initial start of the container
regex: {}
# Add regular expression blacklist items
# - (^|\.)facebook\.com$

# -- values that should be added to pihole-FTL.conf
ftl: {}
# Add values for pihole-FTL.conf
# MAXDBDAYS: 14

# -- port the container should use to expose HTTP traffic
webHttp: "80"

# -- port the container should use to expose HTTPS traffic
webHttps: "443"

# -- should the container use the host network
hostNetwork: "true"

# -- should the container run in privileged mode
privileged: "true"

customVolumes:
  # -- set this to true to enable custom volumes
  enabled: false
  # -- any volume type can be used here
  config: {}
  # hostPath:
  #   path: "/mnt/data"

# -- Additional annotations for pods
podAnnotations: {}
# The example below allows Prometheus to scrape the metrics port (requires the pihole-exporter sidecar to be enabled)
# prometheus.io/port: '9617'
# prometheus.io/scrape: 'true'

monitoring:
  # -- Prefer adding Prometheus scrape annotations over enabling podMonitor.
  podMonitor:
    # -- set this to true to enable podMonitor
    enabled: false
  # -- Sidecar configuration
  sidecar:
    # -- set this to true to run pihole-exporter as a sidecar
    enabled: false
    port: 9617
    image:
      repository: ekofr/pihole-exporter
      tag: 0.0.10
      pullPolicy: IfNotPresent
    resources:
      limits:
        memory: 128Mi
      # requests:
      #   cpu: 100m
      #   memory: 128Mi

podDnsConfig:
  enabled: true
  policy: "None"
  nameservers:
    - 127.0.0.1
    - 8.8.8.8
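(For reference, a values file like this would be rolled out with something along these lines; the repository URL, release name, and namespace are assumptions:)

helm repo add mojo2600 https://mojo2600.github.io/pihole-kubernetes/
helm upgrade --install pihole mojo2600/pihole -f values.yaml -n pihole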
More information:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: "pihole"
  name: "pihole"
spec:
  storageClassName: "manual"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: "500Mi"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "pihole"
  labels:
    type: "local"
spec:
  storageClassName: "manual"
  capacity:
    storage: "500Mi"
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/HDD/pihole"
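(Assuming the two manifests are saved to files, they would be applied and the binding checked like this; the filenames are hypothetical:)

kubectl apply -f pihole-pv.yaml
kubectl apply -f pihole-pvc.yaml
kubectl get pvc -n pihole   # STATUS should show "Bound"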
pi@kube-master /data/HDD/pihole ❯ ls -la
total 7432
drwxrwsr-x 3 999 spi 4096 Sep 21 17:58 .
drwxrwsr-- 6 root users 4096 Sep 21 16:36 ..
-rw-r--r-- 1 root spi 14 Sep 21 17:25 GitHubVersions
-rw-r--r-- 1 root spi 0 Sep 21 17:25 custom.list
-rw-r--r-- 1 root spi 618 Sep 21 17:25 dns-servers.conf
-rw-rw-r-- 1 999 spi 5558272 Sep 21 17:25 gravity.db
-rw-r--r-- 1 root spi 1940475 Sep 21 17:25 list.1.raw.githubusercontent.com.domains
-rw-rw-r-- 1 root spi 95 Sep 21 17:25 list.1.raw.githubusercontent.com.domains.sha1
-rw-r--r-- 1 root spi 48 Sep 21 17:25 local.list
-rw-r--r-- 1 root spi 20 Sep 21 17:50 localbranches
-rw-r--r-- 1 root spi 44 Sep 21 17:50 localversions
drwxrwsr-- 2 root spi 4096 Sep 21 17:25 migration_backup
-rw-r--r-- 1 999 spi 0 Sep 21 17:25 pihole-FTL.conf
-rw-r--r-- 1 root spi 53248 Sep 21 17:58 pihole-FTL.db
-rw-rw-r-- 1 root spi 254 Sep 21 17:25 setupVars.conf
-rw-rw-r-- 1 root spi 0 Sep 21 17:25 setupVars.conf.update.bak
The DNS server is working.
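(For example, a lookup against the service IP confirms it; the IP is an assumption based on the shared MetalLB address above:)

dig @192.168.1.201 pi.hole +short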
Are there files on your host under the given path /data/HDD/pihole?
Yes, you can see it in my last reply.
Ah, ok, sorry... What happens if you connect to the container and do an ls there? Like kubectl exec -ti pihole-XYZ -- /bin/sh. Maybe it is just an issue with wrong file privileges?
# ls /etc/pihole
GitHubVersions dns-servers.conf list.1.raw.githubusercontent.com.domains local.list localversions pihole-FTL.conf setupVars.conf
custom.list gravity.db list.1.raw.githubusercontent.com.domains.sha1 localbranches migration_backup pihole-FTL.db setupVars.conf.update.bak
# ls -la /etc/pihole
total 7432
drwxrwsr-x 3 pihole pihole 4096 Sep 22 11:14 .
drwxr-xr-x 1 root root 4096 Sep 21 16:25 ..
-rw-r--r-- 1 root pihole 14 Sep 21 16:25 GitHubVersions
-rw-r--r-- 1 root pihole 0 Sep 21 16:25 custom.list
-rw-r--r-- 1 root pihole 618 Sep 21 16:25 dns-servers.conf
-rw-rw-r-- 1 pihole pihole 5558272 Sep 21 16:25 gravity.db
-rw-r--r-- 1 root pihole 1940475 Sep 21 16:25 list.1.raw.githubusercontent.com.domains
-rw-rw-r-- 1 root pihole 95 Sep 21 16:25 list.1.raw.githubusercontent.com.domains.sha1
-rw-r--r-- 1 root pihole 48 Sep 21 16:25 local.list
-rw-r--r-- 1 root pihole 20 Sep 22 11:10 localbranches
-rw-r--r-- 1 root pihole 44 Sep 22 11:10 localversions
drwxrwsr-- 2 root pihole 4096 Sep 21 16:25 migration_backup
-rw-r--r-- 1 pihole pihole 0 Sep 21 16:25 pihole-FTL.conf
-rw-r--r-- 1 root pihole 53248 Sep 22 11:14 pihole-FTL.db
-rw-rw-r-- 1 root pihole 254 Sep 21 16:25 setupVars.conf
-rw-rw-r-- 1 root pihole 0 Sep 21 16:25 setupVars.conf.update.bak
Hm... I have no idea... I'd try to change the ownership of setupVars.conf to the user that is running the pihole service. But I have no idea why the container complains about not finding the files. Maybe there is something in the pihole logs inside the container?
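A minimal sketch of that ownership change, assuming the container's pihole user maps to UID 999 on the host (as the ls output above suggests):

# on the host:
sudo chown 999:999 /data/HDD/pihole/setupVars.conf
# or from inside the container (pod name is a placeholder):
kubectl exec -ti pihole-XYZ -- chown pihole:pihole /etc/pihole/setupVars.conf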
I checked all the logs and everything looks fine.
The error with setupVars.conf is thrown in this line, so the container has no access to the file. Why? No idea... it must be an issue with the access rights or maybe with the file mount. Sorry, I have no idea why this is.
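One way to confirm that theory, assuming the admin UI in this image is served by lighttpd running as www-data, is to try reading the file as that user:

kubectl exec -ti pihole-XYZ -- su -s /bin/sh www-data -c 'cat /etc/pihole/setupVars.conf'
# "Permission denied" here would point to access rights rather than a missing file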
Thank you for your help. I will open a ticket on the pihole repo and link it here in case anyone has the same problem as me.
Hey @MoJo2600, I just noticed this issue only happens if I write to an NFS share. If I write to the local drive, the UI loads correctly.
Hm, that sounds like #130 - maybe it helps?
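In case it is the same root cause, a sketch of the kind of NFS export change that would address it; the assumption (not confirmed by this thread) is that root_squash on the export makes root-owned files like setupVars.conf unreadable from the pod:

# on the NFS server:
exportfs -v        # inspect the current export flags
# example /etc/exports entry, adjust path and subnet to your setup:
# /data/HDD/pihole 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
sudo exportfs -ra  # re-export after editing /etc/exports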
Did it help?
Yes it did