perfectra1n opened 1 year ago
Personally I've just been maintaining my own custom values.yml file that has my preferred dnsmasq settings. Doing this, you can persist your custom settings between deployments, and it's ideal for automating the Helm deployment:
customDnsEntries:
- address=/s1.mydomain.cloud/10.10.10.123
- address=/s2.mydomain.cloud/10.10.10.124
- address=/s3.mydomain.cloud/10.10.10.125
- address=/s4.mydomain.cloud/10.10.10.126
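Note that in this chart those entries sit under the dnsmasq key (as the CNAME snippet later in this thread shows), so a complete values file would look more like the sketch below; the file name is illustrative and the entries are just the ones from above.

# my-values.yaml (sketch, assuming the chart's dnsmasq.customDnsEntries layout)
dnsmasq:
  customDnsEntries:
    - address=/s1.mydomain.cloud/10.10.10.123
    - address=/s2.mydomain.cloud/10.10.10.124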
Although you're right that, frustratingly, this will not show in the GUI console, it will at least make the entries persist between deployments.
Alternatively, you can use the environment variable PIHOLE_DNS_ if you wish to externalise the local DNS mechanism for Pihole, or continue to use an NFS mount lol.
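For what it's worth, a minimal sketch of that idea: point Pi-hole's upstream resolution at an external DNS server that hosts your local records. PIHOLE_DNS_ comes from the upstream docker-pi-hole image (semicolon-separated upstream servers); the exact values key this chart uses to pass extra container environment variables varies by chart version, so treat the key below as an assumption and check the chart's values.yaml.

# Sketch only: the values key and the addresses are assumptions
extraEnvVars:
  PIHOLE_DNS_: "10.10.10.2;10.10.10.3"   # illustrative external DNS servers hosting the local records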
Just saw you wanted a CNAME, not an A record, lol.
Ok, good point, I personally haven't hit this issue. Out of curiosity, what's the use case, an ingress class?
Looking through the spec, there's an option to do this the same way I proposed for A records. Have you tried this in the values.yml?
dnsmasq:
  customCnameEntries:
    # Here we specify custom CNAME entries that should point to `A` records or
    # elements in the customDnsEntries array.
    # The format should be:
    - cname=cname.foo.bar,foo.bar
    - cname=cname.bar.foo,bar.foo
Gotcha, I prefer to manage CNAME / A records via the GUI so that I don't have them publicly facing (public GitOps repo). Since the A records are already saved across restarts, I thought it would be useful to have CNAME records persisted as well, to match user expectations.
Fair point, if it's the public-facing element that concerns you and you're not overly bothered by the cosmetics, you could just supply a different values file that's not in public source control at runtime.
Understood; however, as a user, if I make a change in the GUI, I would expect that change to persist across "restarts". Since A records do, I would expect CNAME records to as well.
Maybe we should make sure to persist all the data to one PVC when persistence is enabled. I don't think it makes sense to have multiple PVCs.
Fair, I could make that happen. I’ll submit a PR for it :)
My workaround for this was to mount both /etc/pihole and /etc/dnsmasq.d as subPaths from the config PVC. If this is done in the chart, I guess it will be a breaking change for those of us using a PVC for persistence.
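In case it's useful, a minimal sketch of that workaround using the same extraVolumes / extraVolumeMounts pattern shown further down in this thread; the claim name and subPath names are illustrative, and the same claim is simply referenced twice.

# Sketch: back both Pi-hole config directories with a single existing PVC
extraVolumes:
  pihole-conf:
    persistentVolumeClaim:
      claimName: pihole-config-pvc     # illustrative claim name
  dnsmasq-conf:
    persistentVolumeClaim:
      claimName: pihole-config-pvc     # same claim, referenced a second time
extraVolumeMounts:
  pihole-conf:
    mountPath: /etc/pihole
    subPath: etc-pihole
  dnsmasq-conf:
    mountPath: /etc/dnsmasq.d
    subPath: etc-dnsmasq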
Should I make a PR for this solution to fix this issue? My assumption is that many users manage CNAMEs from the GUI in a homelab, since things are quite volatile if you are constantly introducing new stuff.
You're more than welcome to do the PR. I manage it like this (which I think is the same as you):
extraVolumes:
  dnsmasq-conf:
    persistentVolumeClaim:
      claimName: pihole-dnsmasq-pvc
extraVolumeMounts:
  dnsmasq-conf:
    mountPath: /etc/dnsmasq.d
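For completeness, the claim referenced above has to exist already; a minimal sketch of such a PVC follows, where the size, access mode and optional storage class are assumptions to adjust for your cluster.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pihole-dnsmasq-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  # storageClassName: nfs-client   # optional, illustrative class name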
Still feels hacky, and it's behaviour that replicates that of a StatefulSet. I've personally addressed this problem in my fork by automating the deployment of dnsmasq records in the values.yaml via GitHub Actions into my Kubernetes cluster. I don't want to start a ClickOps vs GitOps discussion, but doing it via YAML feels cleaner.
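For anyone curious, a minimal sketch of that kind of automation; the workflow layout, secret names, file paths and namespace are all assumptions, and later -f files take precedence so private DNS entries can stay out of the public repo.

# .github/workflows/deploy-pihole.yaml (sketch)
name: Deploy Pi-hole
on:
  push:
    paths:
      - "pihole/values.yaml"   # illustrative path to the public values file
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy chart with public and private values
        env:
          KUBECONFIG_DATA: ${{ secrets.KUBECONFIG }}            # hypothetical secret
          PRIVATE_VALUES: ${{ secrets.PIHOLE_PRIVATE_VALUES }}  # hypothetical secret
        run: |
          echo "$KUBECONFIG_DATA" > kubeconfig
          export KUBECONFIG="$PWD/kubeconfig"
          echo "$PRIVATE_VALUES" > private-values.yaml
          helm repo add mojo2600 https://mojo2600.github.io/pihole-kubernetes/
          helm repo update
          helm upgrade --install pihole mojo2600/pihole \
            --namespace pihole --create-namespace \
            -f pihole/values.yaml \
            -f private-values.yaml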
If you define CNAMEs via the GUI instead of the chart's values, the file at /etc/dnsmasq.d/05-pihole-custom-cname.conf isn't backed by any PVC, so the file disappears. I'm happy to make a PR, but I was curious what you thought the best way to tackle the issue was - I've just mounted an additional NFS PVC at /etc/dnsmasq.d/ for now lol.