Closed abagonhishead closed 3 years ago
Unfortunately this has been happening for a while now. I have tried to fix it, but couldn't seem to get it working correctly even with two volumes.
My branch is up to date, and I have fixes for the toml file. However, when I deploy to GCE it doesn't work. It doesn't seem to like IPv4-only machines... Trying to sort that out now.
I now have a fully working Kube deployment again. https://github.com/zquestz/dnscrypt-server-docker/commit/f9f9985c72adf05b3e2208a1a9c31d0dc7b61c4d
Hey -- feel free to tell me if I've missed something in documentation somewhere and I'll close this.
The example Kubernetes deployment doesn't appear to store config/state between containers/pods, only keys. This means that the deployment fails to launch after the init job runs, because the entrypoint checks that `/opt/encrypted-dns/etc/encrypted-dns.toml` exists during `start`. Persisting all of `/opt/encrypted-dns/etc` doesn't work as a workaround, because it looks like the `keys` subdirectory is getting mounted somewhere inside the container (I'm assuming intentionally?), in addition to the fact that the `init` section of the entrypoint needs `encrypted-dns.toml.in`, which is hidden by the mounted volume. I'm a Kubernetes novice, but getting this to work appears to need two separate persistent volumes: one for `/opt/encrypted-dns/etc` and one for `/opt/encrypted-dns/etc/keys`, both of which need claiming by both the job and the deployment. Then `encrypted-dns.toml.in` needs copying from the container to the `etc` persistent volume before the init job runs. Is there a better way of doing this?

Let me know if I'm missing something or if you need more info. I can raise a PR for the second volume & claim if needed.
Cheers.