Hi @roehnan, thanks for reporting this!
I think I know where the problem is. While I work on a fix, may I ask that you try running Dragonfly with an added flag, --version_check=false, and see if this is a sufficient workaround until a fix is in place?
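For a Helm-based deployment, here is a rough sketch of how the flag could be passed through chart values; this assumes the chart exposes an extraArgs value for extra container arguments, and the chart reference is a placeholder, so check your chart's actual values schema:

# Sketch: pass the workaround flag via Helm values (extraArgs is an assumption)
helm upgrade --install dragonfly <chart-reference> \
  --set 'extraArgs={--version_check=false}'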
Hello @chakaz. I've added the --version_check=false flag and the pod now starts correctly. It looks like using this flag as a workaround will be good for now.
Fantastic, I'm glad we found a workaround. As I mentioned, I would like to fix it so that a workaround is not needed. May I kindly ask that you run this command inside the container and paste the output here?
grep nameserver /etc/resolv.conf
The reason I ask is that the crash stack shows something unusual happening during DNS resolution, such as multiple sockets being used in parallel for the lookup. I wonder what causes this. Thanks!
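If it is more convenient, the same check can be run from outside the container with kubectl (the pod name below is a placeholder):

# Check which nameserver the pod resolves against (pod name is hypothetical)
kubectl exec -it dragonfly-0 -- grep nameserver /etc/resolv.conf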
grep nameserver /etc/resolv.conf
nameserver 172.29.32.2
Describe the bug
Dragonfly DB v1.4.0 was installed in a test Kubernetes cluster via Helm. The created pod crashes with a Check failed: state->channel_sock == socket_fd error in dns_resolve.cc:88. v1.3.0 did not exhibit this behavior when installed in the same cluster.
To Reproduce
Install v1.4.0 via Helm.
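A sketch of the kind of install command that reproduces this, assuming the official Dragonfly chart and the myvals.yaml referenced below (the chart location is an assumption and is not taken from this report):

# Hypothetical reproduction: install Dragonfly v1.4.0 via Helm with custom values
helm upgrade --install dragonfly oci://ghcr.io/dragonflydb/dragonfly/helm/dragonfly \
  --version v1.4.0 -f myvals.yaml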
Expected behavior
The Dragonfly DB pod starts successfully.
Environment (please complete the following information):
Reproducible Code Snippet
Dragonfly DB installed via Helm:

Contents of myvals.yaml:

Additional context
Here are the logs from the crashing pod: