Closed · MarkusTeufelberger closed this issue 7 years ago
Just to clarify: I fixed this issue by turning off the IPv6 DNS server manually and rebooting the nodes, they are running fine now. Still something that others might run into and that should be kept in mind for the next version (which hopefully already contains this fix).
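For reference, the manual workaround amounts to making sure a node only advertises an IPv4 resolver. A minimal sketch of what that could look like with systemd-networkd on Container Linux (the file path, interface name, and DNS address are assumptions, not taken from this thread):

```ini
# /etc/systemd/network/10-no-ipv6-dns.network  (hypothetical path and name)
[Match]
Name=eth0

[Network]
DHCP=ipv4
# Pin an explicit IPv4 resolver so the ingress controller's generated
# nginx config never receives an IPv6 nameserver address.
DNS=10.0.2.3
# Do not accept router advertisements, which can also carry IPv6 DNS servers.
IPv6AcceptRA=no
```

As the comment above notes, rebooting the node afterwards makes sure the new network configuration is picked up.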
Does this issue still persist in version 1.5.3? I am trying to create a troubleshooting section.
Awww, it's not public. :(
@mfburnett This should be closed. Fixed in https://github.com/coreos-inc/tectonic/issues/1534
Tectonic 1.5.5-tectonic.3 now uses the Nginx Ingress Controller 0.9.0-beta.3. This should include the upstream fix.
Thanks. :-)
Issue Report Template
Tectonic Version
1.5.2
Environment
What hardware/cloud provider/hypervisor is being used with Tectonic? Provisioner in an LXD container, 1 Worker/1 Controller in VirtualBox VMs
Expected Behavior
Tectonic console is readily available a few minutes after reaching the "connect nodes" stage in the installer.
Actual Behavior
It seems like I am hitting https://github.com/kubernetes/contrib/issues/2188 at the "connect nodes" stage of the installer.
After investigating a bit, the controller node seems to run just fine, but on the worker node two containers (visible in `docker ps -a`) are constantly restarted: gcr.io/google_containers/nginx-ingress-controller:0.8.3 and quay.io/coreos/tectonic-console:v0.9.1.
Looking at the logs for the nginx container shows that it hits an error (with xxxx in place of the actual IPv6 address) before the container is killed.
The tectonic-console container just seems to sync its config for a while (which fails, since nginx won't serve any content there), then times out and dies.
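To confirm the diagnosis above, one quick check is whether the node's resolv.conf hands an IPv6 nameserver to the ingress controller. A small illustrative sketch (not from this thread; the function name and sample addresses are hypothetical):

```python
# Scan resolv.conf text for IPv6 nameservers, which the nginx ingress
# controller 0.8.3 reportedly could not handle in its resolver config.
import ipaddress

def ipv6_nameservers(resolv_conf_text):
    """Return the IPv6 nameserver addresses found in resolv.conf text."""
    found = []
    for line in resolv_conf_text.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0] == "nameserver":
            try:
                addr = ipaddress.ip_address(parts[1])
            except ValueError:
                continue  # skip malformed entries
            if addr.version == 6:
                found.append(parts[1])
    return found

# Example with a made-up resolv.conf; on a node you would read /etc/resolv.conf.
sample = "nameserver 10.0.2.3\nnameserver fd00::1\n"
print(ipv6_nameservers(sample))  # ['fd00::1']
```

If this returns any addresses, the node matches the failure mode described here, and removing the IPv6 resolver (or upgrading the ingress controller) should resolve it.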
Reproduction Steps
Other Information
A workaround seems to be to just turn off IPv6 DNS, or to wait until Tectonic ships with an updated nginx ingress controller, since the issue appears to already have been fixed upstream.