Closed · yuko11 closed this issue 6 years ago
@yuko11
Traffic coming to any worker node is processed by Tectonic Ingress
How are you routing traffic? Do you expect to be able to hit worker-node-ip/my-route and be properly redirected?
I imagine that if you define a hostname in the Ingress configuration but then don't use it when you attempt to connect, the nginx-ingress-controller won't route properly.
@kbrwn Thanks, Kyle! Yes, I expect that I can run `curl -H "Host: myhost.local" http://192.168.1.x`, where 192.168.1.x is the IP of any worker node (regardless of whether an application pod is scheduled there or not), and that the request reaches the application.
I'm wondering if we need an ingress controller deployed in each namespace in Tectonic. Most probably that's the reason for the issue.
The ingress controller pods show logs: `upstream timed out (110: Connection timed out) while connecting to upstream`. It appears that I can't curl from the ingress-controller pod to the application pod.
The issue is due to network policies. But it's not really clear how to allow traffic from the ingress controller to the service pods when it crosses nodes (in the case where the request initially arrived at a node without an application Pod and was source-NATed).
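For anyone hitting the same problem, here is a minimal sketch of a NetworkPolicy that admits traffic from the ingress controller's namespace into the application pods. All selectors here are assumptions: the application namespace, the `app: my-app` pod label, and the `name: tectonic-system` namespace label are hypothetical and must be adjusted to match your cluster (check with `kubectl get ns --show-labels` and `kubectl get pods -n tectonic-system --show-labels`).

```yaml
# Sketch only: all labels and namespace names are assumptions, not taken
# from the actual cluster.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ingress-controller
  namespace: my-namespace          # namespace of the application pods
spec:
  podSelector:
    matchLabels:
      app: my-app                  # the pods behind the Service
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: tectonic-system    # namespace running the ingress controller
```

Note that because the NGINX ingress controller connects to pod IPs directly (via the Endpoints API), the policy has to allow the controller pods themselves; and depending on the CNI plugin, source-NATed cross-node traffic may appear to come from a node IP rather than the controller pod IP, in which case a namespace selector alone may not be enough.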
Issue Report Template
Tectonic Version
1.8.4-tectonic.3
Environment
BareMetal
What hardware/cloud provider/hypervisor is being used with Tectonic? VMware
Expected Behavior
There is the following setup: Ingress -> Service (ClusterIP) -> Pods. Traffic coming to any worker node is processed by Tectonic Ingress (which appears to be the NGINX ingress controller). Tectonic Ingress forwards the traffic to the destination pod (regardless of which node it is scheduled on).
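For reference, a minimal sketch of the setup described above (all names, labels, and ports are hypothetical, not taken from the actual cluster; the Ingress API group matches the extensions/v1beta1 version current for Kubernetes 1.8):

```yaml
# Hypothetical example, assuming application pods labeled app: my-app
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: myhost.local
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80
```

With a host rule like this, requests must carry a matching `Host: myhost.local` header for the controller to route them, which is why the curl commands above set it explicitly.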
Actual Behavior
Traffic is forwarded to the pod only when it arrives at a node with a scheduled pod. If traffic arrives at a node with no Pod belonging to the Service specified as the backend in the Ingress configuration, we get a 504 Gateway Time-out.
Reproduction Steps
Other Information
As per Nginx ingress controller documentation: The NGINX ingress controller does not use Services to route traffic to the pods. Instead it uses the Endpoints API in order to bypass kube-proxy to allow NGINX features like session affinity and custom load balancing algorithms. It also removes some overhead, such as conntrack entries for iptables DNAT.
I'm wondering how to troubleshoot this, or whether I misunderstand how it is supposed to work.