I had this exact behavior, and it is perfectly reproducible. I ended up noticing that NodePort was acting funny too, and it was in fact the same behavior described here: https://github.com/kubernetes/kubernetes/issues/58908
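For anyone else triaging, a quick way to check for the condition described in that issue is to look at the FORWARD chain's default policy on a worker node (this is just my own check, not anything from the Quick Start):

```bash
# Prints "-P FORWARD DROP" on affected nodes, "-P FORWARD ACCEPT" otherwise.
sudo iptables -S FORWARD | head -n 1
```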
Running `sudo iptables -P FORWARD ACCEPT` on every node fixes that issue, and it also resolves the problem at the ELB. Should this be added to the CF template so that Services work as expected after the Quick Start?
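For what it's worth, one way the workaround could be baked into node bootstrap (e.g. via user-data) is a small systemd unit; this is only a sketch of the idea, and the unit name and file placement are my own, not anything currently in the CF template:

```bash
# Sketch only: persist the FORWARD ACCEPT policy across reboots on a node.
cat <<'EOF' | sudo tee /etc/systemd/system/iptables-forward-accept.service
[Unit]
Description=Workaround for kubernetes/kubernetes#58908: set FORWARD policy to ACCEPT
After=network.target

[Service]
Type=oneshot
ExecStart=/sbin/iptables -P FORWARD ACCEPT
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable --now iptables-forward-accept.service
```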
This one may belong in wardroom.
/cc @craigtracey
This is really bad -- this hit me during a demo today.
@jbeda poked all the right people on the upstream issue; it looks like the problem is rooted there.
k, this comment leads me to think a change in the networking setup may have occurred - https://github.com/kubernetes/kubernetes/issues/58908#issuecomment-364302823
As I was digging through the scripts, config, and manifests that we are applying, I started to think it might be related to a mismatch in the default pod network configuration between kubeadm and Calico. I'm working on validating this now.
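To illustrate the kind of mismatch I mean (the CIDR below is an example, not a value pulled from the Quick Start templates): the pod network CIDR handed to kubeadm has to agree with the IP pool Calico is configured with.

```bash
# kubeadm side: the pod network CIDR given to the control plane.
kubeadm init --pod-network-cidr=192.168.0.0/16

# Calico side: the pool configured on calico-node must cover the same range.
kubectl -n kube-system get daemonset calico-node \
  -o jsonpath='{.spec.template.spec.containers[*].env[?(@.name=="CALICO_IPV4POOL_CIDR")].value}'
```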
@detiber: how do I verify this change? I just tried building a cluster using the template here:
and I'm still seeing the same pathology.
How would I know when this merge has hit "latest"?
I was following along in the expose a service tutorial and am seeing bad performance through the ELB. I'm ready to admit (and hopefully find) a configuration issue with my AWS account (I've looked and not found anything immediately wrong, though), but I'm seeing incredibly bad latency getting HTTP responses from a deployed HTTP server.

I've placed the configuration here for validation. It's composed of a single deployment and a service of type `LoadBalancer`.
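(Roughly, the linked configuration has this shape; the names, image, and replica count here are placeholders, and the actual files are the ones linked for validation.)

```bash
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-http
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-http
  template:
    metadata:
      labels:
        app: hello-http
    spec:
      containers:
      - name: hello-http
        image: nginx:stable
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-http
spec:
  type: LoadBalancer
  selector:
    app: hello-http
  ports:
  - port: 80
    targetPort: 80
EOF

# Eventually shows the ELB hostname under EXTERNAL-IP.
kubectl get svc hello-http
```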
When I apply those files to my cluster I get the appropriate number of pods and a single service, which is eventually populated as:
When I try to hit that URL, though, I get one good, fast response:

And then `curl` times out after a minute of waiting:

When I hit the service from within the cluster I get the expected (great) performance:
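(To be concrete about how I'm measuring the ELB side, it's roughly this loop; the hostname is a placeholder for the address the service reports. The same loop pointed at the Service from inside the cluster returns quickly every time.)

```bash
# Placeholder -- substitute the ELB address reported by `kubectl get svc`.
ELB_HOST=example-1234567890.us-west-2.elb.amazonaws.com
for i in 1 2 3 4 5; do
  curl -s -o /dev/null --max-time 60 \
       -w "attempt ${i}: HTTP %{http_code} in %{time_total}s\n" \
       "http://${ELB_HOST}/"
done
```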
What's odd is that if I scale down to 1 pod:
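(The scale-down is just the usual command; the deployment name here is a placeholder matching the sketch above.)

```bash
kubectl scale deployment hello-http --replicas=1
```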