[Open] olix0r opened this issue 7 years ago
Just to add some more information about the setup: the k8s (v1.4.8) cluster was started through kops with 1 master + 1 node, the default kubenet networking setup, and the default Debian image.
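For reference, a cluster along those lines could be stood up with something like the following (a minimal sketch, not the exact command used here; the cluster name, state store, and zone are placeholders):

```sh
# Hypothetical kops invocation approximating the setup described above:
# one master (the default), one node, kubenet networking, Kubernetes 1.4.8.
kops create cluster \
  --name=linkerd-test.example.com \
  --state=s3://my-kops-state-store \
  --zones=us-east-1a \
  --node-count=1 \
  --networking=kubenet \
  --kubernetes-version=1.4.8 \
  --yes
```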
I had similar 502s on 0.8.6 the day before. Here's a 502 example:
```
org.jboss.netty.channel.ConnectTimeoutException: connection timed out: /100.96.1.106:4141. Remote Info: Upstream Address: /10.123.45.1:60473, Upstream Client Id: Not Available, Downstream Address: /100.96.1.106:4141, Downstream Client Id: %/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx, Trace Id: 37448a58fff5eeb9.37448a58fff5eeb9<:37448a58fff5eeb9
```
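The exception shows the outgoing linkerd timing out while connecting to the incoming linkerd on the destination pod (100.96.1.106:4141). As a sanity check (a sketch only; the debug pod name is an assumption), it's worth confirming that the destination pod's port 4141 is reachable from a pod on another node:

```sh
# Which pod owns 100.96.1.106, and which node is it on?
kubectl get pods -o wide --all-namespaces | grep 100.96.1.106

# From a pod on a different node (any pod with curl will do), try the
# incoming linkerd port directly to separate network issues from linkerd issues.
kubectl exec some-debug-pod -- curl -sv --max-time 5 http://100.96.1.106:4141/
```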
@aalimovs reports that he's seeing 502s when running a simple test against a linkerd-to-linkerd (l2l) Kubernetes setup on AWS t2.large instances.
Using this configuration: https://github.com/BuoyantIO/linkerd-examples/blob/master/k8s-daemonset/k8s/linkerd-zipkin.yml
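For anyone trying to reproduce, that config can be applied straight from the repo (the URL below is just the raw form of the file linked above):

```sh
# Deploy the linkerd daemonset + zipkin config referenced above.
kubectl apply -f https://raw.githubusercontent.com/BuoyantIO/linkerd-examples/master/k8s-daemonset/k8s/linkerd-zipkin.yml
```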
metrics.json snapshot
What especially stands out here is that the client's `/closes` metric is more than twice that of `/connects` -- I assume that `/closes` counts failed connections and `/connects` does not?

We have disabled telemetry, usage reporting, etc., with no improvement. Why can't we keep connections established?