/cc @bradfitz
@fraenkel, another MaxConnsPerHost issue. (sorry)
(The other is #34978).
As written, the test case has a data race. I changed the test case slightly:
- fixing the data race (transport.MaxConnsPerHost)
- using ForceAttemptHTTP2 to simplify the setup
- counting successful finishes

The failure does still occur, but I can get a few successful runs. There is obviously some bookkeeping issue.
It is the same for 1.13.3 and tip (46aa8354fa)
package issue34941

import (
	"context"
	"crypto/tls"
	"net/http"
	"net/http/httptest"
	"sync"
	"sync/atomic"
	"testing"
	"time"
)

func TestMaxConns(t *testing.T) {
	totalRequests := 300
	allow := make(chan struct{})
	var (
		starts   int64
		finishes int64
	)
	h := func(w http.ResponseWriter, r *http.Request) {
		if !r.ProtoAtLeast(2, 0) {
			t.Errorf("Request is not http/2: %q", r.Proto)
			return
		}
		atomic.AddInt64(&starts, 1)
		<-allow
	}
	s := httptest.NewUnstartedServer(http.HandlerFunc(h))
	s.TLS = &tls.Config{
		NextProtos: []string{"h2"},
	}
	s.StartTLS()
	defer s.Close()

	transport := s.Client().Transport.(*http.Transport)
	// clientConfig := transport.TLSClientConfig
	// transport.TLSClientConfig = nil
	transport.MaxConnsPerHost = 1
	transport.ForceAttemptHTTP2 = true

	// make a request to trigger HTTP/2 autoconfiguration
	// resp, err := s.Client().Get(s.URL)
	// if err == nil {
	// 	resp.Body.Close()
	// }
	// now allow the client to connect to the ad-hoc test server
	// transport.TLSClientConfig.RootCAs = clientConfig.RootCAs

	ctx := context.Background()
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	var wg sync.WaitGroup
	for i := 0; i < totalRequests; i++ {
		req, err := http.NewRequest("GET", s.URL, nil)
		if err != nil {
			t.Fatalf("NewRequest: %s", err)
		}
		wg.Add(1)
		go func() {
			defer wg.Done()
			ctx, cancel := context.WithCancel(ctx)
			defer cancel()
			req = req.WithContext(ctx)
			resp, err := s.Client().Do(req)
			if err != nil {
				return
			}
			resp.Body.Close()
			atomic.AddInt64(&finishes, 1)
		}()
	}

	for i := 0; i < 10; i++ {
		if i == 5 {
			close(allow)
		}
		time.Sleep(100 * time.Millisecond)
		t.Logf("starts=%d finishes=%d", atomic.LoadInt64(&starts), atomic.LoadInt64(&finishes))
	}

	if have, want := atomic.LoadInt64(&starts), int64(totalRequests); have != want {
		t.Errorf("HTTP/2 requests started: %d != %d", have, want)
	}
	if have, want := atomic.LoadInt64(&finishes), int64(totalRequests); have != want {
		t.Errorf("HTTP/2 requests completed: %d != %d", have, want)
	}
}
A fix is coming shortly. We cannot blindly decrement the conn count. We need to only decrement if we have removed the idle connection.
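To illustrate the invariant in isolation (a toy sketch only, not the actual net/http code; the connCounter type and its methods are made up for this example): the per-host count must be decremented only when a connection was actually removed, otherwise concurrent error paths can drive it below zero and trip the underflow panic.

package main

import "fmt"

// connCounter is a toy stand-in for the Transport's per-host bookkeeping.
type connCounter struct {
	count map[string]int  // connections counted against the per-host limit
	idle  map[string]bool // connections currently sitting in the idle pool
}

// removeIdle reports whether key was actually removed from the idle pool.
func (c *connCounter) removeIdle(key string) bool {
	if !c.idle[key] {
		return false
	}
	delete(c.idle, key)
	return true
}

// release decrements the count only when an idle connection was removed.
// Decrementing unconditionally here is the "blind" decrement that underflows.
func (c *connCounter) release(key string) {
	if !c.removeIdle(key) {
		return
	}
	if c.count[key] <= 0 {
		panic("internal error: connCount underflow")
	}
	c.count[key]--
}

func main() {
	c := &connCounter{
		count: map[string]int{"example.com:443": 1},
		idle:  map[string]bool{},
	}
	// Two error paths try to release a connection that never became idle;
	// with the guard, neither decrements and the count stays consistent.
	c.release("example.com:443")
	c.release("example.com:443")
	fmt.Println(c.count["example.com:443"]) // prints 1
}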
Bah. A simple tweak to our existing test for MaxConnsPerHost has uncovered yet another issue. Just change the loop to 300 and tip will hit the same issue reported here. My fix resolves that, but now I have
--- FAIL: TestTransportMaxConnsPerHost (0.06s)
transport_test.go:658: round 1: too many dials (http2): 2 != 1
transport_test.go:661: round 1: too many get connections (http2): 2 != 1
transport_test.go:664: round 1: too many tls handshakes (http2): 2 != 1
All I have determined at this point is that the http2 side starts sending back http.http2noCachedConnError after some time. The test passes when that error doesn't occur, which is not very often.
Change https://golang.org/cl/202087 mentions this issue: net/http: only decrement connection count if we removed a connection
Part of the issue is the connection coordination between http and http2. When using 300 clients, the first 50+ compete to create the connection on the http side before the http2 side is aware of it. There is some glitch (still investigating) where, once the http2 side is aware, it actually causes a second connection. By changing the test case to first do a single client request, the failure rate is greatly reduced, but the problem still occurs. However, it shows that we will always reuse the first connection but may dial/TLS handshake a second. I don't think this will ever be perfect, but I would at least like to better understand what is causing the second connection. We may need to relax the test case slightly.
The http2 server imposes a limit of 250 concurrent streams. Once we reach that number, which does happen when we stampede with 300 requests, there is a point where a new connection is created.
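If someone wants to rule the stream limit in or out when reproducing, one option is to raise the test server's HTTP/2 stream limit above the request count before starting it. This is only a sketch that assumes golang.org/x/net/http2 is available; the 1000-stream value and the newH2Server helper are made up for illustration, not part of the original reproduction.

package issue34941

import (
	"crypto/tls"
	"net/http"
	"net/http/httptest"
	"testing"

	"golang.org/x/net/http2"
)

// newH2Server starts a TLS test server whose HTTP/2 concurrent-stream limit
// is raised above the number of requests in the test, so the default limit
// of 250 streams cannot be the reason a second connection is dialed.
func newH2Server(t *testing.T, h http.HandlerFunc) *httptest.Server {
	t.Helper()
	s := httptest.NewUnstartedServer(h)
	s.TLS = &tls.Config{NextProtos: []string{"h2"}}
	if err := http2.ConfigureServer(s.Config, &http2.Server{MaxConcurrentStreams: 1000}); err != nil {
		t.Fatal(err)
	}
	s.StartTLS()
	return s
}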
There is still a certain chance of crashing with "internal error: connCount underflow" on Go 1.13.4 when the Go client enables HTTP/2.
The "connCount underflow" error did not appear when using Go 1.13.3.
Can you also provide the stack trace? And the potential scenario, if different from the one reported. A new issue would be best.
This test still fails on Go 1.13.4.
The smaller the value of transport.MaxConnsPerHost, the greater the probability that the test program will crash with panic: net/http: internal error: connCount underflow.
panic: net/http: internal error: connCount underflow
goroutine 119 [running]:
net/http.(*Transport).decConnsPerHost(0xc0000e2780, 0x0, 0x0, 0xc0000b4780, 0x5, 0xc00020e010, 0xf, 0x0)
/usr/local/opt/go/libexec/src/net/http/transport.go:1334 +0x604
net/http.(*Transport).roundTrip(0xc0000e2780, 0xc000346a00, 0x0, 0xc000185b08, 0x100e7b8)
/usr/local/opt/go/libexec/src/net/http/transport.go:546 +0x77f
net/http.(*Transport).RoundTrip(0xc0000e2780, 0xc000346a00, 0xc0000e2780, 0x0, 0x0)
/usr/local/opt/go/libexec/src/net/http/roundtrip.go:17 +0x35
net/http.send(0xc000346a00, 0x1593fe0, 0xc0000e2780, 0x0, 0x0, 0x0, 0xc0001be2b0, 0xc0001dbcd0, 0x1, 0x0)
/usr/local/opt/go/libexec/src/net/http/client.go:250 +0x443
net/http.(*Client).send(0xc0000a18f0, 0xc000346a00, 0x0, 0x0, 0x0, 0xc0001be2b0, 0x0, 0x1, 0x10)
/usr/local/opt/go/libexec/src/net/http/client.go:174 +0xfa
net/http.(*Client).do(0xc0000a18f0, 0xc000346a00, 0x0, 0x0, 0x0)
/usr/local/opt/go/libexec/src/net/http/client.go:641 +0x3ce
net/http.(*Client).Do(...)
/usr/local/opt/go/libexec/src/net/http/client.go:509
test.TestMaxConns.func2(0xc0000ab210, 0xc00009db00, 0xc0000a6358, 0xc0000a8480, 0xc0000aa848)
/Volumes/data/code/go/src/test/main_test.go:68 +0x1ad
created by test.TestMaxConns
/Volumes/data/code/go/src/test/main_test.go:63 +0x3e4
The fix only went into 1.14. The powers that be would have to decide if it gets backported.
Why isn't this backported? Maybe we need to ping someone? It's clearly a bug and regression in 1.13. 1.14 should be released fairly soon but that's a major release with tons of changes (the runtime in particular) that may come with its own bag of problems.
cc @toothrot @dmitshur re backport
@gopherbot please backport to 1.13
This is a regression, as documented in the issue comments.
Backport issue(s) opened: #36583 (for 1.13).
Remember to create the cherry-pick CL(s) as soon as the patch is submitted to master, according to https://golang.org/wiki/MinorReleases.
I fear that this issue is not resolved completely: in Go version go1.13.6 darwin/amd64 I still get
panic: net/http: internal error: connCount underflow
goroutine 247 [running]:
net/http.(*Transport).decConnsPerHost(0xc0000963c0, 0x0, 0x0, 0xc00038caf0, 0x5, 0xc0000f9660, 0x11, 0x0)
/usr/local/go/src/net/http/transport.go:1334 +0x604
net/http.(*Transport).roundTrip(0xc0000963c0, 0xc000312e00, 0xc000374600, 0xc0001e040c, 0xc0001e0450)
/usr/local/go/src/net/http/transport.go:546 +0x77f
net/http.(*Transport).RoundTrip(0xc0000963c0, 0xc000312e00, 0xc0000963c0, 0xbf8140581cdfd498, 0xe2c22493c)
/usr/local/go/src/net/http/roundtrip.go:17 +0x35
net/http.send(0xc000312c00, 0x1356060, 0xc0000963c0, 0xbf8140581cdfd498, 0xe2c22493c, 0x15254e0, 0xc0000fc1f8, 0xbf8140581cdfd498, 0x1, 0x0)
/usr/local/go/src/net/http/client.go:250 +0x443
net/http.(*Client).send(0xc000082c00, 0xc000312c00, 0xbf8140581cdfd498, 0xe2c22493c, 0x15254e0, 0xc0000fc1f8, 0x0, 0x1, 0x0)
/usr/local/go/src/net/http/client.go:174 +0xfa
net/http.(*Client).do(0xc000082c00, 0xc000312c00, 0x0, 0x0, 0x0)
/usr/local/go/src/net/http/client.go:641 +0x3ce
net/http.(*Client).Do(...)
/usr/local/go/src/net/http/client.go:509
main.main.func2.1(0xc000082c00, 0xc0001c94c0, 0xc000312c00)
/Users/michaeldorner/Code/Go/src/main.go:80 +0x4d
created by main.main.func2
/Users/michaeldorner/Code/Go/src/main.go:79 +0x8c
exit status 2
Lines 1332 and 1333 in https://golang.org/src/net/http/transport.go explain that this "Shouldn't happen, but if it does, the counting is buggy and could easily lead to a silent deadlock, so report the problem loudly."
Which I hereby do. :)
Please let me know if I should create a new issue.
@michaeldorner This has not been backported to 1.13 yet.
@fraenkel Oh sorry, my bad. Thanks for the fast reply.
I'm still getting internal error: connCount underflow sometimes on 1.14.1.
@andybalholm can you attach the stack trace? Is there anything special in your usage?
Here is the stack trace:
panic: net/http: internal error: connCount underflow
goroutine 2114 [running]:
net/http.(*Transport).decConnsPerHost(0x1094ca0, 0x0, 0x0, 0xc001f110b5, 0x4, 0xc001a27320, 0x14, 0x0)
/usr/local/go/src/net/http/transport.go:1391 +0x590
net/http.(*persistConn).closeLocked(0xc0000cd320, 0xb8a620, 0xc000096b60)
/usr/local/go/src/net/http/transport.go:2584 +0xf5
net/http.(*persistConn).close(0xc0000cd320, 0xb8a620, 0xc000096b60)
/usr/local/go/src/net/http/transport.go:2574 +0x83
net/http.(*persistConn).readLoop.func1(0xc0000cd320, 0xc0073b4d60)
/usr/local/go/src/net/http/transport.go:1946 +0x41
net/http.(*persistConn).readLoop(0xc0000cd320)
/usr/local/go/src/net/http/transport.go:2122 +0x12ae
created by net/http.(*Transport).dialConn
/usr/local/go/src/net/http/transport.go:1647 +0xc56
The usage is in a forward proxy server (github.com/andybalholm/redwood). We have about 2000 users running through the server on which I have been watching these errors most closely. So the usage is heavy, and probably about as close to random as you are likely to find.
I thought that switching to two Transports (one for HTTP/1.1 and one for HTTP/2) instead of one http.Transport that was configured to automatically switch protocols would get rid of the panics. But it didn't seem to help. (So it seems that the panic doesn't depend on having HTTP/2 enabled on the Transport.)
I had Transport.MaxConnsPerHost set to 8. Setting it to 0 (unlimited) made the panics go away.
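For reference, the two-transport split described above might look roughly like this; the field values are illustrative rather than Redwood's actual configuration. Disabling HTTP/2 on the plain http.Transport is done by giving it a non-nil, empty TLSNextProto map, and the HTTP/2 side gets its own http2.Transport from golang.org/x/net/http2:

package transportsplit

import (
	"crypto/tls"
	"net/http"

	"golang.org/x/net/http2"
)

// newTransports returns separate round trippers for HTTP/1.1 and HTTP/2
// traffic. MaxConnsPerHost = 0 (unlimited) is the workaround mentioned
// above; 8 was the value that still triggered the panics.
func newTransports() (*http.Transport, *http2.Transport) {
	h1 := &http.Transport{
		// A non-nil, empty map disables HTTP/2 on this transport.
		TLSNextProto:    map[string]func(string, *tls.Conn) http.RoundTripper{},
		MaxConnsPerHost: 0,
	}
	h2t := &http2.Transport{} // dedicated HTTP/2 transport
	return h1, h2t
}

The proxy would then pick one of the two per request based on the negotiated protocol; that dispatch is outside this sketch.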
@andybalholm Thanks for the stacktrace. Please open a separate issue to track this. It is a different path than the one fixed.
What version of Go are you using (go version)?
Does this issue reproduce with the latest release?
Yes: The test hangs with Go 1.13. The test panics with tip.
What operating system and processor architecture are you using (go env)?
What did you do?
Using an http.Client with an http.Transport configured to use http/2 and with MaxConnsPerHost = 1, I started a large number of concurrent http/2 requests: more than the server would allow on a single TCP connection.

In the test environment, I did this by having the server delay its responses until I knew a large number of requests were either active on the connection or waiting for permission to use the connection, at which point I allowed the http.Handler to respond.

What did you expect to see?
I expected that every request I started would eventually be seen by the server, and that the client would receive the server's response.
What did you see instead?
In Go 1.11 where the default behavior is to use a single TCP connection for each http/2 authority, the requests queue up and once I allow the handler to return they are all processed. That's fine in this context.
In Go 1.12, where the http.Transport is correctly able to use more than one TCP connection for each http/2 authority (but doesn't yet know how to use MaxConnsPerHost), it passes all of the requests on to the server immediately. That's also fine in this context.

In Go 1.13, the server only sees the first 249 requests. The others appear stuck indefinitely. I see that as a bug.
In tip, the Transport panics. That's a second bug.
And for context, the passing results with Go 1.11 and Go 1.12: