soheilhy / cmux

Connection multiplexer for GoLang: serve different services on the same port!
Apache License 2.0

cmux leaks memory over long term use #35

Closed chandradeepak closed 7 years ago

chandradeepak commented 7 years ago

The connection multiplexer leaks memory under long-term use. I ran pprof, and this is what I found: cmux.serve -> HTTP2MatchHeaderFieldSendSettings -> matchHTTP2Field -> ReadFrame -> readFrameHeader -> io.ReadFull -> ReadAtLeast -> bufferedReader.Read -> s.buffer.Write -> bytes.(*Buffer).grow -> bytes.makeSlice

I have seen this slice keep increasing, and this is in-use space I am talking about, which never gets freed. That causes our application to use too much memory, and after a couple of days it eventually shuts down.

Is there something we can do to address this?

I was thinking that once we determine whether we have that SETTINGS frame or not, we should release this buffer. But for some reason that doesn't happen.

Thanks

puellanivis commented 7 years ago

I was looking into this before. bytes.Buffer never recovers any memory from data that has already been read. As a result, if you keep one lying around and use it even for small writes, over time it will eventually get really big.
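
A quick standalone demo of that behavior (my own sketch, not code from cmux): even after the buffer is fully drained, its capacity stays at the high-water mark.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

func main() {
	var buf bytes.Buffer

	// Write 1 MiB through the buffer, then drain all of it.
	buf.Write(make([]byte, 1<<20))
	io.Copy(io.Discard, &buf)

	// The contents are gone, but the underlying slice is not:
	// Len is 0 while Cap is still on the order of 1 MiB.
	fmt.Printf("len=%d cap=%d\n", buf.Len(), buf.Cap())
}
```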

I had written a different Buffer class to deal with “once I read it, I never need to care about it ever again”.

chandradeepak commented 7 years ago

Thanks for your answer, but I was looking at the pull request you had, and I am on the latest branch. The bufferedReader that cmux uses also wraps a bytes.Buffer, and since the buffer reset only sets s.bufferRead = 0, we keep reusing the same buffer.

Say we start our server with Mux.MatchWithWriters(cmux.HTTP2MatchHeaderFieldSendSettings("content-type", "application/grpc")). If the client sends large metadata when making the connection request, but never sends the application/grpc content-type, then bufferedReader.buffer keeps reading incoming frames, looking for the application/grpc header, and never finds it.

If we have many of these connections, we quickly fill memory, and that memory is never freed, which in turn leads to an application crash or out-of-memory error.

I have seen this happen.
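
For context, the setup being described is essentially the standard cmux + gRPC pattern; a minimal sketch (the port, service registration, and error handling are placeholders):

```go
package main

import (
	"net"
	"net/http"

	"github.com/soheilhy/cmux"
	"google.golang.org/grpc"
)

func main() {
	l, err := net.Listen("tcp", ":50051") // placeholder port
	if err != nil {
		panic(err)
	}

	m := cmux.New(l)

	// Sniff HTTP/2 frames until a HEADERS frame carrying
	// content-type: application/grpc is seen (or never is).
	grpcL := m.MatchWithWriters(
		cmux.HTTP2MatchHeaderFieldSendSettings("content-type", "application/grpc"))
	httpL := m.Match(cmux.Any())

	grpcS := grpc.NewServer() // register gRPC services here
	httpS := &http.Server{}

	go grpcS.Serve(grpcL)
	go httpS.Serve(httpL)

	_ = m.Serve()
}
```

The issue described above is about what happens while that sniffing step never completes.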

soheilhy commented 7 years ago

Thanks for the report. I think it's because of a lot of long-lived, parallel connections. Can you please try the following patch to see if it fixes the issue?

This is hacky and results in one more allocation in the hot path. If it fixes your issue, I'll rewrite the buffer. The problem is that bytes.Buffer never releases the underlying slice, and I'll have to reimplement it internally to be able to release the slice.

diff --git a/buffer.go b/buffer.go
index d05b199..8546708 100644
--- a/buffer.go
+++ b/buffer.go
@@ -26,7 +26,7 @@ import (
 // without allocating.
 type bufferedReader struct {
        source     io.Reader
-       buffer     bytes.Buffer
+       buffer     *bytes.Buffer
        bufferRead int
        bufferSize int
        sniffing   bool
@@ -42,6 +42,10 @@ func (s *bufferedReader) Read(p []byte) (int, error) {
                bn := copy(p, s.buffer.Bytes()[s.bufferRead:s.bufferSize])
                s.bufferRead += bn
                return bn, s.lastErr
+       } else if !s.sniffing && s.buffer != nil {
+               // If we are not sniffing and we don't need the buffer anymore,
+               // release the buffer.
+               s.buffer = nil
        }

        // If there is nothing more to return in the sniffed buffer, read from the
diff --git a/cmux.go b/cmux.go
index 9de6b0a..7407cca 100644
--- a/cmux.go
+++ b/cmux.go
@@ -15,6 +15,7 @@
 package cmux

 import (
+       "bytes"
        "fmt"
        "io"
        "net"
@@ -239,9 +240,10 @@ type MuxConn struct {
 }

 func newMuxConn(c net.Conn) *MuxConn {
+       buf := bytes.NewBuffer(nil)
        return &MuxConn{
                Conn: c,
-               buf:  bufferedReader{source: c},
+               buf:  bufferedReader{source: c, buffer: buf},
        }
 }

chandradeepak commented 7 years ago

Thanks for the fix. That helps a bit, but I still see memory growth. So the code `} else if !s.sniffing && s.buffer != nil {`

soheilhy commented 7 years ago

Unless the frame is corrupted, I can't quite understand what's going on in your case.

We can certainly put a limit on the maximum sniff size, but before doing so, I want to know how large the buffer can get in your case. Do you have any stats to share on the 50th and 99th percentiles? Thanks.

chandradeepak commented 7 years ago

This happens for a gRPC connection. I have a gRPC server added as a listener to my cmux. Java clients have to send the application/grpc header when they connect.

If a client opens a connection, grpc.Dial will succeed; it then opens a stream and can send its actual payloads on it. So if a Java client doesn't send the "application/grpc" header and keeps sending data, how long will cmux keep sniffing the incoming data?

We need some way to control the maximum sniff size, or a timeout before we error out and close the client connection.

I am able to simulate a case in which my buffer gets close to 2GB or 3GB: I open 1000 connections, each sending 100K messages of 1K each. Of course none of the clients could connect to the internal gRPC server, since none of them set the proper header.

If the client detects that it is not getting any response from the server and closes its connection, we are fine; with the new fix we reclaim the memory. But if the client doesn't close the connection, the server's memory usage keeps growing and it can eventually crash.
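
To make the failure mode concrete, here is a rough sketch of such a misbehaving client (my own illustration, not from the thread; plaintext connection and a hypothetical address, no TLS or real gRPC): it completes the HTTP/2 preface and SETTINGS, sends a content-type the matcher will never accept, and then streams DATA frames indefinitely.

```go
package main

import (
	"bytes"
	"net"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/hpack"
)

func main() {
	conn, err := net.Dial("tcp", "localhost:50051") // hypothetical cmux port
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Client connection preface, then SETTINGS.
	conn.Write([]byte(http2.ClientPreface))
	framer := http2.NewFramer(conn, conn)
	framer.WriteSettings()

	// HEADERS with a content-type the matcher will never accept.
	var hbuf bytes.Buffer
	enc := hpack.NewEncoder(&hbuf)
	enc.WriteField(hpack.HeaderField{Name: ":method", Value: "POST"})
	enc.WriteField(hpack.HeaderField{Name: ":path", Value: "/"})
	enc.WriteField(hpack.HeaderField{Name: ":scheme", Value: "http"})
	enc.WriteField(hpack.HeaderField{Name: ":authority", Value: "localhost"})
	enc.WriteField(hpack.HeaderField{Name: "content-type", Value: "text/plain"})
	framer.WriteHeaders(http2.HeadersFrameParam{
		StreamID:      1,
		BlockFragment: hbuf.Bytes(),
		EndHeaders:    true,
	})

	// Keep the sniffer busy: the matcher buffers everything it reads here.
	payload := make([]byte, 1024)
	for {
		if err := framer.WriteData(1, false, payload); err != nil {
			return
		}
	}
}
```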

soheilhy commented 7 years ago

Thanks for the explanation. Sounds very plausible, and we should guard against such misbehaving clients. I'll send you a fix hopefully tomorrow.

chandradeepak commented 7 years ago

Thanks. In the serve code, if "matched" is never set to true there is no bound on how much data we read. If we broke out of the for loops, either via a timeout or a maximum size, that would be good: it would limit how much we read from a client before closing its connection, and only breaking out of the loops reclaims the memory, right?

This would prevent a rogue client from spamming or bringing the server down, since cmux is the first, outward-facing component that clients hit.

Also, is there a way we can tell the client that it is not sending the correct header and ask it to send it? We could have a configurable handler function so the error is sent back to the client and it can take the necessary action. Is there a way to do that?

My understanding of HTTP/2 is that the header frames are sent before the payload, so the matching should happen fairly quickly; hence the timeout option. For reference, here is the serve code I am looking at:

```go
func (m *cMux) serve(c net.Conn, donec <-chan struct{}, wg *sync.WaitGroup) {
	defer wg.Done()

	muc := newMuxConn(c)
	if m.readTimeout > noTimeout {
		_ = c.SetReadDeadline(time.Now().Add(m.readTimeout))
	}
	for _, sl := range m.sls {
		for _, s := range sl.ss {
			matched := s(muc.Conn, muc.startSniffing())
			if matched {
				fmt.Println("done sniffing called")
				muc.doneSniffing()
				if m.readTimeout > noTimeout {
					_ = c.SetReadDeadline(time.Time{})
				}
				select {
				case sl.l.connc <- muc:
				case <-donec:
					_ = c.Close()
				}
				return
			}
		}
	}
	fmt.Println("close called")
	_ = c.Close()
	// muc.buf.buffer.Reset()
	// muc.buf.Close()
	err := ErrNotMatched{c: c}
	if !m.handleErr(err) {
		fmt.Println("root closed")
		_ = m.root.Close()
	}
}
```

soheilhy commented 7 years ago

Unless you have a client that is attacking your server, I expect the loop to break very quickly. SETTINGS and HEADERS frames should be among the very first ones, and the connection payload should be very small.

There is also a new SetReadTimeout in cmux which you can use to set the read timeout. I'll add a maximum sniffing threshold to guard against your case, but a gRPC client shouldn't really cause the issue you're describing.
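
For reference, wiring in that knob looks roughly like this (a sketch; the 5-second value is arbitrary):

```go
package main

import (
	"net"
	"time"

	"github.com/soheilhy/cmux"
)

func main() {
	l, err := net.Listen("tcp", ":50051") // placeholder port
	if err != nil {
		panic(err)
	}

	m := cmux.New(l)
	// Apply a read deadline to connections that are still being matched.
	m.SetReadTimeout(5 * time.Second)

	grpcL := m.MatchWithWriters(
		cmux.HTTP2MatchHeaderFieldSendSettings("content-type", "application/grpc"))
	httpL := m.Match(cmux.Any())

	// Hand grpcL/httpL to the gRPC and HTTP servers as usual, then:
	_, _ = grpcL, httpL
	_ = m.Serve()
}
```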

> Also, is there a way we can tell the client that it is not sending the correct header and ask it to send it?

I'm not sure how you want to do that, but you can write your own MatchWriter instead of a Matcher to write to the client.

chandradeepak commented 7 years ago

What does SetReadTimeout do? It will only time out if we don't get any data from the connection, right? Does that help in this case?

So if I have my own MatchWriter and the match condition is not met, would I need a function like matchHTTP2Field that writes to the writer whatever we need to send to the client?

```go
func matchHTTP2Field(w io.Writer, r io.Reader, name, value string) (matched bool) {
	w.Write([]byte("message to be sent to client"))
	// ...
}
```

so that it is returned to the client?

soheilhy commented 7 years ago

> Does that help in this case?

I'm not really sure what your case is. :-) If your clients write a lot of data continuously, then no. If your clients send huge frames but pause to receive some response, then yes.

> so that it is returned to the client?

Yes, but I don't really recommend using it unless there is an absolute need. It can break the protocol and it's very unsafe.
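
For completeness, the shape such a MatchWriter wrapper could take is roughly the following (a hypothetical helper of my own, subject to the caveat above that writing raw bytes mid-handshake breaks the HTTP/2 framing):

```go
// Package muxreject is a hypothetical home for this sketch.
package muxreject

import (
	"io"

	"github.com/soheilhy/cmux"
)

// WithRejectNotice wraps an existing MatchWriter; when the match fails, it
// writes a short notice back to the client before cmux closes the connection.
// As noted above, the peer will most likely treat this as a protocol error.
func WithRejectNotice(mw cmux.MatchWriter, notice string) cmux.MatchWriter {
	return func(w io.Writer, r io.Reader) bool {
		if mw(w, r) {
			return true
		}
		_, _ = w.Write([]byte(notice))
		return false
	}
}
```

It would be plugged in via m.MatchWithWriters(WithRejectNotice(...)), but again, only if the protocol breakage is acceptable.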

chandradeepak commented 7 years ago

Yes, our case is one where clients send a lot of data continuously.

chandradeepak commented 7 years ago

One question on how matchHTTP2Field is processed. I was looking at the HTTP/2 spec. I understand the *http2.HeadersFrame case, where we get the header frame and expect our headers to be present in it; if they are not, we return false. The other thing I observed is that I was also getting some DATA frames, which can technically fill the buffer before the connection is matched.

So is it safe to assume that we should get HEADERS frames carrying the required header before the application can start sending data? If so, can we add a case to the switch below like

case *http2.DataFrame: return false

Please let me know

```go
func matchHTTP2Field(w io.Writer, r io.Reader, name, value string) (matched bool) {
	if !hasHTTP2Preface(r) {
		return false
	}

	framer := http2.NewFramer(w, r)
	hdec := hpack.NewDecoder(uint32(4<<10), func(hf hpack.HeaderField) {
		if hf.Name == name && hf.Value == value {
			matched = true
		}
	})
	for {
		f, err := framer.ReadFrame()
		if err != nil {
			fmt.Println(err)
			return false
		}

		switch f := f.(type) {
		case *http2.SettingsFrame:
			if err := framer.WriteSettings(); err != nil {
				return false
			}
		case *http2.HeadersFrame:
			if _, err := hdec.Write(f.HeaderBlockFragment()); err != nil {
				return false
			}
			if matched {
				return true
			}
			if f.FrameHeader.Flags&http2.FlagHeadersEndHeaders != 0 {
				return false
			}
		case *http2.DataFrame:
			return false
		}
	}
}
```

chandradeepak commented 7 years ago

Any updates on the fix for thresholding?

soheilhy commented 7 years ago

I haven't had the time to work on the issue, but it just occurred to me that the leak may actually be because of invalid logic in the matcher: we are basically not processing CONTINUATION frames properly.

Can you please try the following patch? I have a strong hunch that it should fix your issue.

diff --git a/matchers.go b/matchers.go
index 2e7428f..07cf31c 100644
--- a/matchers.go
+++ b/matchers.go
@@ -145,11 +145,7 @@ func matchHTTP2Field(w io.Writer, r io.Reader, name, value string) (matched bool
        }

        framer := http2.NewFramer(w, r)
-       hdec := hpack.NewDecoder(uint32(4<<10), func(hf hpack.HeaderField) {
-               if hf.Name == name && hf.Value == value {
-                       matched = true
-               }
-       })
+       framer.ReadMetaHeaders = hpack.NewDecoder(maxHTTPRead, nil)
        for {
                f, err := framer.ReadFrame()
                if err != nil {
@@ -161,17 +157,15 @@ func matchHTTP2Field(w io.Writer, r io.Reader, name, value string) (matched bool
                        if err := framer.WriteSettings(); err != nil {
                                return false
                        }
-               case *http2.HeadersFrame:
-                       if _, err := hdec.Write(f.HeaderBlockFragment()); err != nil {
-                               return false
-                       }
-                       if matched {
-                               return true
-                       }
-
-                       if f.FrameHeader.Flags&http2.FlagHeadersEndHeaders != 0 {
-                               return false
+               case *http2.GoAwayFrame:
+                       return false
+               case *http2.MetaHeadersFrame:
+                       for _, f := range f.RegularFields() {
+                               if f.Name == name && f.Value == value {
+                                       return true
+                               }
                        }
+                       return false
                }
        }
 }

chandradeepak commented 7 years ago

With the new code I see the happy path accumulating memory that is not getting released. I will do more tests and let you know.

soheilhy commented 7 years ago

Thank you and I'll try to send you the limit patch over the weekend. :-)

chandradeepak commented 7 years ago

Hello soheilhy, I made a small client and server to try to reproduce the behavior.

https://github.com/chandradeepak/memgrowth

It has a client and a server. As you can see, I am simulating the server with grpcl := m.Match(cmux.HTTP2HeaderField("content-type", "application/grpc1"))

and the client doesn't send application/grpc.

I am also starting pprof, so you can get a pprof profile as well.
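
The pprof endpoint in the server presumably amounts to something like this (a sketch; the port matches the command below):

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* on http.DefaultServeMux
)

func main() {
	// Expose the profiling endpoints queried by the pprof command below.
	go func() {
		log.Println(http.ListenAndServe("localhost:8000", nil))
	}()

	// ... start the cmux/gRPC server here and block ...
	select {}
}
```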

You can run pprof in the server directory as

go tool pprof --inuse_space server http://localhost:8000/debug/pprof/heap?debug=1

and then type top at the pprof prompt.

It will print something like the output below. If the clients are not able to connect, they should just close the connection, not increase memory. Please let me know if you can reproduce it.

Thanks for your help.

```
526.02MB of 528.02MB total (99.62%)
Dropped 20 nodes (cum <= 2.64MB)
Showing top 10 nodes out of 14 (cum >= 519.41MB)
      flat  flat%   sum%        cum   cum%
  520.94MB 98.66% 98.66%   520.94MB 98.66%  bytes.makeSlice
    5.08MB  0.96% 99.62%     5.08MB  0.96%  golang.org/x/net/http2.NewFramer.func1
         0     0% 99.62%   520.94MB 98.66%  bytes.(*Buffer).Write
         0     0% 99.62%   520.94MB 98.66%  bytes.(*Buffer).grow
         0     0% 99.62%   514.33MB 97.41%  github.com/soheilhy/cmux.(*bufferedReader).Read
         0     0% 99.62%   527.02MB 99.81%  github.com/soheilhy/cmux.(*cMux).serve
         0     0% 99.62%   527.02MB 99.81%  github.com/soheilhy/cmux.HTTP2HeaderField.func1
         0     0% 99.62%   527.02MB 99.81%  github.com/soheilhy/cmux.matchHTTP2Field
         0     0% 99.62%   527.02MB 99.81%  github.com/soheilhy/cmux.matchersToMatchWriters.func1
         0     0% 99.62%   519.41MB 98.37%  golang.org/x/net/http2.(*Framer).ReadFrame
```

soheilhy commented 7 years ago

Thanks for the testcase. Well, it's a critical bug in the cmux matcher, not really fixable by limiting the size.

Please try dev/fix-mem-grow to see if it fixes the problem on your server. AFAICT, it fixes the testcase you provided.

chandradeepak commented 7 years ago

Thanks Soheilhy, that fixed the negative test case; I see no memory growth there. I am testing the positive scenario, where I still see growth, but that growth is temporary and is garbage-collected as soon as clients disconnect or stop sending data. Still, it is possible to bring the server down under load, with too many connections sending too much data, due to out-of-memory.

Do you think this memory growth because of the slice is normal?

And I see this is not directly related to cmux; it is coming from elsewhere. Basically I have a token that is 1MB in size and is sent with the connection context. I make 1000 of those requests, so 1GB of memory growth is expected, and it is not coming from cmux: after cmux forwards the connection, the actual gRPC server is the one reading that data, so it has nothing to do with cmux.

I attached the pprof screenshot for your review. Please let me know your thoughts.

```
Showing top 10 nodes out of 61 (cum >= 29.55MB)
      flat  flat%   sum%        cum   cum%
 1304.40MB 60.36% 60.36%  1304.40MB 60.36%  bytes.makeSlice
  612.95MB 28.36% 88.72%   871.26MB 40.32%  golang.org/x/net/http2/hpack.(*Decoder).readString
   54.14MB  2.51% 91.23%    54.14MB  2.51%  crypto/tls.(*block).reserve
```

[screenshot: pprof heap profile, 2016-09-25 1:01 am]

soheilhy commented 7 years ago

AFAICT, this is normal but we can optimize further by releasing the buffer when we don't need it.

I stacked a small commit on top of fix-mem-grow https://github.com/soheilhy/cmux/commits/dev/fix-mem-grow

Can you please try this one for your load test? I think this should make it much more resilient.

chandradeepak commented 7 years ago

I tried your fix, but I have never seen the function called. I am trying to understand when that happens; maybe it is not related to cmux, since I see growth elsewhere according to pprof. But the other fix works; I guess we can push that to master.

soheilhy commented 7 years ago

Yeah it was broken. I just pushed the fixed version.

soheilhy commented 7 years ago

This is merged into master. Please reopen if your issue isn't fixed on master.