rlmcpherson / s3gof3r

Fast, concurrent, streaming access to Amazon S3, including gof3r, a CLI. http://godoc.org/github.com/rlmcpherson/s3gof3r
MIT License

Memory Leak s3gof3r/pool.go #129

Open fakarakas opened 6 years ago

fakarakas commented 6 years ago

Hello, I am currently writing a webservice that centralizes uploads to S3 for different applications. This webservice uses s3gof3r for uploads and downloads.

I ran some stress tests with 20 concurrent uploads of 100 MB each and saw very high memory usage.

The issue seems to come from pool.go. Is there any way to tweak the API to reduce memory consumption? Thanks

pprof output

Dropped 76 nodes (cum <= 0.01GB)
      flat  flat%   sum%        cum   cum%
    2.34GB 99.71% 99.71%     2.34GB 99.71%  s3proxy/vendor/github.com/rlmcpherson/s3gof3r.bufferPool.func1 /Users/fatih/go/src/s3proxy/vendor/github.com/rlmcpherson/s3gof3r/pool.go
ROUTINE ======================== s3proxy/vendor/github.com/rlmcpherson/s3gof3r.bufferPool.func1 in /Users/fatih/go/src/s3proxy/vendor/github.com/rlmcpherson/s3gof3r/pool.go
    2.34GB     2.34GB (flat, cum) 99.71% of Total
         .          .     30:   }
         .          .     31:   go func() {
         .          .     32:       q := new(list.List)
         .          .     33:       for {
         .          .     34:           if q.Len() == 0 {
    2.34GB     2.34GB     35:               q.PushFront(qb{when: time.Now(), s: make([]byte, bufsz)})
         .          .     36:               sp.makes++
         .          .     37:           }
         .          .     38:
         .          .     39:           e := q.Front()
         .          .     40:
fakarakas commented 6 years ago

Anyway, the problem was on our side: we were parsing the multipart form containing the file. To decrease the memory usage we tweaked this buffer:

    c.Request.ParseMultipartForm(2 << 20) // 2 MB here; the default is 32 MB in net/http
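
For reference, a minimal sketch of that fix in a plain net/http handler; the handler name, route, and "file" form field are illustrative, not from the thread:

    package main

    import (
        "io"
        "net/http"
    )

    func uploadHandler(w http.ResponseWriter, r *http.Request) {
        // Parts larger than maxMemory spill to temporary files on disk
        // instead of being buffered in RAM (the net/http default is 32 MB).
        const maxMemory = 2 << 20 // 2 MB
        if err := r.ParseMultipartForm(maxMemory); err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        defer r.MultipartForm.RemoveAll() // clean up any temp files created above

        f, _, err := r.FormFile("file")
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        defer f.Close()

        // Stream the part to its destination (e.g. an s3gof3r PutWriter)
        // instead of reading it fully into memory.
        _, _ = io.Copy(io.Discard, f)
    }

    func main() {
        http.HandleFunc("/upload", uploadHandler)
        _ = http.ListenAndServe(":8080", nil)
    }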
abhimanyubabbar commented 6 years ago

@fakarakas: Should you close this issue?

bcongdon commented 6 years ago

For what it's worth, I'm experiencing this same issue.

blmoore commented 5 years ago

I'm seeing this specifically when uploading a large number of small files (thousands). I can reduce the impact by lowering concurrency and part size, but memory use still grows indefinitely.
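
A minimal sketch of that tuning using s3gof3r's exported Config; the bucket name and key are placeholders, and the exact defaults live in s3gof3r.DefaultConfig:

    package main

    import (
        "io"
        "log"
        "os"

        "github.com/rlmcpherson/s3gof3r"
    )

    func main() {
        keys, err := s3gof3r.EnvKeys() // AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
        if err != nil {
            log.Fatal(err)
        }
        b := s3gof3r.New("", keys).Bucket("my-bucket")

        // Copy the defaults, then shrink the knobs that drive buffer-pool growth:
        // roughly Concurrency * PartSize bytes are buffered per active transfer.
        conf := *s3gof3r.DefaultConfig
        conf.Concurrency = 2    // fewer parts in flight than the default
        conf.PartSize = 5 << 20 // 5 MB parts, the S3 multipart minimum

        w, err := b.PutWriter("some/key", nil, &conf)
        if err != nil {
            log.Fatal(err)
        }
        if _, err := io.Copy(w, os.Stdin); err != nil {
            log.Fatal(err)
        }
        if err := w.Close(); err != nil { // Close flushes and completes the upload
            log.Fatal(err)
        }
    }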

MaerF0x0 commented 5 years ago

+1, also seeing this in our production code (screenshot attached).

kd7lxl commented 3 years ago

We were seeing behavior matching all of these symptoms, and it turned out there was a reader we had forgotten to call .Close() on. If you are hitting this, look closely at whether you are leaking unclosed readers or writers.

https://github.com/rlmcpherson/s3gof3r/blob/864ae0bf7cf2e20c0002b7ea17f4d84fec1abc14/s3gof3r.go#L113
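
A minimal sketch of the pattern kd7lxl describes; the download helper and its signature are illustrative, not part of the library:

    package example

    import (
        "io"

        "github.com/rlmcpherson/s3gof3r"
    )

    // download streams an object into dst and always closes the reader.
    func download(b *s3gof3r.Bucket, key string, dst io.Writer) error {
        r, _, err := b.GetReader(key, nil) // nil falls back to the default config
        if err != nil {
            return err
        }
        // Forgetting this Close is exactly the leak described above: the
        // reader's goroutines and buffers are never released.
        defer r.Close()

        _, err = io.Copy(dst, r)
        return err
    }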