whyrusleeping closed this issue 6 years ago
I have the memory profile, CPU profile, and stack dump from a recent out-of-memory panic, if you want to look at those for debugging.
Please send that along. There's a status function you can use to determine the state of the utp package: https://godoc.org/github.com/anacrolix/utp#WriteStatus. If you have a running instance with strange memory use, get the output of that (I wire it up to a status HTTP server that runs on demand on localhost using github.com/anacrolix/envpprof), and send that too.
Assuming everything is working correctly, a spike in uTP activity will cause memory use to jump, and Go's default GC strategy of waiting until the live heap doubles means you won't see a collection reclaim the sync.Pool allocations for some time.
all the pprof files are here: http://mars.i.ipfs.io:8080/ipfs/QmVdkYjq2P9j7Ubr6DGP1Bt3bHFc3Mto84Lkkrwu7rmP4e
I'll work on getting the WriteStatus output soon.
I didn't grab those files while they were available. What's required to obtain them now?
@anacrolix try again, maybe hosting the files on the machine that was having the issues I filed this issue for wasn't the best idea >.>
Okay, got them, thanks.
I've since disabled, and just recently re-enabled, sendBufferPool. Did you notice anything in the interim?
@anacrolix I haven't paid super close attention lately. I'll update and try it out again soon and let you know how it goes.
Is there any update on this?
Take a look at https://github.com/anacrolix/go-libutp.
Reopen if there's fresh information.
I'm having some of my ipfs nodes die of OOM, and looking at the heap profile I see that utp was using 100MB (of my machine's 256MB of RAM). It all appears to have been allocated by sendBufferPool's allocation function. That seems a bit much, and I'm sure some of this is user error on my part. Any ideas how to debug the issue?