Open betabandido opened 5 months ago
Hi @betabandido. Before we flag this as a memory leak, we need to inform you that, as a feature of the Go agent, logs are stored on a transaction until that transaction ends. Transactions that run for a long time will accumulate all sorts of memory, and logs happen to be particularly large. We do it this way because of sampling in the agent: the weight of a sampled transaction is not calculated until it is finished, so rather than risk dropping critical data, we hold onto it. All logs associated with a transaction are kept in memory until that transaction ends. This is currently a design requirement of all agents.
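For illustration, a minimal sketch of the pattern described above, assuming application log forwarding is enabled and using the Transaction.RecordLog API directly (in a real application the log events would usually arrive through a log-forwarding integration rather than explicit calls; the app name and message are placeholders):

package main

import (
	"os"

	"github.com/newrelic/go-agent/v3/newrelic"
)

func main() {
	app, err := newrelic.NewApplication(
		newrelic.ConfigAppName("example-app"), // placeholder app name
		newrelic.ConfigLicense(os.Getenv("NEW_RELIC_LICENSE_KEY")),
		newrelic.ConfigAppLogForwardingEnabled(true),
	)
	if err != nil {
		panic(err)
	}

	txn := app.StartTransaction("long-running-work")
	for i := 0; i < 100000; i++ {
		// Every log event recorded while the transaction is open is held
		// in memory; it is only flushed (or dropped by sampling) at End().
		txn.RecordLog(newrelic.LogData{
			Severity: "info",
			Message:  "work in progress",
		})
	}
	txn.End() // the buffered log events are released here
}

The longer a transaction stays open and the more it logs, the more memory the agent holds on its behalf.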
Thanks @iamemilio! That makes sense. Yet, judging from the data below, it seems we have neither long-running transactions nor an ever-growing number of transactions. Therefore, I would expect StoreLog to consume a certain amount of memory, but not to increase its memory consumption over time.
I collected some data on how long the transactions for this particular app are with:
SELECT average(duration * 1000), percentile(duration * 1000, 99), max(duration * 1000)
FROM Transaction
WHERE appName = '<our app name>'
SINCE 7 days ago
This is what I got:
11.7 ms (avg)
78 ms (99th percentile)
3360 ms (max)
No really long-lasting transactions here.
For the number of transactions, I used:
SELECT count(*)
FROM Transaction
WHERE appName = '<our app name>'
SINCE 7 days ago
TIMESERIES 1 hour
and I got the following chart:
There is a cyclic pattern, but I cannot see an ever-growing number of transactions between April 26th and 29th.
I acknowledge this might be a difficult issue to debug without a reproducible example. But, while we try to come up with one, please let us know how else we can help you gather more data to pinpoint the root cause.
@iamemilio This morning I gathered some more data with pprof. Memory consumption in StoreLog went from ~80 MB to ~100 MB (see diagram below). Eventually the pods will get killed, and the process will start over again.
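For reference, a minimal sketch of one way to expose heap profiles like these, using the standard net/http/pprof package (the listen address is just an example):

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func main() {
	// Serve the profiling endpoints on a separate, internal-only port.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... the rest of the application runs as usual ...
	select {}
}

Heap snapshots can then be captured with go tool pprof http://localhost:6060/debug/pprof/heap, and two snapshots taken on different days can be compared with the -base flag to see which allocation sites grew.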
Hi @betabandido. Thanks for the profiling data and for bringing this to our attention (especially the last couple of updates!). We will continue investigating this, but it may take a while to identify the cause. If a reproducible example is possible, that would be greatly appreciated and would help us pinpoint the issue faster. We'll keep monitoring this thread in the meantime.
Thanks!
Hi @iamemilio @mirackara. We have come up with a reproducible example that we can share. What would be the preferred way to do so?
Hi @betabandido. A public repository would be great if possible.
If you would prefer not to share code publicly, please get in touch with New Relic support and request assistance with the Go agent. A support engineer will handle your case, and we will be able to communicate privately from there. That being said, the issue is already public, so as long as the reproducer is not sensitive, we don't mind it being posted somewhere public, or even inline in this thread.
@mirackara @iamemilio I have created this repo: https://github.com/betabandido/nrml
I've also added some instructions that hopefully will help. Please let me know if you have any doubts or questions.
Hi @betabandido
First off, huge kudos to you + team. This reproducer is very detailed and will be a massive help for us. We don't have an ETA on this yet, but we are tracking this internally and will give the investigation the time it deserves. If we have any updates/questions we'll post them in this thread.
Thank You!
FYI, we're actively working on this issue and will update this when we have a solution.
@nr-swilloughby Thanks for the update! Does this mean you have identified the issue and are in the process of solving it?
Description
We have several Go applications that use the New Relic Go agent, and they work well. But there is a particular one that seems to be experiencing what looks like a memory leak. The agent setup is very similar across these applications, which might suggest the problem is elsewhere, but we have run Go's pprof tool and everything points to an issue with the NR Go agent. We're trying to create a reproducible example, but in the meantime we wanted to open this issue and add as much information to it as possible, in the hope of setting the wheels in motion to find a solution.
Steps to Reproduce
We cannot reproduce the issue outside of this particular application (yet).
Expected Behavior
Memory should not be increasing over time.
NR Diag results
The following diagram shows the memory usage for the application running on our Kubernetes cluster.
As the image shows, memory keeps increasing until either we deploy a new release or the Kubernetes cluster kills the app for consuming too much memory.
Your Environment
A Go application compiled with Go 1.21, running on a Kubernetes cluster (EKS 1.27).
New Relic Go agent version 3.32.0.
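For completeness, a minimal go.mod sketch matching this environment (the module path is a placeholder):

module example.com/our-app

go 1.21

require github.com/newrelic/go-agent/v3 v3.32.0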
Reproduction case
We are working on trying to reproduce the issue.
Additional context
Here you can see two outputs from pprof. The first one is from last Friday (Apr 26th), and the second one is from today (Apr 29th). As you can see, the heap memory in use within the StoreLog method has doubled (from ~40 MB to ~80 MB).

Apr 26th

Apr 29th