Open · amedee-os opened this issue 1 year ago
Nice catch. We're seeing this also.
Yeah, we shipped that workaround, but it's not working for us; I'm still seeing this. I guess I'm going to need to actually start digging a bit deeper.
You can see it's already ticking up at the end. Before adding the library, this service was stable memory-wise.
The drop on 3/29 is from us doubling the container limits.
Are your instances running server-side in a long-running process (like Express or something)?
We might be able to just disable this functionality on the server side. It's meant for clients, which usually have short, independent sessions.
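Disabling it server-side could hinge on a simple environment check. A minimal sketch of the idea (hypothetical function names, not the library's actual code), assuming "server" means "no `window` global", as in Node:

```javascript
// Sketch: skip history accumulation entirely when running server-side.
// `appendHistory` is a hypothetical stand-in for the library's internal
// history append; detection via the absence of a `window` global.
const isServer = typeof window === 'undefined';

function appendHistory(history, item) {
  // In a long-lived server process, don't retain tracked events at all.
  if (isServer) return history;
  return history.concat(item);
}

let history = [];
for (let i = 0; i < 1000; i++) {
  history = appendHistory(history, { event: 'track', i });
}
console.log(history.length); // 0 when run under Node (server-side)
```

In a browser the same code would keep appending, so client behavior would be unchanged.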
Hi, thanks for analyticsjs!
When using it in a long-running server that calls `trackEvent` after every request, the memory usage of the process grows forever. The attached graph shows the process's memory use growing until it is OOM-killed. (On ~1/31 we deployed the fix described below.)

We found that the `state.history` variable of the trackReducer keeps a log of every single trackEvent sent. Adding a single line after our `trackEvent()` call made the memory issue go away.

When reading the code, I can't find where `state.history` is used, or whether it's meant for external code like plugins to read. In our use case we never read from it, so disabling it would be best, or keeping it capped at a small N if it's used for retries.

I noticed the trackEvent history code has the comment `// Todo prevent LARGE arrays`. Could we add some kind of config to set a maximum history length, or to turn it off entirely? Or hard-code a limit like 10k? This might also help memory usage for browser users of analyticsjs with long-running sessions.
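If a cap is acceptable, the reducer-side change could be as small as the sketch below. `maxHistory` is a hypothetical option name (with `0` meaning "disable history entirely"); this is not the library's actual code, just the shape of the fix being proposed:

```javascript
// Sketch of a capped history append (hypothetical names, not the
// library's actual API). maxHistory = 0 disables history entirely;
// otherwise only the newest maxHistory entries are retained.
function appendWithCap(history, item, maxHistory) {
  if (maxHistory === 0) return history; // history disabled
  const next = history.concat(item);
  // Drop the oldest entries once the cap is exceeded.
  return next.length > maxHistory
    ? next.slice(next.length - maxHistory)
    : next;
}

// Simulate 25 trackEvent calls with a cap of 10.
let history = [];
for (let i = 0; i < 25; i++) {
  history = appendWithCap(history, { event: 'track', i }, 10);
}
console.log(history.length); // 10 (capped)
console.log(history[0].i);   // 15 (oldest retained entry)
```

Memory then stays bounded regardless of how many events a long-lived process sends, while still keeping a recent window if something like retries needs it.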
Would you take a PR to fix this? Which approach would you prefer?