owntracks / recorder

Store and access data published by OwnTracks apps

High memory usage with views #464

Closed: tobru closed this issue 5 months ago

tobru commented 6 months ago

Recently, I discovered that the memory usage of the recorder grows considerably when using views. I have a few views, and when I open them and reload the browser a few times, memory usage can grow to over 1 GiB, at which point the OOM killer kicks in because my Kubernetes Pod has that as its memory limit. The usage also doesn't go down on its own; it only drops when the container is restarted.

Is this a known issue? How can I help to debug that?

jpmens commented 6 months ago

This is not a known issue, so thanks for the report.

Obviously this should not be happening. It would be very helpful if you could at least narrow down how considerable "considerably" is.

If you restart the recorder and look at its memory consumption: how does it change after reading a view once? After a second time? After a tenth time?
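
For reference, here is a minimal sketch (an assumption on my part, not code that is in the Recorder) of one way to sample this on Linux: read VmRSS from /proc/self/status and log it before and after each view request.

#include <stdio.h>
#include <string.h>

/* Print the current resident set size (VmRSS) of this process on Linux. */
static void print_rss(const char *label)
{
    FILE *fp = fopen("/proc/self/status", "r");
    char line[256];

    if (fp == NULL)
        return;
    while (fgets(line, sizeof(line), fp) != NULL) {
        if (strncmp(line, "VmRSS:", 6) == 0) {
            /* e.g. "after view 3: VmRSS:   51234 kB" */
            fprintf(stderr, "%s: %s", label, line);
            break;
        }
    }
    fclose(fp);
}

int main(void)
{
    print_rss("startup");
    return 0;
}

Calling something like print_rss() after each refresh would show whether the growth is per-request and roughly linear.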

tobru commented 6 months ago

This is what I could come up with right now:

[Screenshot from 2024-05-15: recorder memory usage over time]

After starting the recorder, memory usage is around 50 MiB. After loading two different views a few times (~10 times), it ends up at ~450 MiB.

jpmens commented 5 months ago

Can you please describe how large these views are, i.e. approximately how many points they contain?

There is leakage, but for a view which loads this GeoJSON file of 154,071 bytes, I'm seeing a total of 44 bytes leaked in the whole ot-recorder program after one invocation, which isn't too bad and is nowhere near what you are reporting.

However, this increases massively from the second refresh of the view onwards. The number in the first column indicates the number of refreshes of the view in the Web browser:

1: 44 leaks for 1872 total leaked bytes.
2: 219891 leaks for 8591664 total leaked bytes.
3: 439766 leaks for 17182592 total leaked bytes.
4: 659642 leaks for 25773456 total leaked bytes.
10: 1319365 leaks for 51550080 total leaked bytes.

Determined using

$ leaks -atExit -- ./ot-recorder -S JP --host 127.0.0.1 --port 1883 --http-host 127.0.0.1 --http-port 8083 'owntracks/#'

and pkill ot-recorder to stop the process.

I'm running at 1/4 steam at the moment, so please don't hold your breath for now, but we'll work on this: it's most definitely a bug.

jpmens commented 5 months ago

We've found the large leak, which was in the view construction. After 50 refreshes of the Mexican view I'm now at

50: 1514 leaks for 59888 total leaked bytes.

I need to find better tooling, but will commit these changes now.
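
For illustration only, a minimal sketch of the class of bug involved (hypothetical names, not the Recorder's actual code): a buffer is built for every view request, handed to the HTTP layer, and never released, so memory grows with every browser refresh. The fix is to free the per-request buffer once it has been sent.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for assembling a view's GeoJSON payload;
 * returns a heap-allocated string the caller owns. */
static char *assemble_view_json(const char *viewname)
{
    char *buf = malloc(256);

    if (buf != NULL)
        snprintf(buf, 256, "{\"view\":\"%s\",\"features\":[]}", viewname);
    return buf;
}

/* Hypothetical stand-in for handing the payload to the HTTP layer. */
static void http_send(const char *body)
{
    puts(body);
}

/* Leaky: the per-request buffer is never released. */
static void serve_view_leaky(const char *viewname)
{
    char *json = assemble_view_json(viewname);

    if (json != NULL)
        http_send(json);
    /* BUG: json is not freed, so every refresh leaks the whole payload */
}

/* Fixed: free the buffer once it has been sent. */
static void serve_view_fixed(const char *viewname)
{
    char *json = assemble_view_json(viewname);

    if (json != NULL) {
        http_send(json);
        free(json);
    }
}

int main(void)
{
    for (int i = 0; i < 10; i++)
        serve_view_leaky("mexico");   /* leaks ten buffers */
    serve_view_fixed("mexico");       /* leaks nothing */
    return 0;
}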

jpmens commented 5 months ago

I think we've now got it. After 50 refreshes:

50: 14 leaks for 688 total leaked bytes.

jpmens commented 5 months ago

So just to be sure, I've subjected rendering of these views to valgrind, and the result is as expected.

Firstly, OwnTracks Recorder will never win a prize for a low count of allocations, but we knew that because of the JSON routines we use.

==9138== HEAP SUMMARY:
==9138==     in use at exit: 2,500 bytes in 36 blocks
==9138==   total heap usage: 8,719,020 allocs, 8,718,984 frees, 357,395,375 bytes allocated

I've checked that the leaks we do have at program exit are valid, i.e. correspond to small buffers which are allocated to hold, say, path names, and aren't explicitly freed. As such, the result is legitimate:

==9138== LEAK SUMMARY:
==9138==    definitely lost: 174 bytes in 11 blocks
==9138==    indirectly lost: 300 bytes in 3 blocks
==9138==      possibly lost: 0 bytes in 0 blocks
==9138==    still reachable: 2,026 bytes in 22 blocks
==9138==         suppressed: 0 bytes in 0 blocks

Note that these are results after loading Recorder, refreshing the "Mexico" view 20 times, and then exiting the Recorder.
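
To make the distinction concrete, a minimal sketch (illustrative only, not the Recorder's code) of what such a legitimate at-exit "leak" looks like: a small buffer allocated once at startup, kept for the lifetime of the process, and never explicitly freed. valgrind reports it at exit, but unlike the view bug above it does not grow with the number of refreshes.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char *storage_dir;   /* set once, used until the process exits */

static void init_storage_dir(const char *base)
{
    /* reported by valgrind as "still reachable" at exit, which is harmless */
    storage_dir = strdup(base);
}

int main(void)
{
    init_storage_dir("/var/spool/owntracks/recorder/store");
    printf("using storage at %s\n", storage_dir);
    return 0;   /* the OS reclaims the few bytes; there is no runtime growth */
}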