waj opened this issue 11 years ago
@waj can you provide the version of Erlang/OTP you are using?
I'm experiencing the same behavior on my end (R16B01 and R15B03, OS X and Linux respectively).
```erlang
run(0) -> ok;
run(N) ->
    {ok, Ctx} = js_driver:new(),
    true = js_driver:destroy(Ctx),
    run(N - 1).
```
Running `run(10000)` a few times will quickly eat up all of my memory. Closing the shell frees it up again.
Any solution available?
I tried this in Erlang R16B02 on OS X and was not able to reproduce it. So I just went all in with this function:
```erlang
run(N) ->
    case N rem 10000 of
        0 -> io:format("~p vms~n", [N]);
        _ -> ok
    end,
    {ok, Ctx} = js_driver:new(),
    true = js_driver:destroy(Ctx),
    run(N + 1).
```
which I figured was sure to expose the issue; however, it used at most 1 GB of memory, and it did reclaim memory, just not as quickly as I'd have expected.
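For anyone else trying to reproduce this, the loop can be instrumented to report the emulator's total allocation as it runs, using `erlang:memory/1`. This is a sketch, not code from the project: the module name `memcheck` and the 1000-iteration reporting interval are my own choices, and it assumes erlang_js's `js_driver:new/0` and `js_driver:destroy/1` are on the code path.

```erlang
%% memcheck.erl -- sketch: create/destroy JS contexts in a loop and
%% print the emulator's total allocated memory every 1000 iterations.
%% Assumes erlang_js (js_driver:new/0, js_driver:destroy/1) is loaded.
-module(memcheck).
-export([run/1]).

run(0) ->
    io:format("done, total memory: ~p bytes~n", [erlang:memory(total)]);
run(N) ->
    case N rem 1000 of
        0 -> io:format("~p left, total memory: ~p bytes~n",
                       [N, erlang:memory(total)]);
        _ -> ok
    end,
    {ok, Ctx} = js_driver:new(),
    true = js_driver:destroy(Ctx),
    run(N - 1).
```

Note that `erlang:memory(total)` only reflects memory the emulator itself has allocated; if SpiderMonkey leaks outside the Erlang allocators, the growth will show up in the OS-level process size (e.g. `ps` or `leaks`) rather than here, so comparing both numbers over a long run is informative.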
Even so, this is a contrived example that really isn't a good use case. Is there some example of an application that this is causing a problem for? If there's no further activity on this issue, we'll consider a fix unnecessary.
I did some tests on various platforms (CentOS, OS X) and Erlang releases, with the same effect: memory leaks.
Please try this simple test on OS X. Start the emulator with malloc stack logging enabled:

```
MallocStackLogging=1 erl -pa .....
```

In the Erlang shell, create and destroy a context:

```erlang
{ok, JS} = js_driver:new().
js_driver:destroy(JS).
```

Then run the `leaks` command in another shell against the beam.smp PID:

```
leaks <PID>
```
You should see memory leaks with a call stack like this:
```
Process: beam.smp [3627]
Path: /opt/erlang/r16b03/erts-5.10.4/bin/beam.smp
Load Address: 0x10f848000
Identifier: beam.smp
Version: 0
Code Type: X86-64
Parent Process: zsh [74734]
Date/Time: 2014-03-19 08:51:27.414 +0100
OS Version: Mac OS X 10.9.2 (13C64)
Report Version: 7
leaks Report Version: 2.0
Process 3627: 1386 nodes malloced for 7737 KB
Process 3627: 121 leaks for 16096 total leaked bytes.
Leak: 0x7f8e5be03780 size=96 zone: DefaultMallocZone_0x10fd83000
0x00000001 0x00000000 0x13b43ee0 0x00000001 .........>......
0x0000004e 0x00000000 0x5be02b40 0x00007f8e N.......@+.[....
0x00000000 0x00000000 0x00000000 0x00000000 ................
0x00000000 0x00000000 0x0fdee000 0x00000001 ................
0x000001f0 0x00001a00 0x0000002a 0x00000000 ........*.......
0x5be066c0 0x00007f8e 0x5d02c3c8 0x00007f8e .f.[.......]....
Call stack: [thread 0x111603000]: | erts_port_output | process spidermonkey_drv.c:230 | sm_initialize spidermonkey.c:158 | js_NewObjectWithGivenProto | js_NewScope | JS_malloc | malloc | malloc_zone_malloc
Leak: 0x7f8e5be037e0 size=32 zone: DefaultMallocZone_0x10fd83000
0x00000009 0x00000000 0x0fded000 0x00000001 ................
0x80000001 0xffffffff 0x80000001 0xffffffff ................
Call stack: [thread 0x111603000]: | erts_port_output | process spidermonkey_drv.c:230 | sm_initialize spidermonkey.c:159 | JS_InitStandardClasses | js_InitFunctionAndObjectClasses | js_InitFunctionClass | JS_InitClass | js_SetClassPrototype | js_DefineProperty | js_DefineNativeProperty | js_AddScopeProperty | js_AllocSlot | js_ReallocSlots | JS_realloc | realloc | malloc_zone_malloc
Leak: 0x7f8e5be03800 size=32 zone: DefaultMallocZone_0x10fd83000
0x00000009 0x00000000 0x0fdee040 0x00000001 ........@.......
0x80000001 0xffffffff 0x80000001 0xffffffff ................
Call stack: [thread 0x111603000]: | erts_port_output | process spidermonkey_drv.c:230 | sm_initialize spidermonkey.c:159 | JS_InitStandardClasses | js_InitFunctionAndObjectClasses | js_InitObjectClass | JS_InitClass | js_SetClassPrototype | js_DefineProperty | js_DefineNativeProperty | js_AddScopeProperty | js_AllocSlot | js_ReallocSlots | JS_realloc | realloc | malloc_zone_malloc
Leak: 0x7f8e5be03830 size=64 zone: DefaultMallocZone_0x10fd83000
...
...
```
I have a suspicion that the JS_DestroyContext function doesn't properly garbage-collect the memory used by the context's global object.
I have upgraded SpiderMonkey to 1.8.5 and it seems to work properly now.
I'm having the same issue:
```
Linux shive.r #1 SMP Tue Nov 4 15:43:28 CET 2014 armv7l GNU/Linux
Erlang/OTP 17 [erts-6.1] [source] [smp:4:4] [async-threads:10] [kernel-poll:false]
js-1.8.0-rc1.tar.gz
```
Maybe I'm doing something wrong, but if I continuously create and destroy JavaScript VM instances, the process memory increases all the time. I've attached the memory graph as seen by Munin.
Any thoughts? Should I do this in some other way?