arctica opened this issue 3 years ago
I am still experiencing memory leaks after properly disposing of v8go's `Isolate`.
I think I can fix this without breaking API compatibility. Here are my notes (work in progress):
- New C++ class `ValueScope` that owns a set of `Persistent<Value>` handles.
- Every C++ `Context` owns a C++ `ValueScope` that lives as long as it does, replacing the current `vector` of persistent handles.
- The C++ functions that create Values register the handles with the Context's current `ValueScope`.
So far this just duplicates the current behavior. But now:
- Give `Context` a stack of ValueScopes, not just a single one.
- New Go struct `ValueScope`, bridged to the C++ class. Its finalizer deletes the C++ object, releasing the handles. The struct can also be invalidated explicitly to free the handles immediately.
- `Context.PushScope() *ValueScope` creates a new ValueScope and attaches it to the C++ Context. It also calls the C++ `Context::Enter()` method to make this the Isolate's current Context.
- `Context.PopScope(*ValueScope)` detaches the scope from the context (or panics if it's not current) and calls `Context::Exit`.

This preserves the current behavior, so it's not a breaking change. But it lets you do:
```go
{
	scope := context.PushScope()
	defer context.PopScope(scope)
	val1 := v8.NewString(context.Isolate(), "foo")
	// ...
}
```
`val1`'s C++ persistent handle will be created in the temporary scope and freed when the scope is popped. So this code does not leave any allocations behind.
The issue still exists.
```
2023/11/15 21:32:57 WARN v8go & golang GC res="{\"value\":\"empty\",\"type\":\"test\"}" i=32000000
2023/11/15 21:33:15 WARN v8go & golang GC res="{\"value\":\"empty\",\"type\":\"test\"}" i=33000000

<--- Last few GCs --->

[88587:0x158008000]   192919 ms: Scavenge (reduce) 1396.2 (1446.5) -> 1396.0 (1446.5) MB, 3.5 / 0.0 ms  (average mu = 0.104, current mu = 0.062) allocation failure
[88587:0x158008000]   192934 ms: Scavenge (reduce) 1396.5 (1446.5) -> 1396.2 (1446.7) MB, 2.3 / 0.0 ms  (average mu = 0.104, current mu = 0.062) allocation failure
[88587:0x158008000]   192951 ms: Scavenge (reduce) 1396.8 (1446.7) -> 1396.6 (1447.0) MB, 2.3 / 0.0 ms  (average mu = 0.104, current mu = 0.062) allocation failure

<--- JS stacktrace --->

#
# Fatal javascript OOM in Ineffective mark-compacts near heap limit
#
SIGTRAP: trace trap
PC=0x104b61efc m=7 sigcode=0
signal arrived during cgo execution

goroutine 1 [syscall]:
runtime.cgocall(0x104b44bfc, 0x1400005bcc8)
	/usr/local/go/src/runtime/cgocall.go:157 +0x44 fp=0x1400005bc90 sp=0x1400005bc50 pc=0x10495b514
github.com/couchbasedeps/v8go._Cfunc_RunScript(0x152e28190, 0x15409eb60, 0x19, 0x15409eb80, 0x7)
	_cgo_gotypes.go:1451 +0x38 fp=0x1400005bcc0 sp=0x1400005bc90 pc=0x104b11698
github.com/couchbasedeps/v8go.(*Context).RunScript.func3(0x1058ac5b7?, 0x7?, {0x1058b1123?, 0x19}, 0x1400001f800?, {0x1058ac5b7?, 0x7})
	/Users/peter/go/pkg/mod/github.com/couchbasedeps/v8go@v1.7.4/context.go:79 +0x88 fp=0x1400005bd40 sp=0x1400005bcc0 pc=0x104b144c8
github.com/couchbasedeps/v8go.(*Context).RunScript(0x1400000e468, {0x1058b1123, 0x19}, {0x1058ac5b7, 0x7})
	/Users/peter/go/pkg/mod/github.com/couchbasedeps/v8go@v1.7.4/context.go:79 +0xd0 fp=0x1400005bdf0 sp=0x1400005bd40 pc=0x104b14330
main.runJavascript({0x1058b1123?, 0x19?})
```
Using the `print` function from the examples (even with the actual `fmt.Printf()` commented out) seems to result in a memory leak.
Slightly modified example from the README, calling the `print` function in an infinite loop:

**Warning:** running this will quickly consume all memory on the machine.
Removing the `'foo'` argument from the `print()` call seems to stop the leak. So I suspect the callback's arguments are somehow leaking.