Closed: uhop closed this 1 month ago
@matthewvalentine if you have time could you double-check the master again?
@uhop I am not seeing memory issues per se, but there are segfaults in re2.node on both master and 1.21.1 (not on 1.20.12).
The segfaults are semi-reliable with a particular large, non-RE2-specific test suite I have: they happen frequently, though not at quite the same place each time, and they stop happening as often when I try to narrow the tests down to something more specific that I could share.
The segfaults look like this, with different addresses and varying numbers of ??? entries; this one is against master:
PID 33882 received SIGSEGV for address: 0x2
0 segfault-handler.node 0x0000000109c05184 _ZL16segfault_handleriP9__siginfoPv + 296
1 libsystem_platform.dylib 0x0000000181537584 _sigtramp + 56
2 re2.node 0x0000000109c633e8 _ZN10WrappedRE24ExecERKN3Nan20FunctionCallbackInfoIN2v85ValueEEE + 1996
3 re2.node 0x0000000109c5b7ac _ZN3Nan3impL23FunctionCallbackWrapperERKN2v820FunctionCallbackInfoINS1_5ValueEEE + 160
4 node 0x0000000105201a34 Builtins_CallApiCallback + 212
5 ??? 0x000000010a833d1c 0x0 + 4471340316
6 ??? 0x000000010a81027c 0x0 + 4471194236
7 ??? 0x000000010a814568 0x0 + 4471211368
8 ??? 0x000000010a80d4b4 0x0 + 4471182516
9 ??? 0x000000010a81151c 0x0 + 4471199004
10 ??? 0x000000010a81167c 0x0 + 4471199356
11 ??? 0x000000010a8166f4 0x0 + 4471219956
12 ??? 0x000000010a81685c 0x0 + 4471220316
13 ??? 0x000000010a812094 0x0 + 4471201940
14 ??? 0x000000010a80f118 0x0 + 4471189784
15 ??? 0x000000010a81641c 0x0 + 4471219228
16 ??? 0x000000010a816564 0x0 + 4471219556
17 ??? 0x000000010a80d828 0x0 + 4471183400
18 ??? 0x000000010a811c6c 0x0 + 4471200876
19 ??? 0x000000010a80f118 0x0 + 4471189784
20 ??? 0x000000010a8192c0 0x0 + 4471231168
21 ??? 0x000000010a81ea28 0x0 + 4471253544
22 node 0x0000000105231ef4 Builtins_AsyncFunctionAwaitResolveClosure + 84
23 node 0x00000001052c0838 Builtins_PromiseFulfillReactionJob + 56
24 node 0x0000000105223c4c Builtins_RunMicrotasks + 588
25 node 0x00000001051fe3a4 Builtins_JSRunMicrotasksEntry + 164
26 node 0x0000000104b3092c _ZN2v88internal12_GLOBAL__N_16InvokeEPNS0_7IsolateERKNS1_12InvokeParamsE + 2680
27 node 0x0000000104b30e1c _ZN2v88internal12_GLOBAL__N_118InvokeWithTryCatchEPNS0_7IsolateERKNS1_12InvokeParamsE + 88
28 node 0x0000000104b30ff8 _ZN2v88internal9Execution16TryRunMicrotasksEPNS0_7IsolateEPNS0_14MicrotaskQueueEPNS0_11MaybeHandleINS0_6ObjectEEE + 64
29 node 0x0000000104b577e4 _ZN2v88internal14MicrotaskQueue13RunMicrotasksEPNS0_7IsolateE + 336
30 node 0x0000000104b57f80 _ZN2v88internal14MicrotaskQueue17PerformCheckpointEPNS_7IsolateE + 124
31 node 0x00000001047f0c4c _ZN4node21InternalCallbackScope5CloseEv + 240
Thank you, that is good, actionable info. I think I know the reason and can fix it.
I made a drastic fix and committed the result to the repo. Please check master, if you have time to double-check me. I'll continue testing.
Hang on, I think I found a problem with replace().
Fixed the stupid bug where an empty replacement string didn't work. Literally: s.replace(re2, '').
It was a regression introduced by the recent commit.
I have run memory-monitor for 24 hours and it looks good. Memory shows typical GC behavior: it grew fast at first, but over time the growth slowed and eventually flattened out. This was confirmed with system tools.
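The kind of sampling behind that observation can be sketched as follows. This is an assumed approach, not the actual memory-monitor script used for the 24-hour run: it periodically records process.memoryUsage() so heap growth can be plotted and checked for the "grows fast, then flattens" pattern of a healthy GC.

```javascript
// Minimal memory-monitor sketch (hypothetical; the real script may differ):
// collect timestamped heap/RSS samples for later trend inspection.
const samples = [];

function sample() {
  const { rss, heapUsed } = process.memoryUsage();
  samples.push({ time: Date.now(), rss, heapUsed });
}

// A real run would use setInterval(sample, 60_000) for many hours;
// one sample here just shows the shape of the recorded data.
sample();
console.log(samples[0].heapUsed > 0 ? 'sampled' : 'empty');
```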
Unless someone sees something bad, it'll be released soon.
After 60 hours — no leaks.
Released as 1.21.2.
Splitting an idea from https://github.com/uhop/node-re2/issues/210#issuecomment-2148694235:
It will be in the next release, so we are not holding up a fix.