allnvrgs opened 3 years ago
I tried dividing the tests into 3 separate newman runs with 'async' so they run in parallel, but I still see the issue. It happens at around iteration 550+ per run, which matches the 1700+ mark reported above once you multiply by 3.
Hey @allnvrgs, can you help me reproduce this error by providing a dummy collection and data? I'll look into it.
Hi @nimit2801 ,
Thanks for your reply. Please see attached zip file.
While trying to reproduce the issue in the simplest way, I deduced that it may have something to do with a large environment variable. I have an environment variable that stores the JSON schemas of all our endpoints (approx. 301 KB). When I removed most of the schemas from this file and kept only the schema needed for the test run, I was not able to reproduce the issue.
To reproduce the issue on your machine:
newman run Service_TAF.postman_collection.json -e "service_taf_env Copy.postman_environment.json" -d combinedalltest.json
The Out of memory error happens at iteration 800+.
Thanks and regards, Allen
@nimit2801 I've been encountering this same problem on a regular basis for weeks.
I'm using newman as a tool to interface with a third party vendor who has no API and the most stateful web server I've ever seen (it's a JSF server). I interface with them in 2 different ways:
method="post"
file upload form
I moved to newman in an effort to get away from headless browsers (they are resource intensive, and while the vendor's server works well, their client frequently has issues) by reverse engineering their HTTP calls and using environment variables to talk to the server in the incredibly stateful way it expects.
newman Usage Details
Because their server is JSF, I cannot simply issue the file upload HTTP POST request with some metadata to upload a photo. Instead, I have to issue all the requests that you would typically issue while navigating their site, in the correct order. When I upload a photo, it's attached to the correct work order not because I've specified it in the request but because the server statefully knows that I'm on that work order's page and am uploading a photo.
Using newman to Solve This
I've basically created a folder in Postman for every action I might need to take on their website. They keep track of "which state is me" via a session ID but also via a viewState token that they use to differentiate "are you on this browser tab or that one?", and which also changes every time you move from one page to another. So I've created an object which holds the environment variables for each newman run, tracks the changes to those environment variables, and persists across newman calls. I run anywhere from 4-12 instances of my process concurrently, depending on need.
const events = require('events')
const Bluebird = require('bluebird')
// (EnvVars, CookieJar, NewmanRunner and Photos are defined elsewhere in my code)

events.defaultMaxListeners = 20 // perhaps relevant?
const envVars = new EnvVars()
const cookies = new CookieJar()
// NewmanRunner is a class that wraps newman. It's basically just a way to set up and modify `NewmanRunOptions` before pulling the trigger. It also wraps the `EventEmitter` that `newman.run` returns with a custom one that forwards newman events and also emits custom ones. But I `.removeAllListeners` on both emitters at the end of these events: "assertion", "exception", "done"
await new NewmanRunner('go to home page', envVars, cookies).run() // 'go to home page' is one of the folders in postman
envVars.set({workOrderId})
await new NewmanRunner('navigate to work order', envVars, cookies).run()
await new NewmanRunner('load photos tab', envVars, cookies).run()
const photos = await Photos.fetchForWorkOrder(workOrderId)
await Bluebird.map(
  photos,
  async photo => {
    const photoUploadEnvVars = envVars.clone()
    photoUploadEnvVars.set({uploadFilePath: photo}) // it is NOT this easy because of how Postman works. I have to dynamically modify a copy of the collection json for each of these
    await new NewmanRunner('upload photo', photoUploadEnvVars, cookies).run()
  },
  { concurrency: 5 }
)
I think I may have rubber ducked a bit here. I just realized that, due to how some of the clean-up code is written, EventEmitter#removeAllListeners may not be called at the end of every newman.run. This memory problem happens perhaps once a day, so I'm going to go ahead and modify the code to make certain that removeAllListeners is called on both the newman and my EventEmitters.
My assumption up until I noticed the above was that it's not ditching these photos after uploading them. I know how garbage collection works and the like. I'm not holding on to any reference to the photos.
I'm going to go ahead and run the modified code (perhaps reduce the default heap size) and post the error if it crashes or come back and verify that it was my fault if it doesn't.
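For reference, this is roughly the shape of the fix I'm planning; a minimal sketch only, with illustrative names (runWithCleanup, wrapperEmitter), not my actual NewmanRunner code:

const newman = require('newman')

// Hedged sketch: guarantee that removeAllListeners fires on both emitters after every run,
// even when the run errors out before a clean 'done'.
async function runWithCleanup (options, wrapperEmitter) {
  let newmanEmitter
  try {
    await new Promise((resolve, reject) => {
      newmanEmitter = newman.run(options, err => (err ? reject(err) : resolve()))
      // ... forward 'assertion' / 'exception' events onto wrapperEmitter here ...
    })
  } finally {
    // previously this clean-up only happened inside the 'done' handler,
    // so it was skipped whenever a run failed before 'done' fired
    if (newmanEmitter) newmanEmitter.removeAllListeners()
    wrapperEmitter.removeAllListeners()
  }
}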
I ran it with --heap-prof --max-old-space-size=1000. Really wish these crash reports were a bit more useful. If anyone more experienced with this could lend me a hand, I'd much appreciate it. It's very rare for me to run into this kind of issue (and I've built some pretty heavy Node apps over the past 10 years).
"large_object_space": {
"memorySize": 1953595392,
"committedMemory": 1953595392,
"capacity": 1948451240,
"used": 1948451240,
"available": 0
},
It appears that this is what's killing my heap. Based on this documentation, this memory space is for unusually large JavaScript objects. My collection.json is 61 KB, so you would need ~32k copies of it to get to 1948451240 bytes. I'm definitely not making that many "special newman runs" before this crashes (special newman runs being those where I deep clone collection.json and modify some part of it, mainly to populate form data).
I'm thinking this must be a result of the event emitter objects. Most of these requests spit out giant chunks of XML, which are stored in duplicate in the event data that newman outputs. I'm also suspecting that I'm causing this issue by using newmanEventEmitter.removeAllListeners() inside the done event. Perhaps newman uses subsequent events for clean-up purposes?
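If that turns out to be the case, one option (just a guess on my part, the emitter names below are illustrative) would be to defer the teardown by one turn of the event loop instead of doing it synchronously inside done:

// Hedged sketch: let newman finish any internal post-'done' work before detaching listeners
newmanEventEmitter.on('done', (err, summary) => {
  setImmediate(() => {
    newmanEventEmitter.removeAllListeners()
    wrapperEmitter.removeAllListeners() // my own forwarding emitter (illustrative name)
  })
})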
Here's the crash output and heap profile:
<--- Last few GCs --->
[59206:0x108008000] 2739998 ms: Mark-sweep (reduce) 992.3 (1004.4) -> 992.0 (1004.9) MB, 52.3 / 0.0 ms (average mu = 0.138, current mu = 0.095) allocation failure GC in old space requested
[59206:0x108008000] 2740043 ms: Mark-sweep (reduce) 993.0 (1003.9) -> 992.4 (1005.2) MB, 38.0 / 0.0 ms (average mu = 0.148, current mu = 0.160) allocation failure scavenge might not succeed
<--- JS stacktrace --->
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0x1012e4da5 node::Abort() (.cold.1) [/usr/local/bin/node]
2: 0x1000a6239 node::Abort() [/usr/local/bin/node]
3: 0x1000a639f node::OnFatalError(char const*, char const*) [/usr/local/bin/node]
4: 0x1001e9057 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]
5: 0x1001e8ff3 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]
6: 0x100397ee5 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/usr/local/bin/node]
7: 0x1003999aa v8::internal::Heap::RecomputeLimits(v8::internal::GarbageCollector) [/usr/local/bin/node]
8: 0x100395079 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/usr/local/bin/node]
9: 0x100392911 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/usr/local/bin/node]
10: 0x1003a11aa v8::internal::Heap::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/usr/local/bin/node]
11: 0x1003a1231 v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/usr/local/bin/node]
12: 0x100367fc0 v8::internal::FactoryBase<v8::internal::Factory>::NewFixedArrayWithFiller(v8::internal::Handle<v8::internal::Map>, int, v8::internal::Handle<v8::internal::Oddball>, v8::internal::AllocationType) [/usr/local/bin/node]
13: 0x10049d99a v8::internal::Handle<v8::internal::FixedArray> v8::internal::interpreter::ConstantArrayBuilder::ToFixedArray<v8::internal::Isolate>(v8::internal::Isolate*) [/usr/local/bin/node]
14: 0x10047ba44 v8::internal::Handle<v8::internal::BytecodeArray> v8::internal::interpreter::BytecodeArrayWriter::ToBytecodeArray<v8::internal::Isolate>(v8::internal::Isolate*, int, int, v8::internal::Handle<v8::internal::ByteArray>) [/usr/local/bin/node]
15: 0x10047e6fd v8::internal::Handle<v8::internal::BytecodeArray> v8::internal::interpreter::BytecodeGenerator::FinalizeBytecode<v8::internal::Isolate>(v8::internal::Isolate*, v8::internal::Handle<v8::internal::Script>) [/usr/local/bin/node]
16: 0x1004a1ace v8::internal::CompilationJob::Status v8::internal::interpreter::InterpreterCompilationJob::DoFinalizeJobImpl<v8::internal::Isolate>(v8::internal::Handle<v8::internal::SharedFunctionInfo>, v8::internal::Isolate*) [/usr/local/bin/node]
17: 0x1004a18d5 v8::internal::interpreter::InterpreterCompilationJob::FinalizeJobImpl(v8::internal::Handle<v8::internal::SharedFunctionInfo>, v8::internal::Isolate*) [/usr/local/bin/node]
18: 0x1002b7d2d v8::internal::CompilationJob::Status v8::internal::(anonymous namespace)::FinalizeSingleUnoptimizedCompilationJob<v8::internal::Isolate>(v8::internal::UnoptimizedCompilationJob*, v8::internal::Handle<v8::internal::SharedFunctionInfo>, v8::internal::Isolate*, std::__1::vector<v8::internal::FinalizeUnoptimizedCompilationData, std::__1::allocator<v8::internal::FinalizeUnoptimizedCompilationData> >*) [/usr/local/bin/node]
19: 0x1002b0b84 v8::internal::(anonymous namespace)::IterativelyExecuteAndFinalizeUnoptimizedCompilationJobs(v8::internal::Isolate*, v8::internal::Handle<v8::internal::SharedFunctionInfo>, v8::internal::Handle<v8::internal::Script>, v8::internal::ParseInfo*, v8::internal::AccountingAllocator*, v8::internal::IsCompiledScope*, std::__1::vector<v8::internal::FinalizeUnoptimizedCompilationData, std::__1::allocator<v8::internal::FinalizeUnoptimizedCompilationData> >*) [/usr/local/bin/node]
20: 0x1002b075f v8::internal::Compiler::Compile(v8::internal::Handle<v8::internal::SharedFunctionInfo>, v8::internal::Compiler::ClearExceptionFlag, v8::internal::IsCompiledScope*) [/usr/local/bin/node]
21: 0x1002b1721 v8::internal::Compiler::Compile(v8::internal::Handle<v8::internal::JSFunction>, v8::internal::Compiler::ClearExceptionFlag, v8::internal::IsCompiledScope*) [/usr/local/bin/node]
22: 0x1006d4138 v8::internal::Runtime_CompileLazy(int, unsigned long*, v8::internal::Isolate*) [/usr/local/bin/node]
23: 0x100a7a279 Builtins_CEntry_Return1_DontSaveFPRegs_ArgvOnStack_NoBuiltinExit [/usr/local/bin/node]
Abort trap: 6
{
"header": {
"reportVersion": 1,
"event": "Allocation failed - JavaScript heap out of memory",
"trigger": "FatalError",
"filename": "report.20210419.133903.2459.0.001.json",
"dumpEventTime": "2021-04-19T13:39:03Z",
"dumpEventTimeStamp": "1618857543059",
"processId": 2459,
"cwd": "/Users/tim/Documents/Dev/Work/...",
"commandLine": [
"/usr/local/bin/node",
"/Users/tim/Documents/Dev/Work/.../node_modules/.bin/ts-node",
"--project",
"tsconfig.json",
"src/index.uploadPhotos.ts"
],
"nodejsVersion": "v12.16.1",
"wordSize": 64,
"arch": "x64",
"platform": "darwin",
"componentVersions": {
"node": "12.16.1",
"v8": "7.8.279.23-node.31",
"uv": "1.34.0",
"zlib": "1.2.11",
"brotli": "1.0.7",
"ares": "1.15.0",
"modules": "72",
"nghttp2": "1.40.0",
"napi": "5",
"llhttp": "2.0.4",
"http_parser": "2.9.3",
"openssl": "1.1.1d",
"cldr": "35.1",
"icu": "64.2",
"tz": "2019c",
"unicode": "12.1"
},
"release": {
"name": "node",
"lts": "Erbium",
"headersUrl": "https://nodejs.org/download/release/v12.16.1/node-v12.16.1-headers.tar.gz",
"sourceUrl": "https://nodejs.org/download/release/v12.16.1/node-v12.16.1.tar.gz"
},
"osName": "Darwin",
"osRelease": "19.6.0",
"osVersion": "Darwin Kernel Version 19.6.0: Tue Jan 12 22:13:05 PST 2021; root:xnu-6153.141.16~1/RELEASE_X86_64",
"osMachine": "x86_64",
"cpus": [
{
"model": "Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz",
"speed": 2400,
"user": 72166760,
"nice": 0,
"sys": 43201420,
"idle": 325800720,
"irq": 0
},
{
"model": "Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz",
"speed": 2400,
"user": 510370,
"nice": 0,
"sys": 2195270,
"idle": 438462800,
"irq": 0
},
{
"model": "Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz",
"speed": 2400,
"user": 60524190,
"nice": 0,
"sys": 34105530,
"idle": 346538720,
"irq": 0
},
{
"model": "Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz",
"speed": 2400,
"user": 459350,
"nice": 0,
"sys": 1837550,
"idle": 438871540,
"irq": 0
},
{
"model": "Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz",
"speed": 2400,
"user": 46639130,
"nice": 0,
"sys": 26483090,
"idle": 368046220,
"irq": 0
},
{
"model": "Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz",
"speed": 2400,
"user": 459680,
"nice": 0,
"sys": 1553900,
"idle": 439154850,
"irq": 0
},
{
"model": "Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz",
"speed": 2400,
"user": 36915270,
"nice": 0,
"sys": 20782150,
"idle": 383471020,
"irq": 0
},
{
"model": "Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz",
"speed": 2400,
"user": 452340,
"nice": 0,
"sys": 1244610,
"idle": 439471480,
"irq": 0
},
{
"model": "Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz",
"speed": 2400,
"user": 30522780,
"nice": 0,
"sys": 16192850,
"idle": 394452810,
"irq": 0
},
{
"model": "Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz",
"speed": 2400,
"user": 451260,
"nice": 0,
"sys": 994900,
"idle": 439722270,
"irq": 0
},
{
"model": "Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz",
"speed": 2400,
"user": 22857570,
"nice": 0,
"sys": 11656040,
"idle": 406654830,
"irq": 0
},
{
"model": "Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz",
"speed": 2400,
"user": 451000,
"nice": 0,
"sys": 785240,
"idle": 439932190,
"irq": 0
},
{
"model": "Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz",
"speed": 2400,
"user": 18686860,
"nice": 0,
"sys": 8731110,
"idle": 413750470,
"irq": 0
},
{
"model": "Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz",
"speed": 2400,
"user": 444510,
"nice": 0,
"sys": 586450,
"idle": 440137480,
"irq": 0
},
{
"model": "Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz",
"speed": 2400,
"user": 14943810,
"nice": 0,
"sys": 6471190,
"idle": 419753440,
"irq": 0
},
{
"model": "Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz",
"speed": 2400,
"user": 439160,
"nice": 0,
"sys": 451740,
"idle": 440277530,
"irq": 0
}
],
"networkInterfaces": [
"redacted"
],
"host": "MacBook-Pro.local"
},
"javascriptStack": {
"message": "No stack.",
"stack": [
"Unavailable."
]
},
"nativeStack": [
{
"pc": "0x000000010015c5c2",
"symbol": "report::TriggerNodeReport(v8::Isolate*, node::Environment*, char const*, char const*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, v8::Local<v8::String>) [/usr/local/bin/node]"
},
{
"pc": "0x0000000100080dc8",
"symbol": "node::OnFatalError(char const*, char const*) [/usr/local/bin/node]"
},
{
"pc": "0x0000000100185167",
"symbol": "v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]"
},
{
"pc": "0x0000000100185103",
"symbol": "v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]"
},
{
"pc": "0x000000010030b2f5",
"symbol": "v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/usr/local/bin/node]"
},
{
"pc": "0x000000010030c9c4",
"symbol": "v8::internal::Heap::RecomputeLimits(v8::internal::GarbageCollector) [/usr/local/bin/node]"
},
{
"pc": "0x0000000100309837",
"symbol": "v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/usr/local/bin/node]"
},
{
"pc": "0x00000001003077fd",
"symbol": "v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/usr/local/bin/node]"
},
{
"pc": "0x0000000100312fba",
"symbol": "v8::internal::Heap::AllocateRawWithLightRetry(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/usr/local/bin/node]"
},
{
"pc": "0x0000000100313041",
"symbol": "v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/usr/local/bin/node]"
},
{
"pc": "0x00000001002e035b",
"symbol": "v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationType, v8::internal::AllocationOrigin) [/usr/local/bin/node]"
},
{
"pc": "0x0000000100618718",
"symbol": "v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [/usr/local/bin/node]"
},
{
"pc": "0x0000000100950919",
"symbol": "Builtins_CEntry_Return1_DontSaveFPRegs_ArgvOnStack_NoBuiltinExit [/usr/local/bin/node]"
},
{
"pc": "0x00000001009b9aea",
"symbol": "Builtins_CreateObjectLiteralHandler [/usr/local/bin/node]"
}
],
"javascriptHeap": {
"totalMemory": 2157436928,
"totalCommittedMemory": 2154055608,
"usedMemory": 2141200680,
"availableMemory": 44601736,
"memoryLimit": 2197815296,
"heapSpaces": {
"read_only_space": {
"memorySize": 262144,
"committedMemory": 33088,
"capacity": 32808,
"used": 32808,
"available": 0
},
"new_space": {
"memorySize": 4194304,
"committedMemory": 1201760,
"capacity": 2094912,
"used": 210024,
"available": 1884888
},
"old_space": {
"memorySize": 186793984,
"committedMemory": 186790248,
"capacity": 185404368,
"used": 185160800,
"available": 243568
},
"code_space": {
"memorySize": 3313664,
"committedMemory": 3158528,
"capacity": 2798464,
"used": 2798464,
"available": 0
},
"map_space": {
"memorySize": 8654848,
"committedMemory": 8654000,
"capacity": 4002800,
"used": 4002800,
"available": 0
},
"large_object_space": {
"memorySize": 1953595392,
"committedMemory": 1953595392,
"capacity": 1948451240,
"used": 1948451240,
"available": 0
},
"code_large_object_space": {
"memorySize": 622592,
"committedMemory": 622592,
"capacity": 544544,
"used": 544544,
"available": 0
},
"new_large_object_space": {
"memorySize": 0,
"committedMemory": 0,
"capacity": 2094912,
"used": 0,
"available": 2094912
}
}
},
"resourceUsage": {
"userCpuSeconds": 1005.59,
"kernelCpuSeconds": 73.2029,
"cpuConsumptionPercent": 7.78297,
"maxRss": 1440059752448,
"pageFaults": {
"IORequired": 5,
"IONotRequired": 12855015
},
"fsActivity": {
"reads": 0,
"writes": 0
}
},
"libuv": [
],
"environmentVariables": {
"redacted":1
},
"userLimits": {
"core_file_size_blocks": {
"soft": 0,
"hard": "unlimited"
},
"data_seg_size_kbytes": {
"soft": "unlimited",
"hard": "unlimited"
},
"file_size_blocks": {
"soft": "unlimited",
"hard": "unlimited"
},
"max_locked_memory_bytes": {
"soft": "unlimited",
"hard": "unlimited"
},
"max_memory_size_kbytes": {
"soft": "unlimited",
"hard": "unlimited"
},
"open_files": {
"soft": 49152,
"hard": "unlimited"
},
"stack_size_bytes": {
"soft": 8388608,
"hard": 67104768
},
"cpu_time_seconds": {
"soft": "unlimited",
"hard": "unlimited"
},
"max_user_processes": {
"soft": 5568,
"hard": 5568
},
"virtual_memory_kbytes": {
"soft": "unlimited",
"hard": "unlimited"
}
},
"sharedObjects": [
"redacted"
]
}
After some extensive memory profiling, I've noticed that postman-sandbox has a cacher that grows to be quite large by the time the app crashes.
I'm seeing this as well. My use case is a script that requires newman as a library so that it can write a list of failures to disk. The script runs in a node-16-alpine Docker container and iterates over a list of IDs, making a POST request to a single API endpoint. The script hits this error at approximately iteration 45000. When I bumped NODE_OPTIONS=--max-old-space-size=8192, it hit it at ~86000.
Added a reproducer: https://github.com/postmanlabs/newman/pull/2993. That script uses the newman library as described above, making requests to the Postman Echo API endpoint to reproduce this condition.
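For anyone who just wants the gist without opening the PR, the pattern is roughly the sketch below; this is not the actual reproducer script, and the inline collection hitting Postman Echo is a stand-in:

const newman = require('newman')

// Stand-in collection: a single GET against Postman Echo
// (the actual reproducer script is in the PR linked above)
const collection = {
  info: {
    name: 'echo-loop',
    schema: 'https://schema.getpostman.com/json/collection/v2.1.0/collection.json'
  },
  item: [
    { name: 'echo', request: { method: 'GET', url: 'https://postman-echo.com/get' } }
  ]
}

function runOnce () {
  return new Promise((resolve, reject) => {
    newman.run({ collection, reporters: [] }, (err, summary) =>
      err ? reject(err) : resolve(summary)
    )
  })
}

async function main () {
  // Heap usage creeps up across iterations even though each summary is discarded
  for (let i = 0; i < 100000; i++) {
    await runOnce()
    if (i % 1000 === 0) console.log(i, process.memoryUsage().heapUsed)
  }
}

main().catch(console.error)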
We also have this problem with a Postman collection of 2500 checks for various endpoints. It runs smoothly in Postman itself, but in newman it crashes at the 2083rd check.
I tried bumping the Node heap limit from 2000 MB to 8244 MB; that doesn't help much, it just crashed a bit later. With htop I saw that newman's memory consumption was 3.5 GB.
<--- Last few GCs --->
[1491214:0x623fa70] 693958 ms: Mark-sweep 2024.8 (2082.7) -> 2009.2 (2083.2) MB, 1201.5 / 0.0 ms (average mu = 0.194, current mu = 0.170) allocation failure scavenge might not succeed
[1491214:0x623fa70] 695304 ms: Mark-sweep 2025.4 (2083.2) -> 2009.5 (2083.4) MB, 1259.0 / 0.0 ms (average mu = 0.133, current mu = 0.064) allocation failure scavenge might not succeed
<--- JS stacktrace --->
newman run Web_API_Collection.postman_collection.json -e integration_environment.json -k --reporters cli,json --reporter-json-export test.json
EDIT: I see that other Node-based applications have a similar problem with Node LTS 16.11+: https://github.com/facebook/jest/issues/11956
I think it's a problem with the https://github.com/DannyDainton/newman-reporter-htmlextra library.
I noticed the original post uses --reporter-htmlextra-showOnlyFails.
This means that if you have a lot of data coming back on failed assertions, it will still use a lot of memory to generate the report.
You might want to use --reporter-htmlextra-omitResponseBodies to keep your report to a minimum.
For me it was failing before, and when I omitted the response data (each response was 3.5 MB, over 100 iterations), it worked. Of course, if you run a million iterations you would still hit the memory limit eventually, so it's just a workaround...
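For anyone wanting to try the same workaround, the invocation looks roughly like this (file names are placeholders; it assumes newman-reporter-htmlextra is installed):
newman run my_collection.postman_collection.json -e my_environment.json -r htmlextra --reporter-htmlextra-omitResponseBodies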
Hi,
I have an issue running my collection from the command line using newman. I have 1 data file with around 4500 tests inside. Our service has over 200 endpoints. My collection has only 1 request, and the inputs come from the data file, e.g. the URI, the request body, and what needs to be verified, e.g. responseCode 1234, 1235, etc. (depends on the test).
Around iteration 1700+, the error shown below occurs:
<--- Last few GCs --->
[31584:000001F343118A00] 243285 ms: Mark-sweep 2045.4 (2079.5) -> 2044.8 (2083.0) MB, 107.6 / 0.0 ms (average mu = 0.159, current mu = 0.002) allocation failure scavenge might not succeed
[31584:000001F343118A00] 243350 ms: Mark-sweep 2046.4 (2083.6) -> 2045.3 (2076.3) MB, 9.2 / 0.0 ms (+ 24.8 ms in 21 steps since start of marking, biggest step 15.2 ms, walltime since start of marking 65 ms) (average mu = 0.282, current mu = 0.480) fina
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 0x01a6944408d1
0: builtin exit frame: stringify(aka stringify)(this=0x03cadcec04b1 ,0x03cadcec04b1 ,0x01068ef43cf1 <JSFunction replace (sfi = 0000004575BECA99)>,0x0210ca980119 <Very long string[290790]>,0x03cadcec04b1 )
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 00007FF78D4D035F napi_wrap+119615
2: 00007FF78D477666 v8::internal::OrderedHashTable<v8::internal::OrderedHashSet,1>::NextTableOffset+38102
3: 00007FF78D478466 v8::internal::OrderedHashTable<v8::internal::OrderedHashSet,1>::NextTableOffset+41686
4: 00007FF78DCB451E v8::Isolate::ReportExternalAllocationLimitReached+94
5: 00007FF78DC9C6C1 v8::SharedArrayBuffer::Externalize+833
6: 00007FF78DB4DE9C v8::internal::Heap::EphemeronKeyWriteBarrierFromCode+1436
7: 00007FF78DB590E0 v8::internal::Heap::ProtectUnprotectedMemoryChunks+1312
8: 00007FF78DB55BF4 v8::internal::Heap::PageFlagsAreConsistent+3204
9: 00007FF78DB4B3F3 v8::internal::Heap::CollectGarbage+1283
10: 00007FF78DB49A64 v8::internal::Heap::AddRetainedMap+2500
11: 00007FF78DB716EB v8::internal::Factory::NewRawOneByteString+91
12: 00007FF78D85995C v8::internal::StringStream::ClearMentionedObjectCache+9564
13: 00007FF78DA8422E v8::internal::TimedHistogram::Stop+1374
14: 00007FF78DA87B9B v8::internal::TimedHistogram::Stop+16075
15: 00007FF78DA8AB60 v8::internal::TimedHistogram::Stop+28304
16: 00007FF78DC2DEA4 v8::internal::Builtins::builtin_handle+84900
17: 00007FF78E109A3D v8::internal::SetupIsolateDelegate::SetupHeap+547181
18: 00000031EB4CBAD4