feathersjs / feathers
The API and real-time application framework (https://feathersjs.com)

JavaScript heap out of memory when patching multiple records #3373

Open · MarcGodard opened this issue 6 months ago

MarcGodard commented 6 months ago

This worked without issues many versions ago (it is part of a process we run every 3 months; the tool itself is fairly new, but it worked when tested over a year ago).

I am patching thousands of records at once to add a flag.

I tried to go back to earlier versions, but npm wouldn't let me install them because of dependency conflicts, and I am not sure how to get the downgrade to work.

multi is on, and the client-side code is as follows (query contains the date range):

    await apiClient.service(apiService)
      .patch(null, { invoiced: false }, { query })
      .then(async () => {
        specialActionProcessing = false
        await handlePageLoad()
      })
      .catch(err => {
        console.error(err)
        specialActionProcessing = false
        toasts.push('warning', err.message, t('common.toasts.errorTitle'))
      })
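
For context, multi-record patching also has to be enabled on the server side. A minimal sketch of what that registration could look like, assuming Feathers v5 with the @feathersjs/knex adapter (the invoices service name, table name, and connection setup below are illustrative assumptions, not details from this issue):

    import { feathers } from '@feathersjs/feathers'
    import knex from 'knex'
    import { KnexService } from '@feathersjs/knex'

    const app = feathers()
    const db = knex({ client: 'pg', connection: process.env.DATABASE_URL })

    // `multi` lists the methods that may operate on many records at once
    // when they are called with `null` instead of an id.
    app.use('invoices', new KnexService({
      Model: db,                   // the Knex instance
      name: 'invoices',            // database table (assumed name)
      multi: ['patch', 'remove'],  // allow bulk patch and remove
      paginate: { default: 10, max: 50 }
    }))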

I was just running tests in preparation for year end when this started happening. I tried updating my Node version, tried increasing the memory available to Node, and tried everything else I could think of.

Here is the full error, in case anyone can get value from it.

<--- Last few GCs --->

[650356:0x6031840]   227923 ms: Mark-sweep 4067.0 (4126.0) -> 4057.6 (4131.1) MB, 71.5 / 0.0 ms  (average mu = 0.695, current mu = 0.671) allocation failure; scavenge might not succeed
[650356:0x6031840]   227996 ms: Mark-sweep 4074.1 (4147.1) -> 4058.6 (4136.0) MB, 61.1 / 0.0 ms  (average mu = 0.560, current mu = 0.160) allocation failure; scavenge might not succeed

<--- JS stacktrace --->

FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
 1: 0xb85bc0 node::Abort() [node]
 2: 0xa94834  [node]
 3: 0xd66d10 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]
 4: 0xd670b7 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node]
 5: 0xf447c5  [node]
 6: 0xf56cad v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
 7: 0xf313ae v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
 8: 0xf32777 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
 9: 0xf12cc0 v8::internal::Factory::AllocateRaw(int, v8::internal::AllocationType, v8::internal::AllocationAlignment) [node]
10: 0xf0a28c v8::internal::FactoryBase<v8::internal::Factory>::AllocateRawArray(int, v8::internal::AllocationType) [node]
11: 0xf0a405 v8::internal::FactoryBase<v8::internal::Factory>::NewFixedArrayWithFiller(v8::internal::Handle<v8::internal::Map>, int, v8::internal::Handle<v8::internal::Oddball>, v8::internal::AllocationType) [node]
12: 0x11c546e v8::internal::MaybeHandle<v8::internal::OrderedHashMap> v8::internal::OrderedHashTable<v8::internal::OrderedHashMap, 2>::Allocate<v8::internal::Isolate>(v8::internal::Isolate*, int, v8::internal::AllocationType) [node]
13: 0x11c5523 v8::internal::MaybeHandle<v8::internal::OrderedHashMap> v8::internal::OrderedHashTable<v8::internal::OrderedHashMap, 2>::Rehash<v8::internal::Isolate>(v8::internal::Isolate*, v8::internal::Handle<v8::internal::OrderedHashMap>, int) [node]
14: 0x12cf86d v8::internal::Runtime_MapGrow(int, unsigned long*, v8::internal::Isolate*) [node]
15: 0x1705b39  [node]
Aborted (core dumped)
marshallswain commented 6 months ago

Think you could debug this with the Chrome debugger and see if it will let you run the profiler tool? That might provide more information about the cause.
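
For reference, one way to gather profiler-style data without attaching Chrome interactively is Node's built-in v8.writeHeapSnapshot(); a minimal sketch (where to call it, e.g. right before and after the bulk patch runs on the server, is left as an assumption):

    import { writeHeapSnapshot } from 'node:v8'

    // Writes a .heapsnapshot file into the current working directory.
    // Load it in Chrome DevTools (Memory tab -> Load) to see what is filling the heap.
    const file = writeHeapSnapshot()
    console.log(`Heap snapshot written to ${file}`)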

MarcGodard commented 6 months ago

Yeah, I will need to look at it. Will update the issue when I get there. I have very little experience with that, so it might take me a bit.

daffl commented 6 months ago

You should be able to increase the available heap size for Node globally by setting the following environment variable:

export NODE_OPTIONS=--max_old_space_size=4096

The problem is probably that it is trying to return all of the patched records; that is why https://github.com/feathersjs/feathers/pull/2945 exists, although it hasn't been merged so far.

MarcGodard commented 6 months ago

> You should be able to increase the available heap size for Node globally by setting the following environment variable:
>
> export NODE_OPTIONS=--max_old_space_size=4096
>
> The problem is probably that it is trying to return all of the patched records; that is why #2945 exists, although it hasn't been merged so far.

That is very old Node version advice; the newer versions already max out at 4 GB. Hopefully that PR gets merged, because it looks like the likely cause of the issue.

I am wondering whether adding $select: ['id'] to the bulk patch would make it return a lot less data, or maybe even just a single small column. I will try that tomorrow.
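
For reference, a minimal sketch of what that could look like on the client, assuming the standard Feathers $select query filter and reusing the date-range query from the original call:

    // Ask the adapter to only return each record's id for the bulk patch,
    // which should shrink the returned payload considerably.
    await apiClient.service(apiService).patch(null, { invoiced: false }, {
      query: {
        ...query,        // the existing date-range filter
        $select: ['id']  // only return the id column
      }
    })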

AshotN commented 6 months ago

You can probably get away with creating a custom method that runs some raw SQL as a temporary workaround, which is a pretty ugly solution.

MarcGodard commented 6 months ago

> You can probably get away with creating a custom method that runs some raw SQL as a temporary workaround, which is a pretty ugly solution.

I put together an ugly temporary solution for now, but it is slow. Unfortunately, I am unsure how to do it the raw SQL way.

AshotN commented 6 months ago

It's not great DX, but this should work as a temporary hack:

const res = (await app.service('user').getModel().raw('SELECT * FROM "user"')).rows

Assuming you are using Knex+PG

Relevant docs:

https://knexjs.org/guide/raw.html
https://node-postgres.com/features/queries
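
For completeness, a rough sketch of what the raw-SQL workaround could look like for the bulk flag update discussed in this issue, assuming @feathersjs/knex with PostgreSQL as above; the table name, column names, and date bounds are illustrative assumptions:

    // Illustrative date bounds standing in for the client-side `query` range.
    const rangeStart = '2024-01-01'
    const rangeEnd = '2024-03-31'

    // Grab the underlying Knex instance from the service, as in the snippet above,
    // and run a single parameterised UPDATE instead of a multi-record patch.
    const knex = app.service('invoices').getModel()

    await knex.raw(
      'UPDATE "invoices" SET "invoiced" = ? WHERE "createdAt" BETWEEN ? AND ?',
      [false, rangeStart, rangeEnd]
    )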