amtrack / force-dev-tool

[DEPRECATED] Command line tool supporting the Force.com development lifecycle

JavaScript heap out of memory #118

Closed dieffrei closed 6 years ago

dieffrei commented 6 years ago

When I tried to retrieve my package.xml, it threw these errors:

```
Retrieving from remote production to directory src

<--- Last few GCs --->

[16791:0x102803800]   720526 ms: Mark-sweep 204.1 (212.8) -> 199.2 (212.8) MB, 29.6 / 0.0 ms  (+ 19.7 ms in 2 steps since start of marking, biggest step 14.4 ms, walltime since start of marking 796 ms) allocation failure GC in old space requested
[16791:0x102803800]   723787 ms: Mark-sweep 439.2 (452.8) -> 439.1 (452.3) MB, 131.3 / 0.0 ms  (+ 34.5 ms in 1 steps since start of marking, biggest step 34.5 ms, walltime since start of marking 3260 ms) allocation failure GC in old space requested

<--- JS stacktrace --->

==== JS stack trace =========================================

Security context: 0x331c3f125ec1
    2: lookupValue(aka lookupValue) [/usr/local/lib/node_modules/force-dev-tool/node_modules/jsforce/lib/soap.js:135] [bytecode=0x331ce627ecd1 offset=26](this=0x331c48282311 ,obj=0x331cb3c02201 <Very long string[54089242]>,propRegExps=0x331c97fd26e9 <JSArray[2]>)
    4: getResponseBody [/usr/local/lib/node_modules/force-dev-tool/node_modules/jsforce/lib/soap.js:123] [bytecode=0x331ce6...

FATAL ERROR: invalid table size Allocation failed - JavaScript heap out of memory
 1: node::Abort() [/usr/local/bin/node]
 2: node::FatalException(v8::Isolate, v8::Local, v8::Local) [/usr/local/bin/node]
 3: v8::internal::V8::FatalProcessOutOfMemory(char const, bool) [/usr/local/bin/node]
 4: v8::internal::OrderedHashTable<v8::internal::OrderedHashSet, 1>::Rehash(v8::internal::Handle, int) [/usr/local/bin/node]
 5: v8::internal::OrderedHashSet::Add(v8::internal::Handle, v8::internal::Handle) [/usr/local/bin/node]
 6: v8::internal::KeyAccumulator::AddKey(v8::internal::Handle, v8::internal::AddKeyConversion) [/usr/local/bin/node]
 7: v8::internal::(anonymous namespace)::StringWrapperElementsAccessor<v8::internal::(anonymous namespace)::FastStringWrapperElementsAccessor, v8::internal::(anonymous namespace)::FastHoleyObjectElementsAccessor, v8::internal::(anonymous namespace)::ElementsKindTraits<(v8::internal::ElementsKind)9> >::CollectElementIndicesImpl(v8::internal::Handle, v8::internal::Handle, v8::internal::KeyAccumulator*) [/usr/local/bin/node]
 8: v8::internal::KeyAccumulator::CollectOwnElementIndices(v8::internal::Handle, v8::internal::Handle) [/usr/local/bin/node]
 9: v8::internal::KeyAccumulator::CollectOwnKeys(v8::internal::Handle, v8::internal::Handle) [/usr/local/bin/node]
10: v8::internal::KeyAccumulator::CollectKeys(v8::internal::Handle, v8::internal::Handle) [/usr/local/bin/node]
11: v8::internal::FastKeyAccumulator::GetKeys(v8::internal::GetKeysConversion) [/usr/local/bin/node]
12: v8::internal::(anonymous namespace)::Enumerate(v8::internal::Handle) [/usr/local/bin/node]
13: v8::internal::Runtime_ForInPrepare(int, v8::internal::Object*, v8::internal::Isolate) [/usr/local/bin/node]
14: 0x5b4f7b0e6a5
15: 0x5b4f7bfedcb
zsh: abort      force-dev-tool retrieve production
force-dev-tool retrieve production  13.17s user 1.38s system 1% cpu 12:09.24 total
```

amtrack commented 6 years ago

I'm very sorry to see that.

How much memory do you have available?

My feeling is that for large orgs, 1 GB of memory is not enough. We have had many problems, for example on Heroku with Free and Hobby dynos (512 MB RAM). Switching to a larger dyno (1 GB RAM or more) seems to have resolved the memory issues for us.

Unfortunately I don't have a real fix for that. I have already reviewed my code and could not find a memory leak. If you have any hints or ideas, please let me know.
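In the meantime, a common stopgap (a general Node.js option, not anything specific to force-dev-tool) is to start the process with a larger old-space heap via V8's `--max-old-space-size` flag, assuming your machine has RAM to spare:

```sh
# Run the CLI with a 4 GB old-space heap (the value is in MB; pick what your
# machine can afford). "$(which force-dev-tool)" assumes a global npm install.
node --max-old-space-size=4096 "$(which force-dev-tool)" retrieve production
```

This only raises the ceiling; for very large orgs the retrieve can still run out of memory.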

Furthermore, I compared the memory usage with other tools like the Ant Migration Tool, force-cli and so on. It looks to me like the other tools consume a lot of memory as well.

As a workaround, I would try to reduce the amount of metadata to be retrieved (e.g. through .forceignore).
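For example, a .forceignore along these lines would keep the heaviest folders out of the retrieve (the entries and pattern style below are only illustrative; adjust them to whatever dominates your org):

```
# .forceignore -- illustrative entries, adapt to your org
src/documents/**
src/reports/**
src/dashboards/**
src/staticresources/**
```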

If this is not possible, I have implemented a script to split the retrieve into multiple retrieves. This not only reduces the number of metadata components per retrieve, but also drastically reduces the number of lines in the PermissionSet components (see the issue described here).

If you're interested in the script, let me know and I will share it with you.
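(The script itself was not shared in this thread. Purely to illustrate the idea, a minimal splitter over the `<types>` entries of an unpackaged package.xml could look like the sketch below; the chunk count and output file names are assumptions.)

```javascript
// split-package.js -- illustrative sketch, not the script referenced above.
// Splits the <types> entries of src/package.xml into N smaller manifests so
// that each retrieve has to transfer and parse far less metadata at once.
'use strict';
var fs = require('fs');

var CHUNKS = 4; // assumed chunk count; tune for your org
var source = fs.readFileSync('src/package.xml', 'utf8');

// Naive extraction of every <types>...</types> block and the API version line.
var types = source.match(/<types>[\s\S]*?<\/types>/g) || [];
var version = (source.match(/<version>[\s\S]*?<\/version>/) || [])[0];

for (var i = 0; i < CHUNKS; i++) {
  var slice = types.filter(function (t, idx) { return idx % CHUNKS === i; });
  var manifest = ['<?xml version="1.0" encoding="UTF-8"?>',
    '<Package xmlns="http://soap.sforce.com/2006/04/metadata">']
    .concat(slice.map(function (t) { return '    ' + t; }))
    .concat(version ? ['    ' + version] : [])
    .concat(['</Package>'])
    .join('\n');
  fs.writeFileSync('package-' + (i + 1) + '.xml', manifest);
}
```

Each generated manifest could then be copied over src/package.xml and retrieved one chunk at a time with `force-dev-tool retrieve production` (assuming force-dev-tool reads src/package.xml as its manifest, as the output above suggests).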

okram999 commented 6 years ago

@dieffrei it's sad to see people close an issue without any further input or references for the issue they opened.

dieffrei commented 6 years ago

@okram999 @amtrack the problem I had was with the documents. We were working at the time with a large code base (3M+ Apex lines). To solve it, I just avoided the documents ;)

dieffrei commented 6 years ago

@okram999 and a strategy is to divide your retrieve process into 2 packages.

okram999 commented 6 years ago

True, adding all the metadata isn’t really practical....