zemse / hardhat-tracer

🕵️ allows you to see internal calls, events and storage operations in the console

debug transaction runs out of memory when call stack too long #12

Closed: joshpwrk closed this issue 1 year ago

joshpwrk commented 2 years ago

Love your plug-in - we are heavily using it for debugging!

When running transactions both on a local network and during Hardhat tests, the plug-in seems to fail with the two errors below when external contracts (not included in hre.artifacts) are involved. This was not the case in the 1.0.0-alpha.6 release.

Is there any way we could add an easy way to manually include artifacts from custom directories/packages (assuming this is the root cause)? My hunch is that this occurs only when hre.artifacts does not have ABIs for the contract log that is being traced.

joshpwrk commented 2 years ago

Error 1: The trace is able to find the correct transaction, but for some reason it errors out when this line is reached:

const tracePromise = hre.network.provider.send("eth_getTransactionReceipt", [args.hash]);

Error:

<--- Last few GCs --->

[34873:0x7ff583600000]    26382 ms: Scavenge 4035.6 (4116.1) -> 4033.7 (4116.3) MB, 6.8 / 0.0 ms  (average mu = 0.640, current mu = 0.479) allocation failure 
[34873:0x7ff583600000]    26409 ms: Scavenge 4036.5 (4116.3) -> 4035.2 (4136.8) MB, 24.0 / 0.0 ms  (average mu = 0.640, current mu = 0.479) allocation failure 
[34873:0x7ff583600000]    30338 ms: Mark-sweep 4048.9 (4136.8) -> 4044.0 (4148.1) MB, 3914.3 / 0.0 ms  (average mu = 0.328, current mu = 0.024) allocation failure scavenge might not succeed

<--- JS stacktrace --->

FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
 1: 0x107bc8775 node::Abort() [/usr/local/bin/node]
 2: 0x107bc88f8 node::OnFatalError(char const*, char const*) [/usr/local/bin/node]
 3: 0x107d40707 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]
 4: 0x107d406a3 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]
 5: 0x107edf0e5 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/usr/local/bin/node]
 6: 0x107edd94c v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/usr/local/bin/node]
 7: 0x107eea2c0 v8::internal::Heap::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/usr/local/bin/node]
 8: 0x107eea341 v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/usr/local/bin/node]
 9: 0x107eb71b7 v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationType, v8::internal::AllocationOrigin) [/usr/local/bin/node]
10: 0x1082652ae v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [/usr/local/bin/node]
11: 0x108607359 Builtins_CEntry_Return1_DontSaveFPRegs_ArgvOnStack_NoBuiltinExit [/usr/local/bin/node]
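
For reference, the same two calls can be issued from a standalone Hardhat task; the sketch below is an assumption-laden reproduction (made-up task name, not hardhat-tracer's actual code). The receipt call itself is cheap; it is the raw debug trace behind a transaction with a deep call stack that grows very large.

// Sketch only: a standalone Hardhat task that fetches the receipt and then the raw debug trace.
// Task name and structure are assumptions, not hardhat-tracer's implementation.
import { task } from "hardhat/config";

task("trace-repro", "Fetch a receipt and its raw debug trace")
  .addPositionalParam("hash", "Transaction hash to trace")
  .setAction(async (args, hre) => {
    // The receipt call returns a small object.
    const receipt = await hre.network.provider.send(
      "eth_getTransactionReceipt",
      [args.hash]
    );
    console.log("status:", receipt?.status);

    // The struct-log trace of a transaction with a deep call stack can be enormous,
    // which is where the V8 heap gets exhausted.
    const trace = await hre.network.provider.send("debug_traceTransaction", [
      args.hash,
      {},
    ]);
    console.log("struct logs:", trace.structLogs.length);
  });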
joshpwrk commented 2 years ago

Error 2: Same conditions as above, but sometimes this error pops up instead of Error 1:

You are forking from block 1588, which has less than 31 confirmations, and will affect Hardhat Network's performance.
Please use block number 1558 or wait for the block to get 30 more confirmations.
Switched mainnet fork to block 1588

Error: read ECONNRESET
    at TCP.onStreamRead (node:internal/stream_base_commons:217:20) {
  errno: -54,
  code: 'ECONNRESET',
  syscall: 'read'
}
zemse commented 2 years ago

Error 1 is due to the fact that huge transactions generate insanely large debug traces, and the laptop/computer may simply not have enough memory for them. I've opened a PR in an attempt to fix this problem: https://github.com/NomicFoundation/hardhat/pull/2545. In the meantime, you may either roll back to 1.0.0-alpha.6 for testing purposes, or, if you want to make this new version work, you can give Hardhat more memory: node --max-old-space-size=8192 ./node_modules/hardhat/internal/cli/cli.js test (here 8 GB is provided; the default is around 4 GB, and you may need to provide even more if the transaction is gigantic enough).

For Error 2, are you passing a local Hardhat node URL in --rpc? Also, I think ECONNRESET appears when the node is not available and the request cannot reach the URL.

If your external contracts are not in hre.artifacts, the plugin won't be able to parse their data and will just display it raw (1.0.0-alpha.6 had a bug where it didn't display events it didn't recognize). hardhat-dependency-compiler might help with getting the dependencies into hre.artifacts, but it needs .sol files; if a package only ships .json files, I don't see a nice way yet to include that in hre.artifacts. Will look into this.
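
For reference, a minimal hardhat.config.ts sketch of the hardhat-dependency-compiler approach mentioned above (the Solidity version and the dependency path are placeholders, not taken from this thread):

import { HardhatUserConfig } from "hardhat/config";
import "hardhat-dependency-compiler";

const config: HardhatUserConfig = {
  solidity: "0.8.19",
  // Each entry is a node_modules-relative .sol path; the plugin compiles it so the
  // artifact ends up in hre.artifacts and the tracer can decode its calls/events.
  dependencyCompiler: {
    paths: ["@openzeppelin/contracts/token/ERC20/ERC20.sol"],
  },
};

export default config;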

joshpwrk commented 2 years ago

Will try your suggestion on Error 1, thank u ser.

For Error 2: In short, I'm building an npm package that can deploy a bunch of pre-compiled contracts locally so integrators can deploy our market without needing a testnet.

So the order of operations is:

Tbh, even having the raw data displayed would be helpful, since maybe I could somehow inject the artifact manually in our npm package (or use hardhat-dependency-compiler as you suggested); I just need to find a way around Error 2.
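
For what it's worth, a sketch of the pre-compiled deployment flow described above, deploying from a JSON artifact bundled in the package via ethers (the file name Market.json and its { abi, bytecode } shape are hypothetical placeholders):

// Sketch: deploy a pre-compiled contract from a JSON artifact shipped in the npm package.
import { ethers } from "hardhat";
import marketArtifact from "./artifacts/Market.json"; // hypothetical bundled artifact

async function deployMarket() {
  const [deployer] = await ethers.getSigners();
  // Build a factory directly from the ABI and bytecode, no local compilation needed.
  const factory = new ethers.ContractFactory(
    marketArtifact.abi,
    marketArtifact.bytecode,
    deployer
  );
  const market = await factory.deploy();
  await market.deployed();
  console.log("Market deployed at", market.address);
  return market;
}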

Again, appreciate the awesome support!

joshpwrk commented 2 years ago

Confirming that increasing the Node memory size helps; however, like you mentioned, the debug trace is so large that even 8 GB is not enough. Will voice some support for your PR.

3commascapital commented 2 years ago

I have seen this issue as well. It could explain this oddly large gas usage: https://github.com/NomicFoundation/hardhat/issues/2672#issuecomment-1138182826
@joshpwrk, when/if you ever get those transactions to not fail miserably, do you find that the gas usage is excessively high?

zemse commented 1 year ago

This issue is resolved in a rewrite I've been working on for a while. Please check out the v2-beta branch.

npm i hardhat-tracer@beta
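
For anyone landing here later, a minimal setup sketch for the v2 usage (the import-for-side-effects pattern and the --trace flag on the test task follow the README; treat the details as assumptions if they have since changed):

// hardhat.config.ts sketch
import { HardhatUserConfig } from "hardhat/config";
import "hardhat-tracer"; // registers the tracer's tasks and flags

const config: HardhatUserConfig = {
  solidity: "0.8.19", // placeholder version
};

export default config;

// then, for example:  npx hardhat test --trace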
zemse commented 1 year ago

V2 has been released; it does not use debug_traceTransaction (debug_tt) and hence does not have this out-of-memory issue. You can look at the release info.

(closing, but feel free to reopen if you face any problems).