:warning: The Truffle Suite is being sunset. For information on ongoing support, migration options and FAQs, visit the Consensys blog. Thank you for all the support over the years.
I spent some time exploring @davidmurdoch 's comment here.
> There are potentially a lot of awaits in this loop that could be parallelized. I suspect that parallelizing them will lead to a massive speed up when requesting blocks on a forked network. I think it is worth the effort to look into the performance possibilities here.
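A minimal sketch of the parallelization being suggested, assuming a per-block request function (here a hypothetical `fetchBlock`, standing in for whatever call the fork handler actually makes) rather than Ganache's real internals:

```javascript
// Sequential: each iteration waits for the previous request to finish,
// so N blocks cost N round trips in series.
async function fetchBlocksSequential(fetchBlock, blockNumbers) {
  const blocks = [];
  for (const n of blockNumbers) {
    blocks.push(await fetchBlock(n));
  }
  return blocks;
}

// Parallel: all requests are issued up front and awaited together,
// so total latency is roughly that of the slowest single request.
async function fetchBlocksParallel(fetchBlock, blockNumbers) {
  return Promise.all(blockNumbers.map((n) => fetchBlock(n)));
}
```

Both functions return blocks in the same order; only the request scheduling differs.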
When a large range of blocks or a high number of transactions is involved, forked responses can take a long time. Although we can process requests concurrently, fallback mode currently relies on a socket connection that is only reasonably consistent.
A better approach would be to batch transaction receipt requests, or to prefer the HTTP handler over the socket handler.
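As a rough sketch of the batching idea, the receipt lookups could be combined into a single JSON-RPC 2.0 batch payload (one request body, one round trip) instead of one socket round trip per transaction. The helper name `buildReceiptBatch` is hypothetical; sending the payload over an HTTP handler is left to the caller:

```javascript
// Build a JSON-RPC 2.0 batch request for eth_getTransactionReceipt,
// one entry per transaction hash, with sequential ids so responses
// can be matched back to their requests.
function buildReceiptBatch(txHashes) {
  return txHashes.map((hash, i) => ({
    jsonrpc: "2.0",
    id: i,
    method: "eth_getTransactionReceipt",
    params: [hash],
  }));
}
```

A compliant node returns an array of responses, which may arrive in any order, hence the `id` field for correlation.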
Also, this issue is currently affected by two open socket issues: #3476 and #3477.