Open jajaislanina opened 3 days ago
Hey @jajaislanina
There are a few configs you can play around with to reduce overhead to a minimum:
- `directiveDefaults` under `project.networkDefaults.directiveDefaults` to disable "retry" for pending txs or empty results
- `database.evmJsonRpcCache: ~` to disable caching (see the sketch below)
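Roughly, those two knobs would look something like this in `erpc.yaml` (a sketch only; the exact field names such as `retryEmpty`/`retryPending` and the project id are assumptions here, so double-check the docs for your version):

```yaml
# Sketch only: directive field names (retryEmpty, retryPending) are assumed;
# verify them against the eRPC docs for your version.
projects:
  - id: main                 # hypothetical project id
    networkDefaults:
      directiveDefaults:
        retryEmpty: false    # don't retry requests that come back with empty results
        retryPending: false  # don't retry requests for pending transactions

database:
  evmJsonRpcCache: ~         # YAML null, i.e. disable the JSON-RPC cache entirely
```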
I'm happy to pair with you to investigate further and get it to the lowest overhead possible (normally I don't expect more than ~5ms of overhead to be necessary when things are properly initialized). DM me at https://t.me/erpc_cloud :)
Hi erpc team,
I have a setup where I am running 2 full nodes and 2 archive nodes on Base Mainnet. Due to hardware limitations I need to use the full nodes to get the latest data, because they are faster to sync and less prone to lag induced by high TPS on the network. My main use case is quick discovery of new blocks and re-executing all transactions from each block. This results in ~4k-5k RPS to my nodes and is latency sensitive, as I need to fetch all state from the node before the next block is mined (~2 seconds on Base).
My goal is to make eRPC as fast as possible, while still being able to handle errors when my application falls outside the state range of the full nodes (128 blocks).
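For context, this is roughly the shape of the full/archive split I'm going for in the upstream config (a sketch with placeholder endpoints; the `evm.maxAvailableRecentBlocks` field name is my assumption from the docs, and my actual config is attached below):

```yaml
# Sketch only: endpoint URLs are placeholders and evm.maxAvailableRecentBlocks
# is an assumed field name; verify against the eRPC docs for your version.
projects:
  - id: main
    upstreams:
      - id: base-full-1
        endpoint: http://full-node-1:8545
        evm:
          chainId: 8453              # Base Mainnet
          # Full node only keeps state for the most recent ~128 blocks, so
          # older requests should be routed to the archive upstreams instead.
          maxAvailableRecentBlocks: 128
      - id: base-archive-1
        endpoint: http://archive-node-1:8545
        evm:
          chainId: 8453
```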
What I'm seeing is latency issues when using eRPC compared to going directly to the node. While some delay is expected, I am seeing 5x and more. The left graph shows the total average request duration for eth_getStorageAt, and the right graph shows the average duration of requests for the same method from eRPC to the node.
Left graph definition:
Right graph definition:
This is my erpc config
Any tips on how to tune the config to optimize for low latency?