Closed: kejace closed this issue 6 years ago
So it seems to me we have two ways to go about this: 1) hope geth fixes this issue (which they likely will at some point) and that something similar doesn't come up again, or 2) implement some sort of failure-tolerant abstraction either on top of or within ps-web3.
I'm personally inclined towards option 2, but then again I imagine ps-web3 would like to stay as close to web3.js as possible, in which case such failures should be handled by application logic (i.e., some layer on top of it). I'd propose creating some sort of fault-tolerant abstraction over the base web3 logic, perhaps something more conduit `Source`-like? I may be talking out of my ass here (I haven't delved much into ps-web3), but it occurs to me that the interface to `runWeb3` almost suggests it should have that role. I think we'd benefit greatly in terms of adding some kind of checkpointing of blocks/events processed, round-robining against providers, etc. Think Apache Storm's reliable Spout concept, just without all the distributed stream processing stuff.
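To make the checkpointing/round-robin idea concrete, here is a rough TypeScript sketch of what such a layer could do. Every name here (`Provider`, `CheckpointedFetcher`, `fetchThrough`) is mine, purely illustrative, and not ps-web3 or web3.js API:

```typescript
// Hypothetical sketch: rotate over multiple providers with per-range
// checkpointing, so a transient node failure retries against the next
// provider and the checkpoint only advances after a successful fetch.

type Provider = {
  getLogs: (fromBlock: number, toBlock: number) => Promise<string[]>;
};

class CheckpointedFetcher {
  private next = 0;          // index of the provider to try first
  public checkpoint: number; // last block successfully processed

  constructor(private providers: Provider[], startBlock: number) {
    this.checkpoint = startBlock;
  }

  // Try each provider in turn; on success, advance the checkpoint.
  async fetchThrough(toBlock: number): Promise<string[]> {
    let lastErr: unknown;
    for (let i = 0; i < this.providers.length; i++) {
      const p = this.providers[(this.next + i) % this.providers.length];
      try {
        const logs = await p.getLogs(this.checkpoint + 1, toBlock);
        this.next = (this.next + i + 1) % this.providers.length;
        this.checkpoint = toBlock; // only move forward after success
        return logs;
      } catch (e) {
        lastErr = e; // fall through to the next provider
      }
    }
    throw lastErr; // all providers failed; checkpoint is unchanged
  }
}
```

The key property is that a failed provider never advances the checkpoint, so a caller can simply retry the same range later, which is essentially the "reliable Spout" acking behavior.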
We temporarily solved this issue by running a parity node instead, which seems to work fine for now.
In addition, we identified an issue which was causing indexed events to not be parsed correctly.
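For context on the indexed-event class of bug: per the Solidity ABI, indexed parameters are encoded into the log's topics (`topics[0]` being the event signature hash) while non-indexed ones are packed into the data field, yet a decoder must reassemble them in declaration order. A minimal TypeScript illustration (this is not the ps-web3 decoder; real code should use a proper ABI library, and the fixed 32-byte-word handling below is a simplifying assumption):

```typescript
// Illustrative decoder: indexed params come from topics[1..],
// non-indexed params from consecutive 32-byte words of `data`,
// merged back in the order the event declares them.

interface Param { name: string; indexed: boolean }
interface Log { topics: string[]; data: string } // data: 0x-prefixed hex

function decodeLog(params: Param[], log: Log): Record<string, string> {
  let topicIdx = 1;   // topics[0] is the event signature hash
  let dataOffset = 2; // skip the "0x" prefix
  const out: Record<string, string> = {};
  for (const p of params) { // walk params in declaration order
    if (p.indexed) {
      out[p.name] = log.topics[topicIdx++];
    } else {
      out[p.name] = "0x" + log.data.slice(dataOffset, dataOffset + 64);
      dataOffset += 64;
    }
  }
  return out;
}
```

Mixing up the two sources, or losing the declaration order, is exactly the kind of thing that silently breaks event parsing.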
Option 2 is now a question of combining https://github.com/blinky3713/purescript-monadic-streams with standard(?) combinators. I created the issue here: https://github.com/f-o-a-m/purescript-web3/issues/29, and will close this one.
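The combinator framing means fault tolerance becomes a reusable stream transformer rather than ad hoc application logic. A hypothetical TypeScript sketch of the shape (using async generators as a stand-in for the PureScript stream type; `blockRange` and `withRetry` are invented names):

```typescript
// A source of block numbers to process.
async function* blockRange(from: number, to: number): AsyncGenerator<number> {
  for (let b = from; b <= to; b++) yield b;
}

// Retry combinator: re-invoke `fetch` up to `attempts` times per item,
// so a flaky node only fails the pipeline after repeated errors.
function withRetry<A, B>(fetch: (a: A) => Promise<B>, attempts: number) {
  return async function* (source: AsyncIterable<A>): AsyncGenerator<B> {
    for await (const a of source) {
      let lastErr: unknown;
      for (let i = 0; i < attempts; i++) {
        try {
          yield await fetch(a);
          lastErr = undefined;
          break;
        } catch (e) {
          lastErr = e; // retry this item
        }
      }
      if (lastErr !== undefined) throw lastErr;
    }
  };
}
```

In the monadic-streams version the same thing would presumably be a pipe/transformer composed with the event source.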
I'm gonna close this now that we have parity-proxy.foam w/ a full archive node.
We know that certain bugs (https://github.com/ethereum/go-ethereum/issues/15243) can prevent us from parsing events successfully. We have demonstrated that running a node on a more powerful server can at least temporarily mitigate the issue.
Considering that we will want to replay events from the main-net, we will have to figure out a robust solution to this. That solution will later serve us when we deploy on the main-net ourselves.
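Robust replay mostly comes down to persisting the checkpoint, so a crash or node failure resumes where it left off instead of re-reading the whole main-net history. A hypothetical TypeScript sketch (file-based storage and all names are my own assumptions, not ps-web3 API):

```typescript
import * as fs from "fs";

// Read the last committed block, falling back to `genesis` on first run.
function loadCheckpoint(path: string, genesis: number): number {
  try {
    return parseInt(fs.readFileSync(path, "utf8"), 10);
  } catch {
    return genesis; // no checkpoint yet: start from the beginning
  }
}

// Replay blocks (checkpoint, head], committing after each success,
// so the handler runs at most once per block across restarts.
async function replay(
  path: string,
  head: number,
  handle: (block: number) => Promise<void>
): Promise<void> {
  let at = loadCheckpoint(path, 0);
  while (at < head) {
    at++;
    await handle(at);                   // process the next block
    fs.writeFileSync(path, String(at)); // commit only after success
  }
}
```

Because the commit happens after the handler, a failure mid-block means that block is re-processed on restart, i.e. at-least-once delivery; deduplication would sit in the handler if exactly-once semantics are needed.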