whoabuddy opened this issue 2 years ago
Using a single-threaded service/process, store all events in a single table. Then use another process, or set of processes, to read these events, extract data from them, and create snapshots of the data in whatever structures you need.
This approach is called event sourcing and allows you to re-process all events and rebuild new snapshots (a.k.a. projections) at will.
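As a rough illustration of the pattern (hypothetical names, not tied to any particular store): events are appended once and never mutated, and a projection is rebuilt by replaying them in order.

```ts
// Hypothetical shapes for illustration only; the real event type would come
// from the chainhook payload and the store could be any append-only table.
interface StoredEvent {
  id: string;
  receivedAt: number;
  payload: unknown;
}

// Rebuild a projection (snapshot) by replaying every stored event in order.
// If the projection shape changes later, drop the snapshot and replay again.
function buildProjection<T>(
  events: StoredEvent[],
  reduce: (snapshot: T, event: StoredEvent) => T,
  initial: T
): T {
  return events
    .slice()
    .sort((a, b) => a.receivedAt - b.receivedAt)
    .reduce(reduce, initial);
}
```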
@aulneau we talked about storing the events for chainhooks before sending them to be processed. Did you have a template or high-level outline on how this could be done with Next.js?
Now that the bitcoin-faces-frontend repo is set up with Next.js, we can revisit this as part of the implementation.
- `receiver`: an API endpoint that accepts and stores chainhook events
- `events`: an API endpoint that returns specific events
- `queue`: an API endpoint that returns events by status
- `update-queue`: an API endpoint that allows modifying the status of a stored chainhook event
Types for chainhooks are defined in the clarinet repo and we'll want to align with those as much as possible to keep it simple.
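Until we pull in the actual types from clarinet, a hypothetical wrapper for what we store could look like this (field names are placeholders; `payload` stays `unknown` so the real chainhook types can drop in later):

```ts
// Hypothetical stored record; `payload` would be replaced by the chainhook
// event types defined in the clarinet repo once we align with them.
type EventStatus = "pending" | "processing" | "processed" | "failed";

interface StoredChainhookEvent {
  id: string;
  receivedAt: string; // ISO timestamp
  status: EventStatus; // drives the queue / update-queue endpoints
  payload: unknown;    // raw chainhook payload, typed later
}
```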
Open to any input and feedback!
My preference is to lean on Cloudflare KV for backend storage, but hosting there requires setting `runtime: "experimental-edge"` in the config, and I'm not sure how that will work with our project.
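For reference, opting an API route into the edge runtime looks roughly like this in the pages router (a sketch based on the Next.js experimental edge runtime docs); whether all of our routes can run under those constraints is the open question:

```ts
// pages/api/receiver.ts (edge variant) -- edge routes use the Web
// Request/Response APIs instead of NextApiRequest/NextApiResponse.
export const config = {
  runtime: "experimental-edge",
};

export default async function handler(req: Request): Promise<Response> {
  const payload = await req.json();
  // ...store the payload, e.g. in a KV namespace bound to the deployment...
  return new Response(JSON.stringify({ ok: true }), {
    headers: { "content-type": "application/json" },
  });
}
```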
This beta of Cloudflare Queues looks like something cool to test against this idea!
It supports defining workers as producers (store events) and consumers (process events), as well as connections between workers.
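A rough sketch of how that producer/consumer pairing could look in a Worker (the `CHAINHOOK_QUEUE` binding name is a placeholder; the actual queue bindings are configured in `wrangler.toml` under `[[queues.producers]]` and `[[queues.consumers]]`):

```ts
// Producer + consumer in one Worker, using types from @cloudflare/workers-types.
export interface Env {
  CHAINHOOK_QUEUE: Queue; // placeholder binding name
}

export default {
  // Producer: the webhook receiver enqueues the raw event and returns quickly.
  async fetch(request: Request, env: Env): Promise<Response> {
    const payload = await request.json();
    await env.CHAINHOOK_QUEUE.send(payload);
    return new Response("queued", { status: 202 });
  },

  // Consumer: Cloudflare invokes this with batches pulled from the queue;
  // throwing here causes the batch to be retried.
  async queue(batch: MessageBatch<unknown>, env: Env): Promise<void> {
    for (const message of batch.messages) {
      // ...process the chainhook event, e.g. update a projection...
      console.log("processing event", message.body);
    }
  },
};
```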
Chainhooks fire off a webhook that is interpreted by the handler.
Having durability at this layer is key; we don't want a missed event resulting in an unprocessed transaction.
In addition, knowing what data to index will be important. Will port over more notes from the tech doc.