aarshkshah1992 opened 5 months ago
cc @Stebalien @rvagg @raulk
Some initial thoughts:

There are `EventIndex == nil` branches in the code. Right now, you can run without the events db but still subscribe to both the eth filter APIs and `SubscribeActorEventsRaw` as long as you don't ask for anything historical in your filter. I don't know if anyone uses this; I did see @ribasushi was trying it out a couple of months ago. But it is theoretically a nice option to not have more filesystem cruft if you just want latest info. Do we rule that out as an option? Or re-architect this flow such that it still does its thing through one place but has the option to not do it through an sqlite db?

Oh, and further to that last point, we really do need GC on this thing if we're going to make it a necessary add-on. Maybe then it becomes much less of an issue for people to turn on. If it GCs in time with splitstore and you only ever have less than a week's worth of events then there's less to be concerned about. I have a 33G events.db that's been collecting since nv22. Those who have been running it since FEVM must have much larger databases.
@rvagg:

For the `EventIndex == nil` part, I'd be surprised if there's a valid use case for anything other than the `Subscribe` API in that case. Again, we can talk to users to figure this out, but we can support that easily by giving the filters a `Publisher`, and that `Publisher` can be the Index DB if enabled or the `ChainNotify` stream if disabled.

@rvagg Any thoughts on how to implement those periodic consistency checks? In my mind, it can be as simple as: "My head is now epoch E -> so E - 900 is now final -> fetch the messages for it from the state store -> match it with what we have in the event Index -> raise alarm if mismatch". This can be a go-routine in the indexer itself.
@Stebalien:

> "I want this to work with the native APIs with minimal hackiness, so I'm not a fan of "interposing" the events sub-system between subscription to, e.g., chain notify events and the client"

Wdym here? Please can you elaborate a bit? Which hackiness are you referring to? I am saying that the native ETH RPC Event APIs should subscribe to the Index DB stream to listen in for updates and forward them to the client (querying the DB if needed).

IMO, the events subsystem still needs some way to "block" on a `GetLogs` call. With this design, this is just a matter of seeing an update event from the Index DB whose height is greater than the `maxHeight` requested by the `GetLogs` call.
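That blocking step could be a small wait on the index's update stream. Here is a minimal sketch under assumed types; `indexUpdate` and the channel-based stream are hypothetical stand-ins, not existing lotus types:

```go
package main

import "fmt"

// indexUpdate is a hypothetical notification published by the event index
// after each tipset it applies.
type indexUpdate struct {
	Height int64
}

// waitForHeight blocks until the index reports an update at or beyond
// maxHeight, mirroring how a GetLogs call could "block" on the index stream
// instead of on ChainNotify directly.
func waitForHeight(updates <-chan indexUpdate, maxHeight int64) indexUpdate {
	for u := range updates {
		if u.Height >= maxHeight {
			return u
		}
	}
	// Stream closed before reaching maxHeight; signal with a sentinel.
	return indexUpdate{Height: -1}
}

func main() {
	ch := make(chan indexUpdate, 3)
	ch <- indexUpdate{Height: 10}
	ch <- indexUpdate{Height: 11}
	ch <- indexUpdate{Height: 12}
	close(ch)
	u := waitForHeight(ch, 12)
	fmt.Println(u.Height) // 12
}
```

Once `waitForHeight` returns, the handler knows the DB contains everything up to `maxHeight` and can run its query against the index.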
Checking at head-900 is what I was thinking, make big noise in the logs if there's a consistency problem. It's the kind of thing we could even remove in the future if it helps us build confidence.
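The head-900 check described above might look something like this. The `eventSource` callbacks are hypothetical stand-ins for fetching events from the state store and from the sqlite index; in the real indexer this would run periodically (e.g. on a ticker) in its own goroutine:

```go
package main

import (
	"fmt"
	"reflect"
)

// eventSource is a hypothetical accessor returning the events for an epoch,
// either from the chain/state store or from the sqlite event index.
type eventSource func(epoch int64) []string

const finality = 900 // epochs

// checkConsistency compares events at the newly-final epoch (head - finality)
// between the state store and the index, and reports any mismatch so the
// caller can make big noise in the logs.
func checkConsistency(head int64, fromChain, fromIndex eventSource) (ok bool, msg string) {
	final := head - finality
	if final < 0 {
		return true, "" // chain too young to have a final epoch yet
	}
	want := fromChain(final)
	got := fromIndex(final)
	if !reflect.DeepEqual(want, got) {
		return false, fmt.Sprintf("event index mismatch at epoch %d: chain=%v index=%v", final, want, got)
	}
	return true, ""
}

func main() {
	chain := func(e int64) []string { return []string{"evt-a", "evt-b"} }
	goodIdx := func(e int64) []string { return []string{"evt-a", "evt-b"} }
	badIdx := func(e int64) []string { return []string{"evt-a"} }

	ok, _ := checkConsistency(1000, chain, goodIdx)
	fmt.Println(ok)
	ok, msg := checkConsistency(1000, chain, badIdx)
	fmt.Println(ok, msg)
}
```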
> But it is theoretically a nice option to not have more filesystem cruft if you just want latest info. Do we rule that out as an option? Or re-architect this flow such that it still does its thing through one place but has the option to not do it through an sqlite db?
We would use both options in production if they were available. We oftentimes use two completely different code bases to track the head of a chain vs. return historical results. We then use reverse proxies and some chain-aware logic to route the requests. Similarly, supporting offline historical block processing is super useful.
Dropping in some links of relevant docs so I have somewhere to record this:
Mostly interesting for the references to backfilling, because I think the way we currently do it is a bit broken. lotus-shed backfills directly into the sqlite db file that is being used read/write by the node, which lotus-shed has to get message data from to do the backfills. We need some way of ensuring a single writer to the db: either by building backfilling into lotus itself; or having some switch you can flick to make this work; or documenting a process (turn indexing off while you do it, turn it on when you're done [which isn't even a great answer if you think through the racy nature of that]).
Thoughts on what to do on startup to auto-backfill missing events:

- Have a `MaxBackfillEpochsCheck` option, defaulting to `900`, but it could be any number if you want to make it go right back.
- Take a write-lock while reconciling (just on `Apply`? I don't think there's a reason to block reads for this, is there?).
- Find where the index diverges from the current head (using `ChainGetPath` to figure out the tipsets between) and process all of those tipsets, reverting as you walk back to the current canonical chain and applying as you move forward to the head.
a. One complication here is that if the distance between the two tipsets is really large it could block for a long time, is that OK?
b. Do we even need a write-lock? New writes from the chain should be idempotent I think, unless there are new reorgs that happen while we are trying to reconcile.
c. Perhaps we delay starting the chain observer until after this is all done, so we don't even need a write lock but only start the observer when we are ready for it.
e. What do we need in the way of read locks? `IsHeightPast`, `IsTipsetProcessed` and `GetMaxHeightInIndex` are all a concern here and maybe we need them to return something different during initial reconciliation? Could just say no for both of the `Is*` calls, and `GetMaxHeightInIndex` could return the height-1 of the oldest tipset in `ChainGetPath`?
f. We could punt on this and just accept that correctness concerns are part of lotus startup, and then fix this at a later date so as not to complicate initial work.

Then walk back `MaxBackfillEpochsCheck` epochs, filling in any holes we find, checking whether we processed each tipset or not (in the `events_seen` table). Some judicious logging would be good for this too: "took 15 minutes to check 100,000 (`MaxBackfillEpochsCheck`) epochs for events auto-backfill, backfilled 0 of them; consider adjusting `MaxBackfillEpochsCheck` to a smaller number" (e.g. if the user set it large and we are doing useless checking on each startup because they got it done the first time).
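The hole-check walk could be sketched as below. The `seen` and `backfill` callbacks are hypothetical stand-ins for an `events_seen` lookup and the actual re-indexing work:

```go
package main

import "fmt"

// fillHoles is a sketch of the startup hole-check: walk back maxCheck epochs
// from head and backfill any epoch that isn't marked as processed.
// It returns the number of epochs it had to backfill, which is exactly the
// number the suggested log line would report.
func fillHoles(head, maxCheck int64, seen func(int64) bool, backfill func(int64)) int {
	backfilled := 0
	for e := head; e > head-maxCheck && e >= 0; e-- {
		if !seen(e) {
			backfill(e)
			backfilled++
		}
	}
	return backfilled
}

func main() {
	// Pretend only even epochs made it into events_seen.
	seen := func(e int64) bool { return e%2 == 0 }
	n := fillHoles(10, 5, seen, func(e int64) { fmt.Println("backfilling epoch", e) })
	fmt.Printf("checked 5 epochs, backfilled %d of them\n", n)
}
```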
An alternative approach to achieve "when you first do this, go back a loooong way, but on subsequent startups only go back to finality" might be to have both a `MaxEpochsCheck` (small, default to finality) and a `MaxInitialBackfillEpochs` (defaults to finality too but could be very large). Then when we first open the db, if it doesn't exist, we use `MaxInitialBackfillEpochs` to walk back. That way, you have a simple workflow: `rm sqlite/events.db` if you want to start from scratch and use `MaxInitialBackfillEpochs` to determine how far.
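In code terms, the two-knob idea might look roughly like this; the struct is illustrative, not the real lotus config:

```go
package main

import "fmt"

// EventIndexConfig sketches the two proposed knobs: a small per-startup
// check window, plus a potentially huge initial backfill window used only
// when the db is created from scratch (e.g. after `rm sqlite/events.db`).
type EventIndexConfig struct {
	MaxEpochsCheck           int64 // default: finality (900)
	MaxInitialBackfillEpochs int64 // default: finality, but can be very large
}

// walkBackEpochs picks how far back to go on startup, depending on whether
// the db already existed when we opened it.
func walkBackEpochs(cfg EventIndexConfig, dbExisted bool) int64 {
	if !dbExisted {
		return cfg.MaxInitialBackfillEpochs
	}
	return cfg.MaxEpochsCheck
}

func main() {
	cfg := EventIndexConfig{MaxEpochsCheck: 900, MaxInitialBackfillEpochs: 1_000_000}
	fmt.Println(walkBackEpochs(cfg, false)) // fresh db: go way back
	fmt.Println(walkBackEpochs(cfg, true))  // subsequent startups: finality only
}
```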
Also to watch out for: using splitstore you're going to run into the problem of not having state at some epoch; we have to handle the case where the `Max*` you asked for isn't possible to fill. This should probably just result in a warning log and not be fatal.
Checklist

Lotus component
What is the motivation behind this feature request? Is your feature request related to a problem? Please describe.

The current Chain Notify <> Event Index <> Event Filter Management <> ETH RPC Events API pipeline is racy, causes missed events, is hard to reason about, and has known problems such as lack of automated backfilling, returning empty events for tipsets on the canonical chain even though the tipset has events, not using the event Index as the source of truth, etc. This issue proposes a new architecture/flow for event indexing and filtering to fix all of the above.
Describe the solution you'd like

- Since clients call `EthGetLogs`, `EthGetFilterChanges` and `EthGetFilterLogs` anyways, I'd posit that we can really simplify and streamline things if these, along with the `EthSubscribe` API, use the event Index as the source of truth.
- The filter buffers only matter for the `EthGetFilterLogs` and `EthGetFilterChanges` APIs that are supposed to return the events for a given filter since the last time it was polled. The `EthSubscribe` and `EthGetLogs` APIs don't even need these buffers.
- For the `EthGetFilterLogs` and `EthGetFilterChanges` APIs, the way it works is that when the filter is first created, we "prefill" it with matching events from the Index DB and then rely on the buffer getting updated with events from tipset updates sent by `ChainNotify`. Every time the client polls these APIs, we return what we have in the buffer -> empty out the buffer -> start again (again relying solely on tipset updates -> the event index is no longer in the picture).

I'd suggest the following refactor to fix all of the above and make this code easy to reason about:
The Event Index DB becomes the source of truth (effectively addressing https://github.com/filecoin-project/lotus/issues/11830 )
Tipset updates (applies and reverts) coming from `ChainNotify` get written to a channel that the Event Index consumes from and applies event updates to the Index DB linearly, one at a time (to not race between applies and reverts) -> there is no dual write to the filter buffers
Every update applied to the Event Index DB has a monotonically increasing ID
For a tipset with no events, the Event Index is updated with an empty event entry field ("I've seen this tipset but it has no events")
The Event Index allows subscription to an event stream containing the updates it makes to the DB
On subscribing to this stream ("index stream"), a client gets the latest update made to the Index DB immediately, and from there on, every subsequent Index update is published to the subscriber
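The "latest update on subscribe, then every subsequent one" semantics can be sketched as follows; all types here are illustrative, not existing lotus code:

```go
package main

import (
	"fmt"
	"sync"
)

// update is a hypothetical record of one write applied to the index DB.
type update struct {
	ID     int64 // monotonically increasing
	Height int64
}

// stream implements the proposed "index stream": a new subscriber first
// receives the latest update already applied, then every subsequent one.
type stream struct {
	mu     sync.Mutex
	latest *update
	subs   []chan update
}

func (s *stream) Subscribe() <-chan update {
	s.mu.Lock()
	defer s.mu.Unlock()
	ch := make(chan update, 16) // buffered so Publish never blocks in this sketch
	if s.latest != nil {
		ch <- *s.latest // deliver the latest DB update immediately
	}
	s.subs = append(s.subs, ch)
	return ch
}

func (s *stream) Publish(u update) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.latest = &u
	for _, ch := range s.subs {
		ch <- u
	}
}

func main() {
	s := &stream{}
	s.Publish(update{ID: 1, Height: 100})
	sub := s.Subscribe()
	fmt.Println((<-sub).ID) // latest update delivered immediately: 1
	s.Publish(update{ID: 2, Height: 101})
	fmt.Println((<-sub).ID) // 2
}
```

A real implementation would need to handle slow subscribers (the fixed buffer here would block `Publish` if it fills), but the subscription semantics are the point of the sketch.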
Now here's how all the ETH RPC APIs work:
`EthSubscribe` -> subscribes to the "index stream" -> forwards these updates to the RPC client's channel
`EthGetLogs` -> subscribes to the "index stream" -> keeps consuming till it sees an update containing the `maxHeight` requested by the user -> queries the DB for events matching the client's filter -> sends out the results
`EthGetFilterLogs` and `EthGetFilterChanges`
When the Index boots up, it looks at its own "highest non-reverted tipset" -> looks at the current head of the chain as per `ChainNotify` -> fills in all the missing updates in between ("automated backfilling") and only then processes tipset updates and "index stream" subscriptions.
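The single-consumer apply loop proposed above (one channel fed by `ChainNotify`, linear writes, monotonically increasing IDs) might be sketched as follows; the types are illustrative:

```go
package main

import "fmt"

// tipsetUpdate is a hypothetical apply/revert notification from ChainNotify.
type tipsetUpdate struct {
	Height int64
	Revert bool
}

// applyLoop is a sketch of the proposed single consumer: tipset applies and
// reverts arrive on one channel and are written to the index one at a time,
// so applies and reverts cannot race. Each write gets a monotonically
// increasing ID. It returns the last ID assigned.
func applyLoop(in <-chan tipsetUpdate, writeToDB func(id int64, u tipsetUpdate)) int64 {
	var nextID int64
	for u := range in {
		nextID++
		writeToDB(nextID, u)
	}
	return nextID
}

func main() {
	in := make(chan tipsetUpdate, 3)
	in <- tipsetUpdate{Height: 100}
	in <- tipsetUpdate{Height: 100, Revert: true} // reorg: revert then re-apply
	in <- tipsetUpdate{Height: 100}
	close(in)
	var ids []int64
	last := applyLoop(in, func(id int64, u tipsetUpdate) { ids = append(ids, id) })
	fmt.Println(last, ids) // 3 [1 2 3]
}
```

Because a single goroutine owns all writes, the "index stream" can publish each update (with its ID) right after the corresponding DB write commits, which is what makes the DB the source of truth for every subscriber.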