Open jsdw opened 8 months ago
More detail on how I'd look to handle this offhand (all in `follow_stream_driver.rs`):
- Add a `max_queued_messages: usize` to `SharedState`.
- Have `SubscriberDetails.items` be eg `enum QueuedMessages { Overflowed, Messages(VecDeque) }`. If we try to push more messages than `max_queued_messages`, then set it to `Overflowed`.
- Have `impl<Hash: BlockHash> Stream for FollowStreamDriverSubscription<Hash>`'s `Item` be a `Result` so we can return an error. If the items are `QueuedMessages::Overflowed`, return the error and mark the stream as done.
- Check the `max_queued_messages` prop at the same time as asking for items, and if `local_messages.len() + new_items.len() > max_queued_messages`, then mark as done and return the same error as above.
- Because we return a `Result` now, we'll need to modify the `UnstableBackend` impl to accommodate this (hopefully should be straightforward!).
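The queue part of the plan above could be sketched roughly like this. Only the `QueuedMessages` / `Overflowed` names, the `max_queued_messages` limit, and the overflow rule come from the steps above; the message type, the `()` error, and the standalone methods are placeholder assumptions standing in for the real `Stream` impl:

```rust
use std::collections::VecDeque;

// Sketch of the proposed subscription queue state. In the real code this
// would live in `SubscriberDetails.items` inside `follow_stream_driver.rs`.
#[derive(Debug, PartialEq)]
enum QueuedMessages<T> {
    // Too many messages were queued; the subscription should be marked
    // as done and an error handed back to the consumer.
    Overflowed,
    Messages(VecDeque<T>),
}

impl<T> QueuedMessages<T> {
    fn new() -> Self {
        QueuedMessages::Messages(VecDeque::new())
    }

    // Push a message, flipping to `Overflowed` if doing so would exceed
    // `max_queued_messages`.
    fn push(&mut self, msg: T, max_queued_messages: usize) {
        match self {
            QueuedMessages::Overflowed => {}
            QueuedMessages::Messages(q) => {
                if q.len() >= max_queued_messages {
                    *self = QueuedMessages::Overflowed;
                } else {
                    q.push_back(msg);
                }
            }
        }
    }

    // Pop the next item. `Err(())` stands in for the "subscription
    // overflowed" error that the real `Stream` impl would yield before
    // marking itself as done.
    fn pop(&mut self) -> Option<Result<T, ()>> {
        match self {
            QueuedMessages::Overflowed => Some(Err(())),
            QueuedMessages::Messages(q) => q.pop_front().map(Ok),
        }
    }
}

fn main() {
    let mut q = QueuedMessages::new();
    for n in 0..5 {
        // Capacity of 3: the 4th push flips the queue to `Overflowed`.
        q.push(n, 3);
    }
    // The consumer now sees the overflow error instead of messages.
    assert_eq!(q.pop(), Some(Err(())));
}
```

The idea is that an unpolled subscription can no longer grow without bound: once it overflows, its buffered messages are dropped and the consumer gets a single error on the next poll.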
Hi @jsdw, I’m a first-time contributor and interested in getting started with the project. I’ve reviewed the issue and would like to take it on. Could you please assign it to me?
This is an issue raised by the auditors.
Simply put, if a user is using the `UnstableBackend`, then there are `Backend` calls which create `FollowStreamDriverSubscription`s. These subscriptions contain a queue of all of the un-consumed events received from the chain. When the `FollowStreamDriver` is polled (which would often happen in the background), it will continue to receive events from the backend and add them to the queues of any active subscriptions. So, if these subscriptions aren't polled, they will store an ever-growing list of events waiting to be consumed through polling.

The user is expected to poll the `FollowStreamDriver` (actually the `UnstableBackendDriver`, which is the thing they get back when creating an `UnstableBackend`, and which contains it) more slowly if they are struggling to keep up; this enforces backpressure and slows down the rate at which events are obtained from the chain.

To help bound memory usage a little better, we could also consider adding a configuration option to `UnstableBackendBuilder` like `fn max_event_buffer_per_subscription(self, size: usize) -> Self` to bound the number of events that can be queued up on any given subscription before it's shut down and cleaned up. We could also consider setting an arbitrary default, like 1024 events, to give breathing room but prevent unlimited growth when nothing is being polled except the `UnstableBackendDriver`.