smol-rs / async-broadcast

Async broadcast channels
Apache License 2.0

Feature request: unbounded channel #69

Closed ChocolateLoverRaj closed 1 week ago

ChocolateLoverRaj commented 1 week ago

There are currently no async unbounded broadcast channels for Rust, which means I have to use a bounded channel (which this crate and Tokio provide). But it would be great to have a channel which dynamically uses more memory as needed, instead of having to reserve a certain amount of memory initially.

zeenix commented 1 week ago

I've never seen an unbounded broadcast channel API, and I think the reason is that it's hard to implement and a bad idea. In a broadcast channel, every receiver receives every message, which means we need to maintain a queue of all pending messages that have not yet been received by all receivers. The queue has to be bounded, because otherwise it can easily cause an OOM if any of the receivers is not listening (i.e. polling) within a specific interval.

In any case, what's the exact use case (as in the need rather than want)?

Keep in mind that async-broadcast supports an "overflow mode" and inactive receivers, in addition to the default behaviour where the sender awaits for capacity when the queue is full.
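
For illustration, here's a minimal sketch of those two options, assuming the API names from the current async-broadcast docs (`broadcast()`, `set_overflow()`, `deactivate()`, `RecvError::Overflowed`) and using futures-lite's `block_on` just to have an executor; double-check the names against the version you're using:

```rust
use async_broadcast::{broadcast, RecvError};
use futures_lite::future::block_on;

fn main() {
    block_on(async {
        // Bounded channel with room for 2 queued messages.
        let (mut s, mut r1) = broadcast::<i32>(2);

        // Overflow mode: when the queue is full, the oldest queued message
        // is dropped instead of the sender awaiting for capacity.
        s.set_overflow(true);

        // An inactive receiver keeps the channel open without forcing
        // messages to be retained for it, so it can't cause a pile-up.
        let r2 = r1.clone().deactivate();

        s.broadcast(1).await.unwrap();
        s.broadcast(2).await.unwrap();
        s.broadcast(3).await.unwrap(); // queue was full: message 1 is dropped

        // The lagging receiver is told how many messages it missed,
        // then continues from the oldest message still queued.
        assert!(matches!(r1.recv().await, Err(RecvError::Overflowed(1))));
        assert_eq!(r1.recv().await.unwrap(), 2);
        assert_eq!(r1.recv().await.unwrap(), 3);

        // Re-activate later to start receiving messages sent from then on.
        let mut r2 = r2.activate();
        s.broadcast(4).await.unwrap();
        assert_eq!(r2.recv().await.unwrap(), 4);
    });
}
```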

ChocolateLoverRaj commented 1 week ago

In any case, what's the exact use case (as in the need rather than want)?

In my use case I have an async loop that is watching the state of a button on a microcontroller and broadcasting a message every time there is a change. Then each Web Socket handler receives the event and sends a message through the Web Socket. I won't need unlimited web sockets, but I'm not sure how many I'll need at most.
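
A simplified sketch of that setup, for context (`ButtonEvent`, `read_button_state` and `send_over_websocket` are placeholders for the real types, not APIs from any particular crate):

```rust
use async_broadcast::{Receiver, Sender};

#[derive(Clone, Debug)]
struct ButtonEvent {
    pressed: bool,
}

// Placeholder stubs for the hardware and socket sides of the pipeline.
async fn read_button_state() -> bool { false }
async fn send_over_websocket(_event: ButtonEvent) {}

// One task watches the button and broadcasts every state change.
// (Change detection/debouncing is omitted here.)
async fn watch_button(sender: Sender<ButtonEvent>) {
    loop {
        let pressed = read_button_state().await;
        // In the default mode this awaits if the queue is full; with
        // overflow enabled it would drop the oldest unreceived event instead.
        if sender.broadcast(ButtonEvent { pressed }).await.is_err() {
            break; // channel closed: no receivers left
        }
    }
}

// Each Web Socket connection gets its own cloned receiver and sees every event.
async fn handle_socket(mut events: Receiver<ButtonEvent>) {
    while let Ok(event) = events.recv().await {
        send_over_websocket(event).await;
    }
}
```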

zeenix commented 1 week ago

In my use case I have an async loop that is watching the state of a button on a microcontroller and broadcasting a message every time there is a change. Then each Web Socket handler receives the event and sends a message through the Web Socket. I won't need unlimited web sockets, but I'm not sure how many I'll need at most.

From the description, I don't see why you need to know how many elements will get queued (unless there's something more to it that you didn't share). If you pick a capacity that's too low, the only issue you can run into is the sender having to wait too often when the receivers aren't fast enough. Using a good async runtime should help.

Also, you can monitor for any slowness in the pipeline and increase the capacity accordingly. Keep in mind that the capacity is dynamic, so you can adjust it after channel creation as well.
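
For example, something along these lines (a minimal sketch, assuming `Sender::capacity()` and `Sender::set_capacity()` as in the current docs):

```rust
use async_broadcast::broadcast;

fn main() {
    // Start with a small, cheap capacity...
    let (mut s, _r) = broadcast::<u32>(16);
    assert_eq!(s.capacity(), 16);

    // ...and grow it later if the sender turns out to wait too often.
    // len()/is_full() can be checked to observe pressure on the queue.
    s.set_capacity(64);
    assert_eq!(s.capacity(), 64);
}
```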