In theory, it should be possible for message handlers that only require `&self` to run in parallel. If an actor receives multiple messages in a row that only require `&self`, they could all be processed in parallel until a message requiring `&mut self` is received.
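A minimal sketch of how this might look, assuming a Tokio runtime and an actor wrapped in a `tokio::sync::RwLock` (all of the names here, like `Message`, `CounterActor`, and `run_actor`, are hypothetical and purely for illustration): read-only handlers take `&self` behind a read lock and can be spawned concurrently, while a mutating handler takes the write lock and waits for in-flight readers to drain.

```rust
// Sketch only: assumes tokio = { version = "1", features = ["full"] }.
use std::sync::Arc;
use std::time::Duration;
use tokio::sync::{mpsc, RwLock};

// Hypothetical message type: some messages only need `&self`, others `&mut self`.
enum Message {
    ReadOnly(String),
    Mutating(String),
}

struct CounterActor {
    count: u64,
}

impl CounterActor {
    // Read-only handler: could run concurrently with other read-only handlers.
    async fn handle_read(&self, msg: String) {
        println!("read-only handler got {msg:?}, count = {}", self.count);
    }

    // Mutating handler: needs exclusive access to the actor.
    async fn handle_write(&mut self, msg: String) {
        self.count += 1;
        println!("mutating handler got {msg:?}, count = {}", self.count);
    }
}

async fn run_actor(actor: Arc<RwLock<CounterActor>>, mut rx: mpsc::Receiver<Message>) {
    while let Some(msg) = rx.recv().await {
        match msg {
            // Read-only messages are spawned under a read lock, so consecutive
            // ones can overlap. A real implementation would track these tasks.
            Message::ReadOnly(body) => {
                let actor = Arc::clone(&actor);
                tokio::spawn(async move {
                    actor.read().await.handle_read(body).await;
                });
            }
            // A mutating message takes the write lock, which waits for any
            // in-flight read-only handlers to finish first.
            Message::Mutating(body) => {
                actor.write().await.handle_write(body).await;
            }
        }
    }
}

#[tokio::main]
async fn main() {
    let (tx, rx) = mpsc::channel(8);
    let actor = Arc::new(RwLock::new(CounterActor { count: 0 }));
    let driver = tokio::spawn(run_actor(actor, rx));

    tx.send(Message::ReadOnly("a".into())).await.unwrap();
    tx.send(Message::ReadOnly("b".into())).await.unwrap();
    tx.send(Message::Mutating("c".into())).await.unwrap();
    drop(tx); // closing the channel lets run_actor return

    driver.await.unwrap();
    // Crude: give the spawned read-only handlers a chance to finish printing.
    tokio::time::sleep(Duration::from_millis(50)).await;
}
```

Even in this naive form the trade-off is visible: every message pays for a lock acquisition, including on actors whose handlers all take `&mut self`.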
While this seems like a potentially nice optimization, there are also issues that would need to be addressed:
In practice, how often do message handlers not perform any mutations on the actor? It seems like the only case where that would come up is message handlers that only broadcast other messages. While this is not a completely uncommon case, it may not be a core enough use case to justify the complexity/overhead of the parallel processing.
Would adding this functionality introduce additional overhead for cases that don't use it? I imagine we would implement this using something like an `RwLock`, which would mean locking the actor even if all of its message handlers take `&mut self`. If there's overhead here (and there almost certainly is), it either needs to be justified by the benefits of parallel processing or needs to be opt-out (or opt-in). Either way, we'll have to determine whether parallel processing is the default behavior or is opt-in.
Does this limit our ability to support `!Sync` actors? I assume we can at least provide a way to differentiate between `Sync` and `!Sync` actors, and have a different stage type for `!Sync` actors that doesn't do parallel processing, as in the sketch below.
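One hypothetical way to encode that split in the type system (none of these names come from an existing crate): a parallel stage whose actor must be `Sync`, and a sequential stage that only requires `Send`, so a `!Sync` actor can only ever be run sequentially.

```rust
// Sketch only; none of these type names come from an existing crate.
#![allow(dead_code)]

use std::cell::Cell;
use std::sync::Arc;
use tokio::sync::RwLock;

// A stage that does parallel processing can require `Sync` up front...
struct ParallelStage<A: Send + Sync + 'static> {
    actor: Arc<RwLock<A>>,
}

// ...while a purely sequential stage only needs `Send`.
struct SequentialStage<A: Send + 'static> {
    actor: A,
}

struct SyncActor {
    count: u64,
}

struct NotSyncActor {
    scratch: Cell<u64>, // `Cell` makes this type `!Sync`
}

fn main() {
    // Fine: `SyncActor` is `Sync`, so it can use the parallel stage.
    let _parallel = ParallelStage {
        actor: Arc::new(RwLock::new(SyncActor { count: 0 })),
    };

    // A `!Sync` actor still works, but only with the sequential stage.
    let _sequential = SequentialStage {
        actor: NotSyncActor { scratch: Cell::new(0) },
    };

    // This does not compile, because `NotSyncActor` is `!Sync`:
    // let _bad = ParallelStage {
    //     actor: Arc::new(RwLock::new(NotSyncActor { scratch: Cell::new(0) })),
    // };
}
```

The commented-out construction fails to compile, which is the kind of compile-time guarantee a separate stage type could give us.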
Would the parallel processing behavior be potentially confusing if users assume messages will always be processed in the order they're received? For example, in my Mahjong test project, the `ClientController` actor has a message handler for sending updates to the client. The initial implementation assumes that messages will be sent in the order they occur, but if they were processed in parallel they could potentially be sent out of order. Out-of-order processing isn't necessarily an issue, but users need to be aware of it.
How do we manage the thread pool used for parallel processing? If we maintain our own, does it interfere with any thread pool used by the underlying runtime? And can we share the pool between all running actors? Ideally we should defer to the underlying runtime, but I'm not sure it's possible to determine if the active runtime is a thread pool or single-threaded.
I think the thing to do here is build out a prototype of the functionality so that we can evaluate how well it works. I'm also going to keep an eye out for cases where this would be beneficial in practice so that we can have some more concrete use cases to drive discussion.