Currently, DB backend producers can scale out across many topics, but not within a single large topic that has many partitions. To preserve ordering, only a single thread can send messages for a given topic right now.
We should change this so that we save the `partition` of a message instead of a `partition_key` (using RubyKafka's `Partitioner` class to generate the partition). We should then add a configuration option to allow locking particular topics (or all topics) by partition instead of just by topic. (For small topics this would be overkill.)
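As a rough illustration of the partition computation, RubyKafka's default partitioner hashes the partition key with CRC32 and takes it modulo the topic's partition count. The helper below is a hypothetical sketch of that scheme, not Deimos or RubyKafka API; the method name and signature are invented for illustration:

```ruby
require "zlib"

# Hypothetical helper mirroring ruby-kafka's default partitioning
# scheme: CRC32 of the partition key, modulo the partition count.
# We would store the resulting integer on the saved message row
# instead of the raw partition_key string.
def partition_for_key(partition_key, partition_count)
  Zlib.crc32(partition_key) % partition_count
end
```

Because the function is deterministic, every message with the same key lands on the same partition, so per-partition locking preserves the same ordering guarantees that per-topic locking gives today.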
This will require a PR to Phobos to allow sending messages with a partition, which it currently does not support, as well as changing the save logic, the read logic, and the migration template.
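For the migration template change, the stored column would move from a string key to an integer partition. The snippet below is a hedged sketch of what such a migration might look like; the table and class names are assumptions, not the actual Deimos template:

```ruby
# Hypothetical migration sketch: replace the partition_key column
# with a precomputed integer partition on the messages table.
# Table/column names are illustrative only.
class AddPartitionToKafkaMessages < ActiveRecord::Migration[6.0]
  def change
    add_column :kafka_messages, :partition, :integer
    remove_column :kafka_messages, :partition_key, :string
  end
end
```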