Closed: dblia closed this pull request 4 years ago
During `rafka`'s shutdown sequence, we waited for consumers to close gracefully (using the manager `wg`). This is now done in a non-blocking way and might result in non-committed offsets or other unexpected behavior.
Other than that, great work :) I think that practice has shown that having a single consumer per client is enough, with the added benefit of a simpler overall design.
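The blocking behavior being asked for can be sketched in Go. This is a minimal illustration, not rafka's actual code: the names and structure are assumptions, and an atomic counter stands in for the offset commit each consumer performs on close.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// shutdownGracefully starts n consumer goroutines, signals them to stop, and
// blocks on the WaitGroup until each one has finished its final commit. It
// returns how many consumers committed; this equals n only because Wait()
// blocks, whereas a fire-and-forget shutdown could return before they do.
func shutdownGracefully(n int) int {
	var (
		wg        sync.WaitGroup
		committed int64
		stop      = make(chan struct{})
	)

	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			<-stop                         // wait for the shutdown signal
			atomic.AddInt64(&committed, 1) // stand-in for committing offsets
		}()
	}

	close(stop) // begin the shutdown sequence
	wg.Wait()   // block until every consumer has closed gracefully
	return int(atomic.LoadInt64(&committed))
}

func main() {
	fmt.Println(shutdownGracefully(3)) // prints 3: no commit is lost
}
```

Dropping the `wg.Wait()` is exactly the non-blocking variant the comment warns about: the process may exit while consumers still hold uncommitted offsets.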
> During `rafka`'s shutdown sequence, we waited for consumers to close gracefully (using the manager `wg`). This is now done in a non-blocking way and might result in non-committed offsets or other unexpected behavior.
Indeed, the Server's shutdown flow was modified when it shouldn't have been. I'll restore the original behavior shortly. Nice catch, thank you!
There's an unhandled case in the `server: Revisit shutdown flow` commit, which will be addressed shortly.
Squashed all commits and force-pushed for a last look.
:rocket: :clap: :tada:
This PR implements the "Redefine Rafka scope" and "Drop redundant functionality" proposals of the Rafka Rethinking design doc. For more details on the topic, please refer to those references.
A lot of major changes are introduced by this PR, which are detailed in the respective commit messages. The most important ones are that the `Client` struct becomes aware of the server's `Context` as well as the Rafka configuration file. Since `Client` becomes the authoritative handler of `Consumer` objects, those two extra fields are essential to properly register and de-register a `Consumer` instance via the `Client` object.

Moreover, a `consumer` field is added to `Client`, which is a reference to the registered `Consumer` object. Also, a new function named `registerConsumer` is introduced in the `client` module, which will be used to create a new `Consumer` instance upon request. The `Consumer` struct is extended with a new field containing the cancel context of its parent (i.e. the `Client`), which will allow us to signal a `Consumer` instance about the Client's termination; a task which was until now performed by the `ConsumerManager` handler.