dashbitco / broadway_kafka

A Broadway connector for Kafka

Add kafka lag query #147

Closed GregorGrasselli closed 4 months ago

GregorGrasselli commented 4 months ago

Proposal to resolve #145. I'm not doing this through telemetry because I could not see a good trigger for it; we would probably prefer calling it explicitly on a timer.

I'm not sure extending the client behaviour for this is ideal, though.

josevalim commented 4 months ago

Thank you @GregorGrasselli. Now that I think more about it, I wonder if it would be better to expose the client_id/group_id instead, via a function called fetch_client_and_group_ids? This way you can implement this functionality in any way. Overall, it is probably best if we avoid adding APIs that query Kafka directly from this project (since it is out of scope).

GregorGrasselli commented 4 months ago

We already have the group ID, since we set it in the config, so we would only need to expose the client ID. The downside is that the client ID is specific to brod, which means brod as a dependency would leak into the API.

josevalim commented 4 months ago

We could have fetch_client_id, which returns the client type and its reference, so it would return {:brod, client_id}. But yes, in this case it means you start adding :brod-specific logic to your app, though it also means you can use all of brod.
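To illustrate the idea, a lag check built on such a function might look roughly like the sketch below. Note that `fetch_client_id/1`, the `{:brod, client_id}` tuple shape, and the Broadway process name are all assumptions from this discussion, not an existing BroadwayKafka API, and the exact shape of brod's committed-offsets response varies by version, so parsing it is elided:

```elixir
defmodule MyApp.KafkaLag do
  # Sketch only: assumes a hypothetical BroadwayKafka.fetch_client_id/1
  # that returns {:brod, client_id}, as proposed above.
  @group_id "my_group"
  @endpoints [{"localhost", 9092}]

  def lag(topic, partition) do
    {:brod, client} = BroadwayKafka.fetch_client_id(MyApp.Broadway)

    # Latest offset in the partition, resolved against the brokers.
    {:ok, latest} = :brod.resolve_offset(@endpoints, topic, partition, :latest)

    # Committed offsets for the consumer group. The response is a list of
    # kafka-protocol structs whose shape depends on the brod version, so
    # walking it for the topic/partition of interest is left out here.
    {:ok, committed} = :brod.fetch_committed_offsets(client, @group_id)
    committed_offset = extract_offset(committed, topic, partition)

    latest - committed_offset
  end

  # Placeholder: walk the response structs for the given topic/partition.
  defp extract_offset(_committed, _topic, _partition), do: 0
end
```

The trade-off discussed above shows up directly: the caller pattern-matches on `{:brod, client}`, so brod becomes an explicit dependency of the application code.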

GregorGrasselli commented 4 months ago

OK, I'll make the PR.

slashmili commented 4 months ago

I also think fetch_kafka_lag shouldn't be part of this project.

What we do is monitor the lag via Prometheus (or via AWS), since our Kafka is exposing metrics.

> The down side of this is that the client id is specific to brod

That's correct, however that's the cost of asking for lags. BTW, why do you want to reuse the same connection to ask for lags? I'd keep the consumer connection only for consuming data and create a separate connection for maintenance-related tasks.

GregorGrasselli commented 4 months ago

Thank you guys. We will do it as @slashmili suggested.