twmb / franz-go

franz-go contains a feature-complete, pure Go library for interacting with Kafka from 0.8.0 through 3.7+. Producing, consuming, transacting, administrating, etc.

Cache metadata more #800

Status: Open · twmb opened this issue 1 month ago

twmb commented 1 month ago

https://github.com/grafana/mimir/pull/8886#issuecomment-2265802337

@pracucci to reply to your latest message in the thread -- if we strengthen the caching within franz-go itself, then it addresses caching the metadata request you mention:

""" The pt1 of the PR description refers to the Metadata request issued to discover the partitions: https://github.com/twmb/franz-go/blob/6b61d17383b1e0c766143118efa39f1f320a3b2b/pkg/kadm/metadata.go#L432-L436 """

My proposal covers caching the request quoted above, at which point the only extra caching your PR would provide is the extra 5 seconds (your PR uses 15s caching, while elsewhere in Mimir you already use a 10s MetadataMinAge).
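For context, a minimal sketch of the status quo being discussed here: the kgo client already rate-limits its own internal metadata refreshes via MetadataMinAge, but a kadm helper such as ListEndOffsets issues its own Metadata request to discover partitions (the request linked above), and today that request is not served from the client's internal cache. The broker address and topic name below are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/twmb/franz-go/pkg/kadm"
	"github.com/twmb/franz-go/pkg/kgo"
)

func main() {
	// Placeholder broker and topic, for illustration only.
	cl, err := kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"),
		// Floor between the client's internal metadata refreshes
		// (the 10s value Mimir uses elsewhere).
		kgo.MetadataMinAge(10*time.Second),
	)
	if err != nil {
		panic(err)
	}
	defer cl.Close()

	adm := kadm.NewClient(cl)

	// ListEndOffsets issues its own Metadata request to discover partitions;
	// today that request bypasses the client's internal metadata cache,
	// which is what this issue proposes to change.
	offsets, err := adm.ListEndOffsets(context.Background(), "example-topic")
	if err != nil {
		panic(err)
	}
	offsets.Each(func(o kadm.ListedOffset) {
		fmt.Println(o.Topic, o.Partition, o.Offset)
	})
}
```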

pracucci commented 1 month ago

From the original comment, you proposed 3 options:

  • Always use the mapped metadata cache for user issued Metadata requests
  • Introduce a new API, RequestCachedMetadata(req *kmsg.MetadataRequest, expiry time.Duration) (*kmsg.MetadataResponse, error). If the metadata exists and is cached for less than expiry, return it. If not, issue the metadata request (and force the response through the cache); a rough sketch of this option follows the list
  • Introduce a new API, UseCache(context.Context) context.Context that can be used to query any internal cache when manually issuing a metadata request
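To make the second option concrete, here is a rough sketch of how it might look from a caller's side. RequestCachedMetadata does not exist in franz-go today; this is only the proposed signature restated as code, and the 15s expiry and topic name are arbitrary.

```go
package example

import (
	"time"

	"github.com/twmb/franz-go/pkg/kgo"
	"github.com/twmb/franz-go/pkg/kmsg"
)

// Sketch only: RequestCachedMetadata is the proposed API from this thread
// and is not implemented in franz-go.
func cachedMetadata(cl *kgo.Client, topic string) (*kmsg.MetadataResponse, error) {
	req := kmsg.NewPtrMetadataRequest()
	t := kmsg.NewMetadataRequestTopic()
	t.Topic = &topic
	req.Topics = append(req.Topics, t)

	// If the client holds metadata for this topic that is younger than the
	// expiry, return it from the cache; otherwise issue the request and
	// feed the response through the cache.
	return cl.RequestCachedMetadata(req, 15*time.Second) // proposed API
}
```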

My vote is for the first or second bullet point. I think making these changes would avoid the need for this PR, but I'm not positive (I haven't looked too closely).

As a user I would prefer to be in control of whether to use the cache or not. I guess options 1 and 2 wouldn't allow such control for a high-level request like ListOffsets(), but, on the other hand, I have no idea about option 3's complexity (which is also your least preferred one).
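As a sketch of that control question, assuming option 3 landed as some UseCache helper (neither the exact name, package, nor receiver is decided in this thread), a caller could opt in per request, including for a high-level kadm call whose internal Metadata lookup would then be allowed to hit the cache:

```go
package example

import (
	"context"

	"github.com/twmb/franz-go/pkg/kadm"
	"github.com/twmb/franz-go/pkg/kgo"
)

// Sketch only: UseCache is the proposed option 3 and does not exist in
// franz-go; its exact home (package function vs. client method) is open.
func endOffsets(cl *kgo.Client, topic string) error {
	adm := kadm.NewClient(cl)

	// Opt in: the helper's internal Metadata request may be served from
	// the client's cache.
	cachedCtx := kgo.UseCache(context.Background()) // proposed API
	if _, err := adm.ListEndOffsets(cachedCtx, topic); err != nil {
		return err
	}

	// Default: behave as today and always issue a fresh Metadata request.
	_, err := adm.ListEndOffsets(context.Background(), topic)
	return err
}
```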