redis / lettuce

Advanced Java Redis client for thread-safe sync, async, and reactive usage. Supports Cluster, Sentinel, Pipelining, and codecs.
https://lettuce.io
MIT License

Expose request queue and end-to-end command latency metrics #1797

Open hktechn0 opened 3 years ago

hktechn0 commented 3 years ago

Feature Request

Lettuce already has command latency metrics, but it only tracks command latency on the Redis node: https://lettuce.io/core/snapshot/reference/#command.latency.metrics.micrometer
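
For reference, this is roughly how those existing metrics are wired up (a minimal sketch assuming Lettuce 6.1+ with micrometer-core on the classpath; as I understand it, the built-in `lettuce.command.firstresponse` and `lettuce.command.completion` timers are measured from the point the command is written to the connection, not from the caller's invocation):

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.metrics.MicrometerCommandLatencyRecorder;
import io.lettuce.core.metrics.MicrometerOptions;
import io.lettuce.core.resource.ClientResources;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class CommandLatencyMetrics {

    public static void main(String[] args) {
        MeterRegistry meterRegistry = new SimpleMeterRegistry();

        // Register the Micrometer-based latency recorder on the client resources.
        ClientResources resources = ClientResources.builder()
                .commandLatencyRecorder(
                        new MicrometerCommandLatencyRecorder(meterRegistry, MicrometerOptions.create()))
                .build();

        RedisClient client = RedisClient.create(resources, "redis://localhost");
        // Commands executed through this client now feed the
        // lettuce.command.firstresponse and lettuce.command.completion timers.
    }
}
```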

Lettuce should expose more useful metrics about the request queue and end-to-end latency.

Is your feature request related to a problem? Please describe

In our environment, slow command execution appears to be caused by time spent in the Lettuce request queue, so I'm trying to observe the actual Redis command latency from the Lettuce client. The existing command metrics are useful, but they don't show the latency actually observed by the caller of the Lettuce client. For example, we can't observe slow execution of queued commands during a failover using the command metrics.

Describe the solution you'd like

Expose request queue metrics using Micrometer
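
To make the request concrete, one hypothetical shape for such metrics could be a queue-depth gauge plus a queue-wait timer. None of the meter names below exist in Lettuce today; they only illustrate the idea:

```java
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

import java.time.Duration;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class RequestQueueMetricsSketch {

    public static void main(String[] args) {
        MeterRegistry registry = new SimpleMeterRegistry();

        // Stand-in for the client's internal queue of commands that have been
        // issued but not yet written to the Redis connection.
        Queue<Object> requestQueue = new ConcurrentLinkedQueue<>();

        // Hypothetical meter: current number of queued commands.
        Gauge.builder("lettuce.command.queue.size", requestQueue, Queue::size)
                .description("Commands queued but not yet written to the connection")
                .register(registry);

        // Hypothetical meter: time a command spends queued before being written.
        Timer queueWait = Timer.builder("lettuce.command.queue.wait")
                .description("Time spent in the request queue before the command is written")
                .register(registry);

        queueWait.record(Duration.ofMillis(3)); // example observation
    }
}
```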

Describe alternatives you've considered

I was trying to implement a wrapper around RedisCommands<K, V> to observe the actual command metrics. However, Lettuce has many variants of the RedisCommands interface (async, reactive, cluster, ...), so I would prefer native support for such metrics in the Lettuce client.
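
For anyone who needs this in the meantime, the wrapper approach can be sketched with a JDK dynamic proxy around the sync RedisCommands interface. This is a rough sketch, not part of Lettuce; the meter name redis.command.caller.latency is made up, and each command-interface variant (async, reactive, cluster) would need its own wrapper:

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

import java.lang.reflect.Proxy;
import java.util.concurrent.TimeUnit;

public class CallerLatencyWrapper {

    @SuppressWarnings("unchecked")
    static RedisCommands<String, String> timed(RedisCommands<String, String> delegate, MeterRegistry registry) {
        return (RedisCommands<String, String>) Proxy.newProxyInstance(
                RedisCommands.class.getClassLoader(),
                new Class<?>[] { RedisCommands.class },
                (proxy, method, args) -> {
                    long start = System.nanoTime();
                    try {
                        // Note: exceptions surface as InvocationTargetException and
                        // would need unwrapping in a real implementation.
                        return method.invoke(delegate, args);
                    } finally {
                        // Caller-observed latency, including time the command spent
                        // queued inside the client, not just time on the Redis node.
                        registry.timer("redis.command.caller.latency", "command", method.getName())
                                .record(System.nanoTime() - start, TimeUnit.NANOSECONDS);
                    }
                });
    }

    public static void main(String[] args) {
        MeterRegistry registry = new SimpleMeterRegistry();
        RedisClient client = RedisClient.create("redis://localhost");

        try (StatefulRedisConnection<String, String> connection = client.connect()) {
            RedisCommands<String, String> commands = timed(connection.sync(), registry);
            commands.set("key", "value");
            commands.get("key");
        } finally {
            client.shutdown();
        }
    }
}
```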


mp911de commented 3 years ago

We start collecting metrics inside the I/O components (CommandHandler) so that latency code does not spread across the entire codebase. What you're asking for is capturing the time at which the command was actually created (that is, when a command method is called on the command interface). Since we only capture metrics on a per-command basis, introducing additional metrics brings a certain complexity.
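
As a rough illustration of what capturing the creation time could mean (purely hypothetical, not the current CommandHandler design): the command would carry its creation timestamp, and an end-to-end timer would be recorded when the command completes, alongside the existing per-command metrics:

```java
import io.micrometer.core.instrument.MeterRegistry;

import java.util.concurrent.TimeUnit;

// Hypothetical sketch only; Lettuce's command objects do not expose this today.
final class EndToEndTimedCommand {

    private final String name;
    private final long createdAt = System.nanoTime(); // taken when the command method is invoked

    EndToEndTimedCommand(String name) {
        this.name = name;
    }

    // Called once the command completes (response decoded or command failed).
    void recordEndToEnd(MeterRegistry registry) {
        registry.timer("lettuce.command.e2e", "command", name)
                .record(System.nanoTime() - createdAt, TimeUnit.NANOSECONDS);
    }
}
```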

Right now, I don't have the bandwidth for that kind of implementation, but feel free to explore the code and eventually come up with a pull request that we can discuss.