open-telemetry / opentelemetry-java

OpenTelemetry Java SDK
https://opentelemetry.io
Apache License 2.0
2.01k stars 834 forks

Optimisation and benchmarking of backing data structure for exponential histogram #3848

Closed jamesmoessis closed 1 year ago

jamesmoessis commented 3 years ago

For the exponential histogram, we decided to start with the simplest possible implementation for the backing data structure, and then beat it from there. Some context here: https://github.com/open-telemetry/opentelemetry-java/pull/3724#discussion_r726722725. This is the ticket to beat it.

This ticket is to track the work for the optimisation, testing, and benchmarking of these data structures. There are notable reference implementations such as NrSketch and DynaHist that the author may draw from.

To implement the backing data structure, the author should implement ExponentialCounter.
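For orientation, here is a hedged sketch of what the `ExponentialCounter` contract and the simple map-backed baseline might look like. The names mirror the SDK internals mentioned above, but the exact signatures and the `maxSize`/rescale behavior are assumptions for illustration (the real baseline also carried thread-safety machinery not shown here):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the counter contract: a sparse array of bucket
// counts addressed by (possibly negative) bucket index.
interface ExponentialCounter {
  boolean increment(int index, long delta); // false if index won't fit in maxSize
  long get(int index);
  int getIndexStart();
  int getIndexEnd();
}

// Simplest possible baseline: a HashMap from bucket index to count.
class MapCounter implements ExponentialCounter {
  private final Map<Integer, Long> counts = new HashMap<>();
  private final int maxSize;
  private int indexStart = Integer.MIN_VALUE;
  private int indexEnd = Integer.MIN_VALUE;

  MapCounter(int maxSize) {
    this.maxSize = maxSize;
  }

  @Override
  public boolean increment(int index, long delta) {
    if (counts.isEmpty()) {
      indexStart = index;
      indexEnd = index;
    } else {
      int newStart = Math.min(indexStart, index);
      int newEnd = Math.max(indexEnd, index);
      if (newEnd - newStart + 1 > maxSize) {
        return false; // caller must rescale (downscale) before retrying
      }
      indexStart = newStart;
      indexEnd = newEnd;
    }
    counts.merge(index, delta, Long::sum);
    return true;
  }

  @Override
  public long get(int index) {
    return counts.getOrDefault(index, 0L);
  }

  @Override
  public int getIndexStart() {
    return indexStart;
  }

  @Override
  public int getIndexEnd() {
    return indexEnd;
  }
}
```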

jamesmoessis commented 2 years ago

benchmarking PR: https://github.com/open-telemetry/opentelemetry-java/pull/3986

jsuereth commented 2 years ago

A few things I found when benchmarking and investigating the initial prototype:

jsuereth commented 2 years ago

Here's a quick comparison of using a Circular-Buffer (with preallocated array of integers) vs. MapCounter: https://github.com/open-telemetry/opentelemetry-java/compare/main...jsuereth:wip-exponential-counter-perf

It's a 2x performance differential. In practice, I think the number of measurements per histogram means we're better off pre-allocating a bounded array for buckets rather than using a map in almost all cases. Going to add benchmarks for the byte->short->int->long conversions once I have a chance to push some better-shaped input data. (Ideally we'd use incoming measurements from a live server we've recorded somewhere, but for now I'll just synthesize similarly shaped data with distributions matching what I see in histograms.)
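The circular-buffer idea above can be sketched as follows. This is an illustration, not the code in the linked branch: it maps logical bucket indices onto a preallocated `long[]` with a moving window start, and the window-growth rules are my assumptions:

```java
// Illustrative circular-buffer counter over a preallocated array.
// Logical bucket index i maps to physical slot
// (startPos + (i - baseIndex)) mod capacity.
class CircularBufferCounter {
  private final long[] backing;
  private int baseIndex = Integer.MIN_VALUE; // lowest bucket index seen
  private int endIndex;                      // highest bucket index seen
  private int startPos;                      // physical slot holding baseIndex

  CircularBufferCounter(int maxSize) {
    backing = new long[maxSize];
  }

  boolean increment(int index, long delta) {
    if (baseIndex == Integer.MIN_VALUE) {
      baseIndex = index;
      endIndex = index;
      startPos = 0;
      backing[0] += delta;
      return true;
    }
    if (index > endIndex) {
      if (index - baseIndex + 1 > backing.length) {
        return false; // window too wide; caller must rescale
      }
      endIndex = index;
    } else if (index < baseIndex) {
      if (endIndex - index + 1 > backing.length) {
        return false;
      }
      // Extend the window downwards by moving the physical start back.
      int shift = baseIndex - index;
      startPos = Math.floorMod(startPos - shift, backing.length);
      baseIndex = index;
    }
    backing[pos(index)] += delta;
    return true;
  }

  long get(int index) {
    if (baseIndex == Integer.MIN_VALUE || index < baseIndex || index > endIndex) {
      return 0;
    }
    return backing[pos(index)];
  }

  private int pos(int index) {
    return (startPos + (index - baseIndex)) % backing.length;
  }
}
```

The appeal versus the map is locality and zero per-increment allocation: every increment is an array index plus an add, with no boxing or hashing.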

jamesmoessis commented 2 years ago

@jsuereth

Yeah, the MapCounter has unnecessary safeguards for thread safety. Since it's purely behind a Handle, some of those can be taken away. In any case, I doubt the most efficient solution uses a map. As we can see from your benchmarks, an array seems more efficient. The map was just a baseline initial implementation that we knew would be inefficient.

I initially used the variable-sized counter plus circular array that NrSketch has in the aggregator, but took it out based on review feedback and to reduce the scope of the initial aggregation PR. I do have some doubts about whether the extra code to convert byte->short->int->long is worth it, though. It's a CPU/memory trade-off, I guess.

jsuereth commented 2 years ago

320*8*2 = 5120 bytes per histogram metric stream (320 buckets × 8-byte longs × 2 for the positive and negative ranges) vs. 320*1*2 = 640 bytes with byte-sized counters. It's a big memory differential for histograms...

But yeah, I saw the comments and understand the current state.

jamesmoessis commented 2 years ago

True, worst case the memory difference is quite high. It's actually twice as bad as that, since there are up to 320 positive buckets and 320 negative buckets. I imagine most users won't be using the maximum 320 buckets, so the memory difference won't ordinarily be the worst case you stated. But I do tend to agree that saving memory here is probably worth the CPU time to convert between types.
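The byte->short->int->long idea being weighed here can be sketched as an array that starts at the narrowest integer width and copies itself into a wider array only when a count overflows. This is a simplified illustration (only the byte -> long jump is shown; NrSketch also has short and int tiers), not the SDK's or NrSketch's actual code:

```java
// Illustrative variable-width counter array: store counts in a byte[] until
// some count overflows a byte, then widen to a long[] in one copy.
class AdaptiveCounterArray {
  private byte[] narrow;
  private long[] wide; // null until widening is triggered

  AdaptiveCounterArray(int size) {
    narrow = new byte[size];
  }

  void increment(int i, long delta) {
    if (wide != null) {
      wide[i] += delta;
      return;
    }
    long sum = narrow[i] + delta;
    if (sum >= Byte.MIN_VALUE && sum <= Byte.MAX_VALUE) {
      narrow[i] = (byte) sum; // still fits in a byte
    } else {
      widen();                // one-time copy to the wider representation
      wide[i] += delta;
    }
  }

  long get(int i) {
    return wide != null ? wide[i] : narrow[i];
  }

  // Backing-array footprint in bytes (ignoring object headers).
  long memoryBytes() {
    return wide != null ? 8L * wide.length : narrow.length;
  }

  private void widen() {
    wide = new long[narrow.length];
    for (int i = 0; i < narrow.length; i++) {
      wide[i] = narrow[i];
    }
    narrow = null;
  }
}
```

With 320 buckets this stays at 320 bytes per sign range until a bucket exceeds 127, matching the 640-vs-5120 arithmetic above; the CPU cost is the per-increment range check plus the occasional widening copy.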

jack-berg commented 1 year ago

I've done a fair amount of work on this recently, resulting in additional benchmarks (#4989 and #4912) and a variety of enhancements (#4700, #5023, #5020, #5047).

I'm going to close this. If there are additional specific enhancements to be made, we can open new issues to discuss and track.