Closed — jmartisk closed this issue 4 years ago
I'm not opposed to the idea of removing it -- it adds complexity to the programming model and people point to the inconsistency between how it's handled with annotations vs. non-annotation instantiation. Perhaps we could have some mechanism to be able to see if the same metric is used in more than one place without requiring that to be a programming model thing. Some ideas...
I like 2
A problem might be defining what "used in more than one place" means. How do you determine this? Theoretically we could inspect the stack trace when an update method is called, but that won't be very easy: you'd need to search through the stack trace to find the exact line where the metric is being updated from, and especially in environments with a lot of interceptors (CDI) and synthetic classes (Quarkus) it might be very difficult.
Perhaps we could, when this suggested config flag is enabled, look at the stack where each metric is updated, determine the class/method/line at which the update is occurring, and record that with the metricId. If at a future point a metric update occurs at a different class/method/line, then we could print a stack trace, or perhaps just the initial and current class/method/line, to stderr.
This checking would be expensive, so would only make sense to do when testing/debugging.
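As a rough illustration of what such a debug-mode check could look like, the sketch below records the first call site that updates each metric and flags any later update from a different site. Everything here (the `UpdateSiteTracker` class, keying metrics by a plain string) is hypothetical and not part of the MicroProfile Metrics API; a real implementation would also have to skip interceptor and synthetic frames, as noted below.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch, not part of the MicroProfile Metrics API.
// Records the class/method/line of the first update per metric id and
// reports any later update that comes from a different call site.
class UpdateSiteTracker {
    private static final Map<String, String> FIRST_SITE = new ConcurrentHashMap<>();

    // Returns true if this update comes from the same site as the first one
    // (or is the first update); false if the site differs.
    static boolean checkUpdateSite(String metricId) {
        String site = StackWalker.getInstance()
                .walk(frames -> frames
                        // Skip our own frames; a real implementation would also
                        // skip CDI interceptor frames and synthetic classes.
                        .filter(f -> !f.getClassName().equals(UpdateSiteTracker.class.getName()))
                        .findFirst())
                .map(f -> f.getClassName() + "#" + f.getMethodName() + ":" + f.getLineNumber())
                .orElse("unknown");
        String previous = FIRST_SITE.putIfAbsent(metricId, site);
        if (previous != null && !previous.equals(site)) {
            System.err.println("Metric '" + metricId + "' was first updated at "
                    + previous + " but is now updated at " + site);
            return false;
        }
        return true;
    }
}
```

Since every update would pay for a stack walk, this kind of check would indeed only make sense behind a testing/debugging flag.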
> in environments with a lot of interceptors (CDI) and synthetic classes (Quarkus) it might be very difficult
I think a human would be able to tell where in the stack the metric update is occurring, e.g. it might be n stack frames above the interceptor frame. We might need to experiment to see if this is even viable.
I realized that inspecting stack traces actually only makes sense for the programmatic approach, and for that we have been defaulting to reusable=true anyway, with no special tooling in place to detect different places of usage. So I don't think we need to introduce anything new there, especially when we're removing the concept of reusability in general.
And for annotated metrics the situation is easier: detecting that multiple elements declare metric annotations that effectively map to the same metric is quite easy. Right now it just leads to deployment failure; we could simply change that to logging a warning (in case these warnings are turned on). That would be a reasonable solution to this issue, I think.
+1 for removing it
> I realized that inspecting stack traces actually only makes sense for the programmatic approach
can you please say a bit more to help me understand?
> we could simply change that to logging a warning
I think that makes sense -- though given MP doesn't have a logging API, I'm not sure how we'd do that.
> I realized that inspecting stack traces actually only makes sense for the programmatic approach
>
> can you please say a bit more to help me understand?
I mean that for annotated metrics, you don't have any stack trace; there's just the annotated element, and in that case it's easy to detect that there are multiple annotated elements declaring the same metric. And for the programmatic approach, I think we have never cared about reusability, so nothing changes there: if an implementation decides to start detecting different usages of a metric by analyzing stack traces, it can do that, but there's no point in doing that at the spec level.
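To illustrate why the annotated case is easy, here is a self-contained sketch; the `@Counted` annotation below is a local stand-in, not the real `org.eclipse.microprofile.metrics.annotation.Counted`. The point is that finding two annotated elements that map to the same metric needs no stack traces, only a scan of the annotated members.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Self-contained sketch: @Counted here is a stand-in annotation, not the
// real MicroProfile Metrics one.
class DuplicateMetricScan {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Counted {
        String name();
    }

    // Two methods declare the same metric name: the kind of copy-paste
    // error the reusability checks were meant to catch.
    static class OrderService {
        @Counted(name = "orders")
        void placeOrder() {}

        @Counted(name = "orders")
        void cancelOrder() {}

        @Counted(name = "cancellations")
        void audit() {}
    }

    // Returns the metric names declared by more than one annotated method.
    static Set<String> duplicateMetricNames(Class<?> clazz) {
        Map<String, Integer> counts = new HashMap<>();
        for (Method m : clazz.getDeclaredMethods()) {
            Counted c = m.getAnnotation(Counted.class);
            if (c != null) {
                counts.merge(c.name(), 1, Integer::sum);
            }
        }
        Set<String> duplicates = new HashSet<>();
        counts.forEach((name, n) -> {
            if (n > 1) {
                duplicates.add(name);
            }
        });
        return duplicates;
    }
}
```

An implementation could run exactly this kind of scan at deployment time and either fail (today's behavior) or log a warning (the proposal above).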
> we could simply change that to logging a warning
>
> I think that makes sense -- though given MP doesn't have a logging API, I'm not sure how we'd do that.
That would be implementation specific and probably wouldn't be required at all.
ok, so I think the net of this is...
which I think means we are just talking about removing reusability as a concept without needing to add anything new to the spec, correct?
Yes, that's exactly what I have in mind. The only exception being gauges: these will remain non-reusable, and it will still be illegal to reference the same gauge from multiple annotations.
AFAIK the reason for introducing reusability was to prevent copy-paste errors with annotated classes/methods, but IMO it introduces more trouble than benefit. Especially because the `reusable` parameter needs to be part of the Metadata, so it is "leaked" to the programmatic API (when you build Metadata objects programmatically, you have the option to specify the `reusable` attribute even though it's not really applicable). Some aspects are confusing and quite useless, for example: what if the registry contains a reusable counter, and I retrieve it using `registry.counter(metadata)` where `metadata` contains `reusable=false`? Perhaps we could just make all metrics reusable always, with the exception of gauges? Micrometer AFAIU doesn't have this concept either and everything is reusable.
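To make that confusing case concrete, here is a toy model of the situation; `Metadata`, the registry map, and `counter(...)` are simplified stand-ins, not the real MicroProfile Metrics API.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model, not the MicroProfile Metrics API: the registry already holds
// a counter registered with reusable=true, and a caller then asks for the
// same name with reusable=false.
class ReusableAmbiguity {

    record Metadata(String name, boolean reusable) {}

    private static final Map<String, Metadata> REGISTRY = new HashMap<>();

    // One possible reading: a lookup with conflicting reusability fails.
    // Another reading would silently ignore the flag; that this is even a
    // question is part of the argument for dropping the flag entirely.
    static Metadata counter(Metadata requested) {
        Metadata existing = REGISTRY.putIfAbsent(requested.name(), requested);
        if (existing == null) {
            return requested;
        }
        if (existing.reusable() != requested.reusable()) {
            throw new IllegalArgumentException("Metric '" + requested.name()
                    + "' is already registered with reusable=" + existing.reusable());
        }
        return existing;
    }
}
```

With reusability removed (gauges excepted), the conflicting-flag branch simply disappears and every lookup for an existing name returns the existing metric.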