Should we define the naming of the exposed metrics as part of this OFEP, or would this be covered in the follow-up spec change?
Right now there's no OpenFeature spec section defining semantic conventions. So if we see the need, we will need to introduce that first (and also include the span names we already have).
Another option is to define the metric names through this OFEP and adhere to them.
I think we need an additional dimension that represents the configuration of the feature flag being evaluated. The terminology varies depending on the implementation, but many have concepts like project, workspace, namespace, or app. Within that collection, there tend to be environment-specific configurations (e.g. dev, hardening, production). Depending on the provider, you would want to split impressions based on those concepts. Perhaps we could call it a `scope`, and it would be up to the provider to decide what the scope looks like. This currently isn't covered by the OTel spec but should be included if we can define what the property should be called.
Isn't this already covered by Evaluation Context [1], which is available through the hook context [2]? Probably the intended identifier maps to the targeting key [3] 🤔
If so, I think we can include this as a dimension of the `evaluationRequests` metric.
[1] - https://openfeature.dev/specification/sections/evaluation-context
[2] - https://openfeature.dev/specification/sections/hooks/#41-hook-context
[3] - https://openfeature.dev/specification/sections/evaluation-context/#requirement-311
Yes, capturing that information as flag metadata is possible but I'm wondering if we can figure out what we want to call that property.
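As a rough illustration of the idea above, here is a minimal Go sketch (using the OpenTelemetry metrics API) of a hook helper that counts evaluation requests and attaches a targeting-key dimension. The helper name and the `feature_flag.targeting_key` attribute are made up for this example, and the instrument name anticipates the naming proposed later in the thread.

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)

// recordEvaluationRequest is a hypothetical helper a metrics hook could call
// from its Before stage: it counts one evaluation request and attaches the
// flag key plus a targeting-key dimension taken from the evaluation context.
func recordEvaluationRequest(ctx context.Context, counter metric.Int64Counter, flagKey, targetingKey string) {
	counter.Add(ctx, 1, metric.WithAttributes(
		attribute.String("feature_flag.key", flagKey),                 // existing OTel semantic convention attribute
		attribute.String("feature_flag.targeting_key", targetingKey), // assumed attribute name, not part of any convention
	))
}

func main() {
	meter := otel.Meter("openfeature/metrics-hook")
	requests, _ := meter.Int64Counter("feature_flag.evaluation_requests_total")
	recordEvaluationRequest(context.Background(), requests, "new-checkout", "user-123")
}
```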
Nice, I already have some metrics here: https://github.com/Tapico/node-posthog-openfeature/blob/main/src/hooks/OpenTelemetryHook.ts. Works pretty well.
> Yes, capturing that information as flag metadata is possible but I'm wondering if we can figure out what we want to call that property.

`scope` is a good enough naming. We can also use something like `evaluation_context` or `targeting_key`.
Anyway, we should take privacy into account when using context data for metric dimensions, as these identifiers may include personal information such as user emails.
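A minimal sketch of one way a hook could address that privacy concern, assuming it hashes the targeting key before using it as a dimension (the helper name is hypothetical):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// anonymizeTargetingKey is a hypothetical helper: it hashes (and truncates) a
// targeting key before the value is attached as a metric dimension, so raw
// identifiers such as user emails never reach the metrics backend. Even a
// hashed value can still be high-cardinality, so a hook may prefer to drop
// the dimension entirely by default and make it opt-in.
func anonymizeTargetingKey(targetingKey string) string {
	sum := sha256.Sum256([]byte(targetingKey))
	return hex.EncodeToString(sum[:8]) // short, stable, pseudonymous label value
}

func main() {
	// Prints a short hex digest instead of the raw email address.
	fmt.Println(anonymizeTargetingKey("jane.doe@example.com"))
}
```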
> Nice, I already have some metrics here: https://github.com/Tapico/node-posthog-openfeature/blob/main/src/hooks/OpenTelemetryHook.ts. Works pretty well.

That's nice :) If OFEP goes well, we will soon have official OpenFeature metrics hook for JS
`scope` is something I think we may want to consider bringing into our glossary. I think we should loosely define it as a "namespace" for flags. It could be used to logically segment environment, project, tenant, etc., since it's very possible to have flag name collisions between these.
If we think it's not too far out of scope (no pun intended) for this OFEP, I'd like to agree on that here as well.
Agree with the explanation & +1 for `scope`. And as suggested in our offline discussion, this could be pushed from the provider through flag metadata [1].
This dimension can be optional, and if present it will benefit drill-downs when analyzing metrics (e.g. a project-scoped flag evaluation dashboard).
[1] - https://github.com/open-feature/spec/blob/main/specification/types.md#flag-metadata
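A minimal sketch of how a hook's success path could pick up that optional dimension, assuming the provider exposes it under a `scope` key in flag metadata (the key name, the helper, and the `feature_flag.scope` attribute are all assumptions, not existing API):

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)

// recordSuccess sketches the success path of a metrics hook. flagMetadata is
// assumed to be the provider-supplied flag metadata rendered as a plain map;
// the "scope" entry (name assumed) is the optional dimension discussed above.
func recordSuccess(ctx context.Context, counter metric.Int64Counter, flagKey string, flagMetadata map[string]any) {
	attrs := []attribute.KeyValue{attribute.String("feature_flag.key", flagKey)}
	if scope, ok := flagMetadata["scope"].(string); ok && scope != "" {
		// Only attach the dimension when the provider actually pushed a scope.
		attrs = append(attrs, attribute.String("feature_flag.scope", scope))
	}
	counter.Add(ctx, 1, metric.WithAttributes(attrs...))
}

func main() {
	meter := otel.Meter("openfeature/metrics-hook")
	success, _ := meter.Int64Counter("feature_flag.evaluation_success_total")
	recordSuccess(context.Background(), success, "new-checkout",
		map[string]any{"scope": "payments/production"})
}
```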
Sorry about the double comment, I'm not sure what happened. Just to add to the previous thought around naming, it may be worth looking at the OpenTelemetry Metric Semantic Convention to make sure this proposal could eventually make its way back to OTel.
@thisthat @beeme1mr I updated the metric names proposed through this OFEP. They now have a `feature_flag` prefix, similar to the semantic conventions for feature flags [1], and meaningful suffixes.
- feature_flag.evaluation_active_count
- feature_flag.evaluation_requests_total
- feature_flag.evaluation_success_total
- feature_flag.evaluation_error_total
Let me know your thoughts on this
[1] - https://opentelemetry.io/docs/specs/otel/logs/semantic_conventions/feature-flags/
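For illustration, here is a minimal sketch of how these four names could be registered with the OpenTelemetry Go metrics API. The instrument kinds (an UpDownCounter for the active count, monotonic Counters for the totals) and the descriptions are assumptions for this sketch, not taken from the POC.

```go
package main

import (
	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/metric"
)

// newInstruments registers the four proposed instruments. The instrument
// kinds are an assumption: an UpDownCounter for the active count (incremented
// when an evaluation starts, decremented when it finishes) and monotonic
// Counters for the three totals.
func newInstruments(meter metric.Meter) (active metric.Int64UpDownCounter, requests, success, failures metric.Int64Counter, err error) {
	if active, err = meter.Int64UpDownCounter("feature_flag.evaluation_active_count",
		metric.WithDescription("in-flight flag evaluations")); err != nil {
		return
	}
	if requests, err = meter.Int64Counter("feature_flag.evaluation_requests_total",
		metric.WithDescription("total flag evaluation requests")); err != nil {
		return
	}
	if success, err = meter.Int64Counter("feature_flag.evaluation_success_total",
		metric.WithDescription("successful flag evaluations")); err != nil {
		return
	}
	failures, err = meter.Int64Counter("feature_flag.evaluation_error_total",
		metric.WithDescription("failed flag evaluations"))
	return
}

func main() {
	if _, _, _, _, err := newInstruments(otel.Meter("openfeature/metrics-hook")); err != nil {
		panic(err)
	}
}
```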
This looks good to me.
I think we have sufficient consensus for this one. This is not a spec change; it just establishes some consistency around some of the OTel-related extensions/hooks. Merging.
This PR introduces an enhancement proposal for OpenTelemetry metric hooks.
Consider checking PR [1] for a POC implementation in the Go SDK.
Besides, see below for the metric names and an illustrative Prometheus sample:
- feature_flag.evaluation_requests_total
- feature_flag.evaluation_success_total
- feature_flag.evaluation_error_total
[1] - https://github.com/open-feature/go-sdk-contrib/pull/217
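The original Prometheus samples are not reproduced above. Purely for illustration (made-up values, a hypothetical `feature_flag_key` label), a scrape through an OTel Prometheus exporter, which rewrites dots as underscores, might look roughly like:

```text
feature_flag_evaluation_requests_total{feature_flag_key="new-checkout"} 120
feature_flag_evaluation_success_total{feature_flag_key="new-checkout"} 118
feature_flag_evaluation_error_total{feature_flag_key="new-checkout"} 2
```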