kuujis opened this issue 8 months ago
Sorry, I don't understand the issue, as you keep talking about unit tests. Please describe the use case, show configuration used, and explain what behavior you are expecting and getting instead.
Sorry, I'll try to elaborate.
I originally wanted to solve an issue in OpenTelemetry SDK - https://github.com/open-telemetry/opentelemetry-java/issues/5504, which used the following strategies_sampling.json:
```json
{
  "service_strategies": [
    {
      "service": "foobar",
      "type": "ratelimiting",
      "param": 2
    }
  ],
  "default_strategy": {
    "type": "probabilistic",
    "param": 0.2,
    "operation_strategies": [
      {
        "operation": "/metrics",
        "type": "probabilistic",
        "param": 0.0
      }
    ]
  }
}
```
I made a small Java service and added OTEL tooling, with the all-in-one Jaeger backend and local console output for simplicity's sake - https://github.com/kuujis/opentelemetry-java-5504. There are commands to build and run the images, which should make the problem clear(er?) for anyone who feels like checking it out.
What I observed after some debugging is that the GetSamplingStrategy gRPC call from OTEL to the Jaeger all-in-one app does not return the default strategies that should be applied to a specific operation ("/metrics" in the example above) IF the service sampling strategy is "ratelimiting". The default operation-level strategies are added as expected for "probabilistic" service sampling strategies.
After investigating why that is, I found the part of the code I believe is the cause of the problem. Since it's much simpler to reproduce the issue via an updated unit test covering that specific situation, I went with that in the bug description instead of the whole repo and the whole story of how the issue was reproduced.
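To illustrate the expected behavior, here is a minimal sketch of the merge I would expect (the types and `mergeDefaults` are simplified stand-ins, not Jaeger's actual code): the default per-operation strategies should be merged in for both probabilistic and rate-limited services, with explicit per-service operation overrides winning over the defaults.

```go
package main

import "fmt"

// OperationStrategy and ServiceStrategy are simplified stand-ins for the
// sampling strategy types; the real definitions live in Jaeger's sampling
// model packages.
type OperationStrategy struct {
	Operation   string
	Probability float64
}

type ServiceStrategy struct {
	Type                string // "probabilistic" or "ratelimiting"
	Param               float64
	OperationStrategies []OperationStrategy
}

// mergeDefaults copies the default per-operation strategies into svc unless
// the service already overrides that operation. The reported bug is that the
// original code only performed this merge when svc.Type was "probabilistic",
// so rate-limited services never received the defaults.
func mergeDefaults(svc ServiceStrategy, defaults []OperationStrategy) ServiceStrategy {
	seen := map[string]bool{}
	for _, op := range svc.OperationStrategies {
		seen[op.Operation] = true
	}
	for _, def := range defaults {
		if !seen[def.Operation] {
			svc.OperationStrategies = append(svc.OperationStrategies, def)
		}
	}
	return svc
}

func main() {
	svc := ServiceStrategy{Type: "ratelimiting", Param: 2}
	defaults := []OperationStrategy{{Operation: "/metrics", Probability: 0.0}}
	merged := mergeDefaults(svc, defaults)
	// The rate-limited service now also carries the default "/metrics" strategy.
	fmt.Println(merged.OperationStrategies)
}
```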
Hope this clarifies things.
@kuujis I've been thinking how better to explain this issue. The way the Jaeger SDKs used to interpret a sampling configuration was this: when a service has per-operation strategies, each known operation is sampled according to its own strategy, and for unknown operations the `defaultSamplingRate` is applied (along with lower/upper bounds). So in your example, when one of the services defines a rate-limiting strategy, as soon as the per-operation sampling strategy is added, the rate-limiting section becomes completely irrelevant, but only for the SDKs that actually support per-operation sampling. This could be a big unexpected breaking change for some existing installations: if they are currently getting rate limiting on a service (but with an SDK capable of per-operation sampling), then after this change the whole service will switch to the default per-operation strategy, which could mean far fewer or far more traces, with no way to tell.
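The lower-bound part of that interpretation can be sketched as a "guaranteed throughput" sampler (a simplification, not the actual SDK API: the type name, fields, and budget accounting here are illustrative; the real SDKs use a token-bucket rate limiter per operation):

```go
package main

import "fmt"

// guaranteedSampler approximates per-operation sampling: a probabilistic
// decision, plus a lower-bound budget that guarantees some minimum number
// of traces even when the probability alone would drop everything.
type guaranteedSampler struct {
	probability float64 // per-operation rate, or defaultSamplingRate for unknown operations
	budget      int     // remaining lower-bound traces for the current interval (illustrative)
}

// sample takes a uniform random value r in [0,1) and decides whether to keep
// the trace: first probabilistically, then via the lower-bound budget.
func (s *guaranteedSampler) sample(r float64) bool {
	if r < s.probability {
		return true // sampled probabilistically
	}
	if s.budget > 0 {
		s.budget-- // lower bound fires even at probability 0
		return true
	}
	return false
}

func main() {
	s := &guaranteedSampler{probability: 0.0, budget: 1}
	fmt.Println(s.sample(0.9)) // first trace passes via the lower bound: true
	fmt.Println(s.sample(0.9)) // budget exhausted: false
}
```

This is why the rate-limiting section becomes irrelevant once per-operation strategies are in play: the per-operation sampler replaces it wholesale rather than composing with it.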
I got pretty much the same impression, hence the comment in the original description on the possible "breaking of contract". The possibility of unintended consequences is definitely worth considering. Maybe it would be a good idea to provide a temporary switch via an environment variable that disables the effects of the updated functionality, so that if there are issues there would also be a quick "emergency" solution. The idea would be to support an upgrade flow which might look like this:
And then at the "n+1" version the flag is removed and the fix, even if it is a breaking change, becomes the new behavior. Not sure how much such an approach is worth it or would be welcome, though.
@kuujis I think we should follow our CLI flags deprecation policy, which we can adapt as follows: add a flag `--sampling.strategies.bugfix-5270` which, if set to `true`, would enable the new behavior (this means the unit tests will need to test both old and new behaviors; I would probably just clone the original function with a `toBeRemoved` prefix). By default this flag should be `false`, but when `false` it should log a warning that this behavior will be changed in the future (see the policy link above for examples).

Looks like a good plan, I'll add these.
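A minimal sketch of how such a flag could be wired, using only the standard `flag` package (Jaeger actually registers flags through its own config plumbing, so the wiring and the warning text here are illustrative; only the flag name comes from the discussion above):

```go
package main

import (
	"flag"
	"fmt"
)

// legacyWarning returns the deprecation warning to log when the bugfix flag
// is left at its default (false), following the CLI flags deprecation policy.
func legacyWarning(bugfixEnabled bool) string {
	if bugfixEnabled {
		return ""
	}
	return "WARNING: default per-operation strategies are not merged for rate-limited services; " +
		"this legacy behavior will change in a future release (see --sampling.strategies.bugfix-5270)"
}

func main() {
	// Flag name taken from the comment above; defaults to the old behavior.
	bugfix := flag.Bool("sampling.strategies.bugfix-5270", false,
		"enable merging default per-operation strategies regardless of service strategy type")
	flag.Parse()
	if msg := legacyWarning(*bugfix); msg != "" {
		fmt.Println(msg)
	}
}
```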
Edit by @yurishkuro
Example: given this config file where services A and B do not define their own per-operation strategies: https://github.com/jaegertracing/jaeger/blob/34f6a251dc3b7915c2c2424efc343dc35fbd7c9c/plugin/sampling/strategystore/static/fixtures/service_no_per_operation.json#L1-L25
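For orientation, a GetSamplingStrategy response in which the defaults have been merged has roughly this api_v2 JSON shape. This fragment is illustrative only: the sampling rates and the operation name are made up, not taken from the fixture.

```json
{
  "strategyType": "PROBABILISTIC",
  "probabilisticSampling": { "samplingRate": 1.0 },
  "operationSampling": {
    "defaultSamplingProbability": 1.0,
    "perOperationStrategies": [
      {
        "operation": "/metrics",
        "probabilisticSampling": { "samplingRate": 0.0 }
      }
    ]
  }
}
```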
We expect ServiceA to have the following strategy: `defaultSamplingProbability` of `1.0`, kept the same as at the service level.

We expect ServiceB to have the following strategy:
Original ticket below:
What happened?
While investigating https://github.com/open-telemetry/opentelemetry-java/issues/5504 I was not able to identify any issues with the handling of the responses received from Jaeger for sampling strategies related to a given service name. However, I found that if the service sampling strategy is "ratelimiting", the default operation sampling strategies are not returned.
Steps to reproduce
Expected behavior
As implied in https://www.jaegertracing.io/docs/1.28/sampling/#collector-sampling-configuration, I would expect the following changes to the `TestServiceNoPerOperationStrategies` unit test case in `plugin/sampling/strategystore/static/strategy_store_test.go` to result in a passing test:
Relevant log output
Screenshot
N/A
Additional context
I believe this behaviour was introduced as part of https://github.com/jaegertracing/jaeger/pull/2230 as an outcome of the https://github.com/jaegertracing/jaeger/issues/2171 discussion. This issue is created as per the contribution guidelines. I'm working on setting up a test to verify the fix I made (https://github.com/kuujis/jaeger/pull/1) with some basic service setup, and will work through the other requirements (first time contributing).
I would also appreciate feedback on the solution itself, since depending on how one looks at it, it might be considered breaking the contract, especially since the code has not changed in a while. The alternative I considered, but decided against, was to call Jaeger twice from the OTEL library to get both the "service"-specific and the "default" configurations and then merge them, but that felt wrong, since I believe this is incorrect behavior on the Jaeger side.
Jaeger backend version
v1.35.0
SDK
io.opentelemetry:opentelemetry-bom:1.35.0
Pipeline
OTEL SDK -> Jaeger.GetSamplingStrategy -> exporter to console
Storage backend
No response
Operating system
No response
Deployment model
No response
Deployment configs
No response