jaegertracing / jaeger

CNCF Jaeger, a Distributed Tracing Platform
https://www.jaegertracing.io/
Apache License 2.0

Discuss post-trace (tail-based) sampling #425

Open yurishkuro opened 6 years ago

yurishkuro commented 6 years ago

Filing this as a placeholder for discussions about implementing tail-based sampling. Copying from our original roadmap document:

Instead of blindly sampling a random % of all traces, we want to sample traces that are interesting, which can mean high latency, unexpected error codes, equal representation of different latency buckets, etc.

The proposed approach is to configure client libraries to emit all spans over UDP to the local jaeger-agent running on the host, irrespective of any sampling. The local jaeger-agents will store all spans in-memory and submit minimal statistics about the spans to the central jaeger-collectors, using consistent hashing, so that statistics about any given trace are assembled from multiple hosts on a single jaeger-collector. Once the trace is completed, the collector will decide whether that trace is “interesting”; if so, it will pull the rest of the trace details (complete spans, with tags, events, and micro-logs) from the participating jaeger-agents and forward them to the storage. If the trace is deemed not interesting, the collector will inform the jaeger-agents that the data can be discarded.
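A minimal sketch of the consistent-hash routing described above, with illustrative names (this is not Jaeger's actual code): every agent hashes the trace ID onto the same ring, so per-trace statistics from all hosts converge on one collector.

```go
// Illustrative sketch: a consistent-hash ring mapping trace IDs to
// collectors, so statistics from every agent land on the same one.
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

type Ring struct {
	hashes []uint32          // sorted virtual-node hashes
	nodes  map[uint32]string // virtual-node hash -> collector address
}

func NewRing(collectors []string, vnodes int) *Ring {
	r := &Ring{nodes: map[uint32]string{}}
	for _, c := range collectors {
		for i := 0; i < vnodes; i++ {
			h := hash32(fmt.Sprintf("%s#%d", c, i))
			r.hashes = append(r.hashes, h)
			r.nodes[h] = c
		}
	}
	sort.Slice(r.hashes, func(i, j int) bool { return r.hashes[i] < r.hashes[j] })
	return r
}

// CollectorFor returns the collector that owns the given trace ID.
func (r *Ring) CollectorFor(traceID string) string {
	h := hash32(traceID)
	i := sort.Search(len(r.hashes), func(i int) bool { return r.hashes[i] >= h })
	if i == len(r.hashes) {
		i = 0 // wrap around the ring
	}
	return r.nodes[r.hashes[i]]
}

func hash32(s string) uint32 {
	f := fnv.New32a()
	f.Write([]byte(s))
	return f.Sum32()
}

func main() {
	ring := NewRing([]string{"collector-1:14250", "collector-2:14250"}, 100)
	// Every agent computes the same owner for a given trace ID.
	fmt.Println(ring.CollectorFor("4bf92f3577b34da6"))
}
```

Because all agents agree on the ring, no coordination is needed beyond a shared view of collector membership; handling membership changes is left out of this sketch.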

This approach will require higher resource usage. However, the mitigating factors are:

Update 2020-02-05

We're currently working on a prototype of "aggregating collectors" that will support tail-based sampling. Note that the OpenTelemetry Collector already implements a simple version of tail-based sampling, but it only works in single-node mode, whereas the work we're doing shards traces across multiple aggregators, which scales to higher levels of traffic. @vprithvi gives regular updates on this work in the Jaeger project status calls.

jpkrohling commented 6 years ago

It seems that networking is the main driver for making this decision at the agent level (as the collector can scale horizontally), but it would be nice to document the "why" behind this feature.

Span reporting is done over UDP via loopback interface to the agent on the same host

I think we shouldn't make this a hard assumption. Rather, assume that the agent is "close enough", as with a sidecar in a container deployment (a second container in a pod, in Kubernetes terms).

Other than that, looks sane to me :)

vprithvi commented 6 years ago

The proposed approach is to configure client libraries to emit all spans over UDP to the local jaeger-agent running on the host, irrespective of any sampling.

I propose some refinements. Instead of going all-or-nothing, I think we should use adaptive sampling to select a sample of spans to emit. Ideally we can go all the way to 100%, but having some head-based sampling could provide an upper bound on the resources used by the instrumented process.
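A minimal sketch of the kind of upper bound described here, assuming a token bucket layered over probabilistic head sampling; all names are made up for illustration (the Jaeger Go client ships a related rate-limiting sampler, but this is not its API).

```go
// Illustrative sketch: probabilistic head sampling capped by a token
// bucket, so the process never emits more than maxPerSec spans even
// when the probability is set to 1.0.
package main

import (
	"fmt"
	"math/rand"
	"sync"
	"time"
)

type BoundedSampler struct {
	mu          sync.Mutex
	probability float64 // head-based sampling probability, up to 1.0
	tokens      float64 // current token-bucket balance
	maxPerSec   float64 // hard upper bound on sampled spans per second
	lastRefill  time.Time
}

func NewBoundedSampler(probability, maxPerSec float64) *BoundedSampler {
	return &BoundedSampler{
		probability: probability,
		tokens:      maxPerSec,
		maxPerSec:   maxPerSec,
		lastRefill:  time.Now(),
	}
}

func (s *BoundedSampler) Sample() bool {
	if rand.Float64() >= s.probability {
		return false // probabilistic head-based decision
	}
	s.mu.Lock()
	defer s.mu.Unlock()
	now := time.Now()
	s.tokens += now.Sub(s.lastRefill).Seconds() * s.maxPerSec
	if s.tokens > s.maxPerSec {
		s.tokens = s.maxPerSec // bucket capacity: one second of budget
	}
	s.lastRefill = now
	if s.tokens < 1 {
		return false // rate cap reached: bounds the process's overhead
	}
	s.tokens--
	return true
}

func main() {
	s := NewBoundedSampler(1.0, 100) // "sample everything", capped at 100/sec
	fmt.Println(s.Sample())
}
```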

The local jaeger-agents will store all spans in-memory and submit minimal statistics about the spans to the central jaeger-collectors

Similar to @jpkrohling's comment, this assumes that sending spans could saturate the links between jaeger-agent and jaeger-collector. This may not be true in many modern data centers; GCP, for instance, allows 2 Gbps of egress throughput per vCPU core.

In that case, we might be able to send spans to jaeger-collector and provision memory there instead. This also works better for function-based architectures (e.g. FaaS), where one might not be able to easily run a sidecar.

Note that storing spans in memory makes them susceptible to loss during instance failures; deploys should also wait for all spans to be drained from memory. However, this might be an acceptable trade-off for observability data like spans.

I think we should identify which of the following is the bottleneck for most services, to better inform our design:

yurishkuro commented 6 years ago

@vprithvi yes we need to profile all three.

I do not see tail-based sampling as a replacement for adaptive sampling; they can co-exist. The adaptive loop drives head-based sampling, which ensures continuous probing of the architecture and is useful for capacity planning, blackbox test coverage, etc. Tail-based sampling is useful for capturing anomalies. We can run both at the same time.

otisg commented 6 years ago

Looks like the X-Trace team at Brown is looking into this as well: https://groups.google.com/forum/?#!msg/distributed-tracing/fybf1cW04ZU/KhcF5NxTBwAJ

Sending all traces to the collector sounds scary to me. We run a multi-tenant (tracing) SaaS: the collector runs on our side, while our users run clients and agents on their infra. I would think/hope the post-trace sampling could be done close to the origin, in the agents themselves.

yurishkuro commented 6 years ago

@otisg yes, the plan is to store the spans in memory on the agents, or in a central on-premises tier close to the application itself.

maniaabdi commented 5 years ago

Is there any implemented version of tail-based sampling in Jaeger?

isaachier commented 5 years ago

Not yet. It's on our to-do list for sure.

rwkarg commented 5 years ago

The new Kafka backend would be interesting for this. All spans get reported; one job could aggregate metrics across all spans, another could aggregate dependency metrics, and another could sample windows and select traces for ES/Cassandra storage (outliers plus some sample of "normal" calls).
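As a sketch of how that fan-out could look, assuming spans land on a single Kafka topic and using the third-party github.com/segmentio/kafka-go client (broker, topic, and group names are hypothetical): each job is simply a separate consumer group over the same stream.

```go
// Sketch: several independent jobs reading the same span topic, each
// in its own consumer group, so each sees the full span stream.
package main

import (
	"context"
	"log"

	"github.com/segmentio/kafka-go"
)

func consume(groupID string, handle func(kafka.Message)) {
	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"kafka:9092"},
		Topic:   "jaeger-spans",
		GroupID: groupID, // each group receives its own copy of the stream
	})
	defer r.Close()
	for {
		m, err := r.ReadMessage(context.Background())
		if err != nil {
			log.Println("read:", err)
			return
		}
		handle(m)
	}
}

func main() {
	go consume("span-metrics", func(m kafka.Message) { /* aggregate span metrics */ })
	go consume("dependencies", func(m kafka.Message) { /* build dependency graph */ })
	consume("tail-sampler", func(m kafka.Message) { /* keep outliers + a sample */ })
}
```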

yurishkuro commented 5 years ago

Kafka is ok post-collection / post-sampling. For our use case we cannot scale Kafka to collect all spans before sampling.

jewelzhu commented 5 years ago

We are very interested in this post-trace sampling feature, and I have a question: how do you determine that a trace is finished? When a collector receives some spans of a trace, including the root span, it cannot know whether some FollowsFrom spans are still unfinished.
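There is no exact answer to this; a common heuristic is to declare a trace complete after a timeout (the OpenTelemetry Collector's tail sampler, for example, buffers each trace for a fixed decision window) and accept that late FollowsFrom spans may be missed. A minimal sketch of an inactivity-based variant, with illustrative names:

```go
// Sketch: buffer spans per trace and declare the trace complete once
// no new span has arrived within the idle window. Late FollowsFrom
// spans arriving after the decision must be dropped (or force-kept).
package main

import (
	"sync"
	"time"
)

type traceBuffer struct {
	spans    []string // simplified span payloads
	lastSeen time.Time
}

type Completer struct {
	mu     sync.Mutex
	traces map[string]*traceBuffer
	idle   time.Duration // inactivity window, e.g. 10s
	decide func(traceID string, spans []string)
}

func NewCompleter(idle time.Duration, decide func(string, []string)) *Completer {
	c := &Completer{traces: map[string]*traceBuffer{}, idle: idle, decide: decide}
	go c.sweep()
	return c
}

func (c *Completer) Add(traceID, span string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	b := c.traces[traceID]
	if b == nil {
		b = &traceBuffer{}
		c.traces[traceID] = b
	}
	b.spans = append(b.spans, span)
	b.lastSeen = time.Now()
}

func (c *Completer) sweep() {
	for range time.Tick(time.Second) {
		c.mu.Lock()
		for id, b := range c.traces {
			if time.Since(b.lastSeen) > c.idle {
				delete(c.traces, id)
				go c.decide(id, b.spans) // the sampling decision happens here
			}
		}
		c.mu.Unlock()
	}
}

func main() {
	c := NewCompleter(10*time.Second, func(id string, spans []string) {})
	c.Add("trace-1", "span-a")
	time.Sleep(12 * time.Second) // let the sweeper fire the decision
}
```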

rstraszewski commented 5 years ago

We are also interested in this feature. Ability to always log error spans, and sample the rest, would be great. Is this feature still on your to-do list?

yurishkuro commented 5 years ago

It is, but very likely we'd simply use the implementation in the OpenCensus collector, which already supports it for a single node.

jpkrohling commented 5 years ago

@yurishkuro should this be closed then?

yurishkuro commented 5 years ago

I wouldn't, until we have some concrete results / guidance.

jpkrohling commented 5 years ago

I wouldn't, until we have some concrete results / guidance.

What would be necessary to close this? Get a PoC using OC in the client doing tail-based sampling?

yurishkuro commented 5 years ago

Just looking at my internal document, these are the questions we want answered by the PoC:

Stono commented 5 years ago

We were literally just sitting in the office talking about our dilemma of wanting to capture all errors but sample the rest. We can't keep everything, as it's too much data.

vikrambe commented 4 years ago

We are also interested in this feature. Ability to always log error spans, and sample the rest, would be great. Is this feature still on your to-do list?

I think we should allow sampling based on span tags, with an attached sampling percentage. For example, we would want to capture 100% of traces with error=true and 10% of traces with http.status_code=200.
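A minimal sketch of such a tag-driven policy, evaluated once over an assembled trace (illustrative types, not an existing Jaeger API):

```go
// Sketch: keep every trace containing error=true, and 10% of the rest.
package main

import (
	"fmt"
	"math/rand"
)

type Span struct {
	Tags map[string]interface{}
}

// Keep reports whether the assembled trace should be stored.
func Keep(trace []Span) bool {
	for _, s := range trace {
		if s.Tags["error"] == true {
			return true // any error span samples the whole trace at 100%
		}
	}
	return rand.Float64() < 0.10 // 10% baseline for ordinary traces
}

func main() {
	ok := []Span{{Tags: map[string]interface{}{"http.status_code": 200}}}
	fmt.Println(Keep(ok)) // true roughly 10% of the time
}
```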

Stono commented 4 years ago

Error wouldn't work at the trace level in a microservice architecture, as the spans upstream of your erroring app will have already been reported to Jaeger based on their own sample rate.

william-tran commented 4 years ago

With tail sampling implemented in otel-collector, and the Jaeger Kafka receiver and exporter just implemented in #2221 and #2135, would this work and be HA? jaeger-client -> otel-collector (http receiver, jaeger_kafka exporter) -> otel-collector (jaeger_kafka receiver, tail sampler). You're using Kafka as the "traceID-aware load balancer" because spans are written to the topic keyed by traceID.
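A sketch of the producing side of that idea, using the third-party github.com/segmentio/kafka-go client (broker address, topic, and trace ID are hypothetical): a hash balancer over the message key sends every span of a trace to the same partition, and therefore to the same downstream tail-sampling consumer.

```go
// Sketch: write spans keyed by trace ID so Kafka's partitioner gives
// each trace partition affinity.
package main

import (
	"context"
	"log"

	"github.com/segmentio/kafka-go"
)

func main() {
	w := &kafka.Writer{
		Addr:     kafka.TCP("kafka:9092"),
		Topic:    "jaeger-spans",
		Balancer: &kafka.Hash{}, // partition chosen by hash(key): trace affinity
	}
	defer w.Close()

	err := w.WriteMessages(context.Background(), kafka.Message{
		Key:   []byte("4bf92f3577b34da6"), // trace ID as the message key
		Value: []byte("<serialized span>"),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```

Note the caveat raised below still applies: a consumer-group rebalance can move partitions between sampler instances mid-trace.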

yurishkuro commented 4 years ago

Kafka stores data on disk. If you can afford storing all trace data on disk before sampling, you can probably also afford storing all of it after sampling, making tail sampling unnecessary.

wyattanderson commented 4 years ago

Kafka stores data on disk. If you can afford storing all trace data on disk before sampling, you can probably also afford storing all of it after sampling, making tail sampling unnecessary.

You'd be storing all trace data on disk, sure, but for a much shorter period of time than you'd actually store traces in Elasticsearch or Cassandra, with a linear access pattern (suitable for magnetic storage), and very little processing overhead (i.e. no indexing), essentially only serving as an HA buffer in front of the collector. I'll admit, I don't have hyperscale experience and it's possible I'm missing something obvious, but I imagine there might be a number of scenarios where this approach would be valid.

yurishkuro commented 4 years ago

That's fair; short retention and a raw-bytes access pattern certainly make Kafka more forgiving of high data volumes. As for being an "HA buffer", or more precisely a form of sharding, it's feasible. However, with the default setup that ingesters have today, nothing prevents Kafka from rebalancing the consumers at any time, so it's not quite the stable sharding solution you probably want. An alternative is to manually partition traces across differently named topics and set up ingesters to consume from different topics, avoiding rebalancing by the brokers.

JasonChen86899 commented 3 years ago

Update 2020-02-05

We're currently working on a prototype of "aggregating collectors" that will support tail-based sampling. Note that the OpenTelemetry Collector already implements a simple version of tail-based sampling, but it only works in single-node mode, whereas the work we're doing shards traces across multiple aggregators, which scales to higher levels of traffic. @vprithvi gives regular updates on this work in the Jaeger project status calls.

Are there any updates on the "aggregating collectors", or should we just use the OpenTelemetry Collector? I have just implemented this feature on top of the master branch, including the client library. I don't know whether I duplicated existing work. Thanks!

jpkrohling commented 3 years ago

I'll let @vprithvi share an update from their side, but from OpenTelemetry's side, we are very close to having all the pieces for scalable tail-based sampling. The final piece is probably this:

https://github.com/open-telemetry/opentelemetry-collector/issues/1724

JasonChen86899 commented 3 years ago

open-telemetry/opentelemetry-collector#1724

Yeah, great! That's a good design for tail-based sampling. I designed another implementation without a load balancer: it records the upstream and downstream IPs bound to span tags.

vprithvi commented 3 years ago

@JasonChen86899 @jpkrohling - unfortunately, no new development has happened on Jaeger at Uber since late March. We got a proof of concept running before then, but it assumes it is running on Uber infra, so it's unlikely to be open-sourced soon.

JasonChen86899 commented 3 years ago

@vprithvi - I have made some changes for tail-based sampling and would appreciate your suggestions: jaeger: https://github.com/JasonChen86899/jaeger/tree/tail_based_sampling jaeger-client-go: https://github.com/JasonChen86899/jaeger-client-go/tree/tail_based_sampling

yurishkuro commented 3 years ago

@JasonChen86899 could you outline the changes?

JasonChen86899 commented 3 years ago

@JasonChen86899 could you outline the changes?

@yurishkuro @vprithvi

  1. jaeger-client-go:
     a. Add functions that tag spans on RPC errors, other internal errors, or HTTP status 5xx.
     b. Add functions that carry the upstream and downstream IPs in baggage.
     c. Update the span-reporting process: when a span finishes, check its tags and baggage, then send it to the agent.

  2. jaeger:
     a. Update the agent process: when a span's tags contain a special value, push it to the collector immediately; otherwise hold it in a buffer for a short time (a time-wheel buffer). A rough sketch of this behavior follows below.
     b. Update the collector process: when the collector receives special spans, it extracts the downstream IPs from them to locate the downstream spans, then queries in a loop until the trace is complete.
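A rough sketch of step 2a, with hypothetical names (this is not the actual code in the forks linked above): "special" spans are forwarded to the collector immediately, while everything else waits in a short-lived buffer that the collector may pull from.

```go
// Sketch of the agent-side decision: forward special spans at once,
// buffer the rest briefly so the collector can fetch them on demand.
package main

import (
	"sync"
	"time"
)

type Span struct {
	TraceID string
	Tags    map[string]interface{}
}

type Agent struct {
	mu      sync.Mutex
	buffer  map[string][]Span // traceID -> buffered spans
	forward func(Span)        // push to the collector
}

func (a *Agent) Receive(s Span) {
	if s.Tags["error"] == true { // the "special value" check from step 1a
		a.forward(s)
		return
	}
	a.mu.Lock()
	a.buffer[s.TraceID] = append(a.buffer[s.TraceID], s)
	a.mu.Unlock()
	// Evict after a grace period unless the collector pulled the trace
	// first (the fork reportedly uses a time-wheel buffer for this).
	time.AfterFunc(30*time.Second, func() {
		a.mu.Lock()
		delete(a.buffer, s.TraceID)
		a.mu.Unlock()
	})
}

// Pull lets the collector fetch buffered spans for a trace it keeps.
func (a *Agent) Pull(traceID string) []Span {
	a.mu.Lock()
	defer a.mu.Unlock()
	return a.buffer[traceID]
}

func main() {
	a := &Agent{buffer: map[string][]Span{}, forward: func(Span) {}}
	a.Receive(Span{TraceID: "t1", Tags: map[string]interface{}{"error": true}})
}
```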

heruv1m commented 3 years ago

Is there any working version of tail-based sampling?

jpkrohling commented 3 years ago

No, unfortunately not as part of the Jaeger project.

jkowall commented 3 years ago

OTel tail-based sampling and filtering are working great these days, thanks to all of your work on that, @jpkrohling. I know of many users who have implemented it so far.

jdomag commented 1 year ago

Is there any update on this, or is the idea dead?

88vishnu commented 6 months ago

Is tail-based sampling available in jaeger-collector? If yes, please point me to the list of configuration parameters. Thanks.

yurishkuro commented 6 months ago

Tail-based sampling today is supported by the OpenTelemetry Collector. Once Jaeger v2 is released, the same plugin will be usable directly in Jaeger.
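For reference, a minimal tail_sampling processor configuration for the OpenTelemetry Collector might look like the following; the values and policy names are illustrative, and the authoritative parameter list lives in the opentelemetry-collector-contrib documentation:

```yaml
processors:
  tail_sampling:
    decision_wait: 10s   # buffer each trace this long before deciding
    num_traces: 50000    # max in-flight traces held in memory
    policies:
      - name: keep-errors
        type: status_code
        status_code: {status_codes: [ERROR]}
      - name: keep-slow
        type: latency
        latency: {threshold_ms: 500}
      - name: baseline
        type: probabilistic
        probabilistic: {sampling_percentage: 10}
```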

88vishnu commented 6 months ago

Thanks