jaegertracing / jaeger

CNCF Jaeger, a Distributed Tracing Platform
https://www.jaegertracing.io/
Apache License 2.0

[storage] limitations of Cassandra search on LIMIT and complex queries #166

Open vprithvi opened 7 years ago

vprithvi commented 7 years ago

When querying for traces using serviceName, operationName and a tag with the default LIMIT of 20, some results might be omitted.

This is because of this logic, which does the following:

  1. Retrieve all traceIDs matching the operation name
  2. Retrieve all traceIDs matching the tags
  3. Intersect 1 & 2

Because Cassandra doesn't guarantee result ordering, each LIMIT-bounded query may return a different slice of the matching traceIDs, so the intersection can drop traces that actually match all the criteria.

I propose that we do the following instead (or in addition to what we do now):

  1. Retrieve all traceIDs matching the tags
  2. Filter by operation name

Retrieving traceIDs matching tags first targets the use case where somebody is searching for a jaeger-debug-id or some other low-cardinality tag, guaranteeing them a result when one exists.

yurishkuro commented 7 years ago

This is a hack that will work for very sparse tags like jaeger-debug-id, but will not work in other cases, e.g. when searching by a tag like "error=true" or http.status_code, because "retrieve all traces" becomes impossible due to volume.

vprithvi commented 7 years ago

I see value in having this hack with the limit parameter (in addition to the current behavior) so that it still retrieves results for low cardinality tags.

Dieterbe commented 6 years ago

I have a problem and I'm not sure if it's the same as what's discussed here (the descriptions of the current and desired logic don't mention how the limit comes into play). What I'm seeing is that far fewer results are returned when doing a tag search; I have to "artificially" raise the limit to get the amount I want (e.g. with limit 20 I may get 1 result, with limit 200 I get 26 results). The problem only occurs with tag searching; it's fine if I don't have a tag clause in the query.

yurishkuro commented 6 years ago

This is a known (and hard to solve) issue in the Cassandra storage implementation.

rbtcollins commented 6 years ago

So, this may be hard to solve, but I want to suggest that it's critical to usability: it's a non-obvious limitation that will cause a lot of head-scratching and push-back from users of a deployment.

Can you perhaps detail the Cassandra limitations that drive this behaviour somewhere? Also, what is the recommended backend? We went with Cassandra because the Uber blog post suggests Uber is running Cassandra :)

yurishkuro commented 6 years ago

"it's critical to usability" - fwiw, Zipkin has lived with the same limitation for years. You need to bump the number if you need more exotic searches. We could rename it from LIMIT to something more amorphous like "search depth" in the UI.

I wouldn't say Cassandra is the recommended backend, it's mostly an operational preference for people. But Elastic doesn't have that LIMIT problem because of how ES itself implements it (fanout to all nodes where each node returns LIMIT results). The benefit of Cassandra is higher throughput.

The main issue for, say, a query with two tags is that we maintain exact-match Cassandra indices, e.g. {service-name}-{tag-key}-{tag-value} => {trace-id}. So if you search by two tags, we execute two queries, both with the LIMIT provided in the input, and then intersect the resulting sets of trace IDs. Cassandra 3.4+ supports SASI indices that we thought would address this issue (they work somewhat like ES, and you need to fan out the request to all nodes in the cluster), but their performance turned out to be even worse than ES's, and not just on writes but on reads (update: it is possible we didn't use them correctly).

We've discussed a possible hack of repeating the queries for each tag, gradually increasing the LIMIT for each query until the intersection also reaches LIMIT size. We never had a chance to implement it, and we're not even sure how well it would work.

So, in summary: we have no plans to fix this just yet. Silver lining: we're looking into other solutions based on aggregations that could make searching for individual traces less important.

Dieterbe commented 6 years ago

I do agree that it's a usability problem. It's easy to forget about this limitation, and then people run into issues like: 1) do an unfiltered search, get results, look at a trace, copy-paste a tag; 2) search for that tag; 3) no results??

This makes Jaeger look like an unreliable piece of software, and people don't want to use it.

gouthamve commented 6 years ago

Hi, this is causing a lot of inconsistent results, and not giving me what I want. See the behaviour here: https://youtu.be/m7qZJIyCmGY

Essentially, this is giving me only the last 3-4 traces, and the inconsistency between queries is worrying, especially because all the spans might not yet have been added to the traces being shown and I'd want to see older traces.

Could we at least show a warning that the results will be off, and maybe point to this ticket? Quite frankly, I wouldn't have been able to find this issue (thanks @Dieterbe for the pointer).

tiffon commented 6 years ago

@gouthamve, thanks to you and the others on this thread for calling out this issue.

This is definitely a severe issue, and it's great to know the extent it's affecting you.

I'll break the problems described into two broad categories:

  1. Challenges with search in Cassandra
  2. Challenges with results containing incomplete traces

For #1, we have two tracks for addressing it. For the longer term, we're currently prototyping a more robust (and more expressive) search, and we expect to be able to go live with it by the end of the year. It should address #1 as well as lay the groundwork for looking at aggregated data. In the shorter term, we're looking at ways to keep users better informed about the limitations of the Cassandra search. To this end, we created a UI ticket, "Inform users of jaegertracing/jaeger#166 when Cassandra is the backing store".

The UI ticket (ui-243) is definitely not a solution, but would you say it would have been helpful to be aware that this is a known issue?

Determining a resolution to #2 is still a work in progress. One of the main challenges is that it's impossible to know, with 100% certainty, when a trace is complete. One approach we're considering is to show the number of spans associated with each trace in the search results and to update that number in real time; the idea being that if the number goes up while a user is viewing the search results, the trace is probably not complete. Whether this is the right approach is still TBD. I wish I had better news on this front.

Lastly, your feedback is super useful; thanks again for letting us know this came up in a severe fashion.

rbtcollins commented 6 years ago

Re: #2 - is there a separate ticket for that? I have some thoughts, but I don't think this is the right ticket for them.

tiffon commented 6 years ago

@rbtcollins Great! Currently, we don't have a ticket for issues around incomplete traces. Can you start one to capture your thoughts?

yurishkuro commented 5 years ago

I can see three things we could do here (higher priority first):

We should discuss it at the next project call next Friday.

dobegor commented 5 years ago

Is there any progress regarding this? ES users still can't specify tags alongside minDuration.

yurishkuro commented 5 years ago

This might be fixed by #1477, once released.