Open · mehta-ankit opened this issue 3 years ago
I am not against supporting this in theory, but the UI change needs to be done in such a way as to not make the more standard workflow more complicated.
@mehta-ankit Is there an example you can provide that shows this is feasible without adding complexity? I haven't seen a UI where you can go down to specifying microseconds. You likely should be doing more filtering if you need that level of granularity IMO.
(Not affiliated with the author of this issue, so this is just another opinion.) Concerning the UI: In Kibana/OpenSearch, the start/end date of a search for log entries can be specified with millisecond granularity (the date/time chooser fills a text field with e.g. "Sep 23, 2021 @ 10:00:00.000", and you can edit the text field content to set the seconds/milliseconds). In the Jaeger UI, the time field currently contains "00:00" initially. FMPOV, I see no downsides in changing this to "00:00:00.000" and allowing seconds and milliseconds to be edited as well. (I personally see no need to go for microsecond granularity here.)
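To make the "00:00:00.000" idea concrete, here is a minimal sketch of accepting seconds and milliseconds in that time field and converting the value to microseconds past midnight. The regex and the `timeOfDayToMicros` helper are hypothetical, not anything that exists in the Jaeger UI today:

```ts
// Hypothetical sketch: accept "HH:mm", "HH:mm:ss", or "HH:mm:ss.SSS" in the
// custom-range time field and convert it to microseconds past midnight, so it
// can be added to the selected date before filling the start/end params.
const TIME_RE = /^(\d{2}):(\d{2})(?::(\d{2})(?:\.(\d{1,3}))?)?$/;

function timeOfDayToMicros(text: string): number | null {
  const match = TIME_RE.exec(text);
  if (!match) return null;
  const [, hh, mm, ss = '0', ms = '0'] = match; // missing parts default to 0
  const millis =
    Number(hh) * 3600000 + Number(mm) * 60000 + Number(ss) * 1000 + Number(ms.padEnd(3, '0'));
  return millis * 1000; // the search API takes microseconds
}

console.log(timeOfDayToMicros('00:00'));        // 0
console.log(timeOfDayToMicros('10:00:00.001')); // 36000001000
```

Inputs that only specify minutes would keep working, so the standard workflow would stay as simple as it is now.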
Regarding a use case for this: For certain customers, we may produce a lot of traces (to help with debugging - we enforce the sampling of traces for them here). This means there may be more than 1500 traces per minute concerning this one customer, i.e. the result list in the Jaeger UI contains that many entries even after filter criteria have been applied. Now when we want to analyze a request by such a customer that happened at a certain point in time, say at 10:00:00.001 (and for which the trace may not even contain an "error=true" tag as an extra filter criterion), we have to set the "Limit Results" number to the whole number of traces created for that customer per minute so that we can scroll to the bottom of the list and hopefully find the trace there. And if there are more than 1500 traces, we can't get to the trace at all (without fiddling with URL params). So having more fine-grained start/end input fields would help a lot here.
Hi, may I know when this issue will be addressed? We are also stuck with the same issue: when Jaeger is dealing with high volumes of data, a minimum time interval of 1 minute is not practical for viewing all the results. Can you please prioritize this, as it's already supported in Kibana?
Requirement - what kind of business use case are you trying to solve?
On the UI, when selecting a custom date range, I'd like to be able to select finer time intervals to query between, down to the second, millisecond, or even microsecond. There were thousands of errors and I wanted to find the trace of the first one. They all happened within one minute, so that meant either running a very large query or narrowing down the time window to find the first one.
Problem - what in Jaeger blocks you from solving the requirement?
Currently one can only select a date and time down to the minute (when using the custom date range time dropdown), even though the underlying API calls use 16-digit epoch timestamps (i.e. microseconds) for the `end` and `start` params. Example: https://jaeger-ui-test.com/search?end=1618923840000000&limit=20&lookback=custom&maxDuration&minDuration&service=jaeger-query&start=1618891200000000. A workaround is to edit the URL in the browser to set specific timestamps for the `start` and `end` params.
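For anyone else hitting this, here is a rough sketch of that URL workaround. The `buildSearchUrl` helper and the host are made up for illustration; only the query parameter names and the microsecond unit come from the example URL above:

```ts
// Sketch of the URL-editing workaround: compute microsecond start/end values
// and assemble a search link. Runs in Node or the browser (URLSearchParams).
function buildSearchUrl(base: string, service: string, startMs: number, endMs: number): string {
  const params = new URLSearchParams({
    service,
    lookback: 'custom',
    limit: '20',
    start: String(startMs * 1000), // ms since epoch -> µs since epoch
    end: String(endMs * 1000),
  });
  return `${base}/search?${params.toString()}`;
}

// Narrow the window to a single second instead of a whole minute.
console.log(
  buildSearchUrl(
    'https://jaeger-ui-test.com',
    'jaeger-query',
    Date.parse('2021-04-20T04:00:00.000Z'),
    Date.parse('2021-04-20T04:00:01.000Z'),
  ),
);
```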
Proposal - what do you suggest to solve the problem or improve the existing situation?
Having the ability to select seconds and milliseconds would help; see the sketch below.
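A hedged sketch of the conversion this implies, assuming the picker ends up emitting a calendar date plus a millisecond-precision time of day. The `toStartParam` name is illustrative; only the microsecond unit of the `start`/`end` params comes from the API behavior described above:

```ts
// Hypothetical sketch: combine the selected calendar date with a
// millisecond-precision time of day into the microsecond value that the
// start/end query params expect (UTC assumed here for simplicity).
function toStartParam(dateIso: string, timeOfDay: string): number {
  // e.g. dateIso = '2021-09-23', timeOfDay = '10:00:00.001'
  const millis = Date.parse(`${dateIso}T${timeOfDay}Z`);
  if (Number.isNaN(millis)) {
    throw new Error(`invalid date/time: ${dateIso} ${timeOfDay}`);
  }
  return millis * 1000; // microseconds since epoch
}

console.log(toStartParam('2021-09-23', '10:00:00.001')); // 1632391200001000
```

The same conversion would apply to the `end` param.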
Any open questions to address