ValentinLvr opened this issue 1 week ago
We changed the way TraceQL metrics work in Tempo 2.6 to base historical requests off of a set of RF1 blocks written to the backend by the metrics generators:
https://grafana.com/docs/tempo/latest/release-notes/v2-6/#operational-change-for-traceql-metrics
This greatly improves TraceQL metrics speed, but comes with a temporary increase in TCO due to the additional blocks in the backend. We are attempting to address this holistically by rearchitecting Tempo around an RF1 architecture for both metrics and search.
Expect updates with the next few releases.
Thanks for the explanation!
I just set the `flush_to_storage` parameter to `true`, and I'm now able to see historical data from the backend:
```yaml
...
metrics_generator:
  processor:
    local_blocks:
      flush_to_storage: true
...
```
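For completeness: `flush_to_storage` only matters if the `local-blocks` processor is actually enabled for the tenant. A minimal sketch of the surrounding configuration, assuming Tempo's structured per-tenant overrides are used (check your own overrides block, which may differ):

```yaml
# Sketch only: flush_to_storage sits under the local_blocks processor,
# and the processor itself is assumed to be enabled via overrides.
metrics_generator:
  processor:
    local_blocks:
      flush_to_storage: true

overrides:
  defaults:
    metrics_generator:
      processors: [local-blocks]
```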
Maybe it's worth mentioning this in the breaking changes section here?
Yup. Good call out.
@knylander-grafana do you mind sneaking this in the breaking changes section when you get a chance?
Will do! Thank you, @ValentinLvr for the thorough issue!
Describe the bug
Context:
When using the TraceQL metrics feature in Grafana, only the datapoints from the metrics-generator are rendered. For example, when querying something like `{resource.service.name="foo"} | rate()`, I'm only seeing the last 30 min. I tried different `query_backend_after` options and, as expected, it seems the frontend doesn't retrieve the backend values. I didn't see anything in the logs & traces. Also, I can see the traces in the S3 bucket, so ingestion seems to be working well. I tried using the metrics query range API directly and I'm still missing all the backend values.
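One way to sanity-check where the datapoints stop, independently of Grafana, is to hit Tempo's metrics query range endpoint directly over a window wider than the generators keep in memory. A small sketch, assuming the query-frontend listens on `localhost:3200` (the host, window, and step are illustrative; the `q`/`start`/`end`/`step` parameters follow Tempo's metrics query range API):

```python
# Build a TraceQL metrics query_range URL covering the last few hours,
# to check whether series older than ~30 min come back from the backend.
import time
from urllib.parse import urlencode

def metrics_query_url(base, query, hours_back=6, step="60s"):
    # Unix-second timestamps for the query window.
    end = int(time.time())
    start = end - hours_back * 3600
    params = urlencode({"q": query, "start": start, "end": end, "step": step})
    return f"{base}/api/metrics/query_range?{params}"

url = metrics_query_url("http://localhost:3200",
                        '{resource.service.name="foo"} | rate()')
# Fetch it with e.g. `curl "$url"` and check whether the returned series
# extend beyond the generators' in-memory window.
```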
To Reproduce
Steps to reproduce the behaviour: query `{resource.service.name="foo"} | rate()`
Expected behaviour
The histogram rendered by the TraceQL metric query should be complete, with values retrieved from both the backend and the metrics-generator.
Environment:
Additional Context