ankitnayan opened 11 months ago
Joins can be done with a ClickHouse query, though performance will suffer without enough filtering: a join between huge tables will be slow and may throw memory errors in ClickHouse.
I tried writing the queries below for traces; something similar should also work for logs.
```sql
WITH A AS
(
    SELECT traceID, durationNano
    FROM signoz_traces.distributed_signoz_index_v2
    WHERE (stringTagMap['param.location'] = '728,326')
      AND (timestamp > (now() - toIntervalMinute(30)))
),
B AS
(
    SELECT traceID, serviceName
    FROM signoz_traces.distributed_signoz_index_v2
    WHERE timestamp > (now() - toIntervalMinute(30))
)
SELECT avg(A.durationNano), B.serviceName
FROM A
INNER JOIN B ON A.traceID = B.traceID
GROUP BY B.serviceName;
```
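If the join still hits memory errors, ClickHouse's query-level settings can trade speed for bounded memory. A hedged sketch: `join_algorithm` and `max_bytes_in_join` are standard ClickHouse settings, but the values below are illustrative assumptions, not tuned for any particular cluster.

```sql
-- Appended to the join query above (A and B are the CTEs defined there).
-- 'partial_merge' sorts the right-hand side and merges instead of building
-- a large in-memory hash table; max_bytes_in_join caps join state so the
-- query fails fast instead of pressuring the whole server.
SELECT avg(A.durationNano), B.serviceName
FROM A
INNER JOIN B ON A.traceID = B.traceID
GROUP BY B.serviceName
SETTINGS join_algorithm = 'partial_merge',
         max_bytes_in_join = 1000000000;  -- ~1 GB, illustrative value
```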
2. Timeseries Panel
```sql
WITH A AS
(
    SELECT traceID, durationNano, timestamp
    FROM signoz_traces.distributed_signoz_index_v2
    WHERE (stringTagMap['param.location'] = '728,326')
      AND (timestamp > (now() - toIntervalMinute(30)))
),
B AS
(
    SELECT traceID, serviceName
    FROM signoz_traces.distributed_signoz_index_v2
    WHERE timestamp > (now() - toIntervalMinute(30))
)
SELECT
    count(A.traceID),
    B.serviceName,
    toStartOfInterval(A.timestamp, toIntervalMinute(1)) AS interval
FROM A
INNER JOIN B ON A.traceID = B.traceID
GROUP BY (interval, B.serviceName);
```
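For the logs side, the same pattern should apply. A sketch only: the table and column names (`signoz_logs.distributed_logs`, `trace_id`, `body`) are assumptions based on SigNoz's default logs schema, where the logs `timestamp` is stored as UInt64 nanoseconds, so the time filter differs from the traces query; verify both against your deployment before using this.

```sql
-- Join slow traces to their log lines. Schema names here are assumptions;
-- adjust signoz_logs.distributed_logs / trace_id / body to match your setup.
WITH A AS
(
    SELECT traceID, durationNano
    FROM signoz_traces.distributed_signoz_index_v2
    WHERE timestamp > (now() - toIntervalMinute(30))
),
L AS
(
    SELECT trace_id, body
    FROM signoz_logs.distributed_logs
    -- logs timestamp is assumed to be UInt64 nanoseconds, hence the cast
    WHERE timestamp > toUnixTimestamp64Nano(now64(9) - toIntervalMinute(30))
)
SELECT A.durationNano, L.body
FROM A
INNER JOIN L ON A.traceID = L.trace_id
LIMIT 100;
```

As with the traces queries, the time-range filters in both CTEs are what keep the join from blowing up memory.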
https://www.datadoghq.com/blog/logs-transactions-queries/
https://community.splunk.com/t5/Splunk-Search/Joining-two-log-files-that-have-a-common-field/m-p/83747