anarcher opened this issue 3 weeks ago
Hello, @anarcher! I'm here to help with any bugs, questions, or contributions you have. Let's tackle this issue together!
The difference between raw metrics and downsampled metrics in Thanos could be due to the presence of empty chunks in the series, as indicated by the "empty chunks" skip-series warn log for the kube_pod_info series. During the downsampling process, Thanos skips empty XOR chunks, which can lead to discrepancies between raw and downsampled data [1].
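For illustration, here is a minimal Go sketch of the skip behavior described above (not Thanos's actual downsampling code), using the Prometheus chunkenc package: a chunk with zero samples has nothing to aggregate, so a downsampler drops it with a warning instead of producing aggregate chunks for it.

```go
package main

import (
	"fmt"

	"github.com/prometheus/prometheus/tsdb/chunkenc"
)

func main() {
	// One empty XOR chunk and one with a single sample.
	empty := chunkenc.NewXORChunk()

	populated := chunkenc.NewXORChunk()
	app, err := populated.Appender()
	if err != nil {
		panic(err)
	}
	app.Append(1000, 1.0) // t=1000ms, v=1

	for i, c := range []chunkenc.Chunk{empty, populated} {
		if c.NumSamples() == 0 {
			// Downsampling-style skip: nothing to aggregate, so the
			// chunk is dropped and a warning is logged instead.
			fmt.Printf("chunk %d: empty, skipped\n", i)
			continue
		}
		fmt.Printf("chunk %d: %d samples, kept for downsampling\n", i, c.NumSamples())
	}
}
```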
To address this, you should check for the presence of empty chunks in the kube_pod_info series and ensure that the data is correctly populated before downsampling. This might involve verifying the data ingestion process or checking for issues in the data source that could lead to empty chunks being created. Additionally, ensure there are no overlapping blocks with the same external labels and time periods, as this can also affect the downsampling process [2]; one way to check for overlaps is sketched below.
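As a hedged sketch of that overlap check (assuming the blocks have been synced to a local directory, one subdirectory per block; the meta.json fields used here — minTime, maxTime, thanos.labels, thanos.downsample.resolution — follow the Thanos block metadata format):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"sort"
)

// blockMeta is a minimal subset of a Thanos block's meta.json.
type blockMeta struct {
	ULID    string `json:"ulid"`
	MinTime int64  `json:"minTime"`
	MaxTime int64  `json:"maxTime"`
	Thanos  struct {
		Labels     map[string]string `json:"labels"`
		Downsample struct {
			Resolution int64 `json:"resolution"`
		} `json:"downsample"`
	} `json:"thanos"`
}

// key groups blocks by external labels and resolution; overlaps only
// matter within the same group.
func key(m blockMeta) string {
	b, _ := json.Marshal(m.Thanos.Labels) // map keys are marshaled sorted
	return fmt.Sprintf("%s/res=%d", b, m.Thanos.Downsample.Resolution)
}

func main() {
	dir := os.Args[1] // directory containing one subdirectory per block

	groups := map[string][]blockMeta{}
	entries, err := os.ReadDir(dir)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		raw, err := os.ReadFile(filepath.Join(dir, e.Name(), "meta.json"))
		if err != nil {
			continue // not a block directory
		}
		var m blockMeta
		if err := json.Unmarshal(raw, &m); err != nil {
			continue
		}
		groups[key(m)] = append(groups[key(m)], m)
	}

	// Within each group, sort by start time and report overlapping ranges.
	for k, ms := range groups {
		sort.Slice(ms, func(i, j int) bool { return ms[i].MinTime < ms[j].MinTime })
		for i := 1; i < len(ms); i++ {
			if ms[i].MinTime < ms[i-1].MaxTime {
				fmt.Printf("overlap in group %s: %s [%d, %d) vs %s [%d, %d)\n",
					k, ms[i-1].ULID, ms[i-1].MinTime, ms[i-1].MaxTime,
					ms[i].ULID, ms[i].MinTime, ms[i].MaxTime)
			}
		}
	}
}
```

Blocks may legitimately coexist at different resolutions or with different external labels, which is why the sketch only reports overlaps within the same labels/resolution group.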
Thanos, Prometheus and Golang version used: thanos:0.36.1
Object Storage Provider: S3
What happened: There is a difference between the raw metrics and the downsampled metrics as follows. (I couldn't see any particular issues in compaction.) Could there be a reason for this difference? Is there any specific area I should check?
kube_pod_info had the following "skip series" warn log:
Running thanos tools bucket verify does not report any issues for the downsampled block.
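To quantify the difference, one option is to run the same query twice through the Querier's Prometheus-compatible HTTP API, once pinned to raw data and once allowed to read 5m downsampled blocks via Thanos's max_source_resolution parameter. A sketch, assuming a Querier at http://localhost:10902 and an evaluation timestamp old enough to fall inside the downsampled window (both are placeholders):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// query runs an instant query against a Thanos Querier and returns the raw
// JSON response. maxRes controls which resolution Thanos may read:
// "0s" forces raw data, "5m" or "1h" allow downsampled blocks.
func query(base, q, ts, maxRes string) (string, error) {
	v := url.Values{}
	v.Set("query", q)
	v.Set("time", ts) // must be old enough to hit downsampled blocks
	v.Set("max_source_resolution", maxRes)

	resp, err := http.Get(base + "/api/v1/query?" + v.Encode())
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	base := "http://localhost:10902" // placeholder Querier address
	ts := "1715000000"               // placeholder evaluation time (unix seconds)

	// Hourly max of a gauge: the trend should match between raw and
	// downsampled data, since the downsampled max aggregate preserves it.
	q := `sum(max_over_time(kube_pod_info[1h]))`

	raw, err := query(base, q, ts, "0s")
	if err != nil {
		panic(err)
	}
	down, err := query(base, q, ts, "5m")
	if err != nil {
		panic(err)
	}
	fmt.Println("raw:        ", raw)
	fmt.Println("downsampled:", down)
}
```

If the two results diverge at the same evaluation time, the affected time range can then be matched against specific blocks with thanos tools bucket inspect.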
What you expected to happen: The trend in the raw data and the downsampled data should be similar.
How to reproduce it (as minimally and precisely as possible):
Full logs to relevant components:
Anything else we need to know: