Closed by @jml 7 years ago
From @jml
Why should it be 1hr?
From @tomwilkie
We should investigate what the right number is, but the parameters are:
Ticket should really say "max 1hr" to bound the loss, if that gives good utilization.
This is possibly related to the dynamo errors we are seeing in #85
Oh wow yeah, the default chunk max age of 10 minutes seems way too low. I'm wondering why we're still achieving such decent chunk utilization (`sum(cortex_ingester_chunk_utilization_sum) / sum(cortex_ingester_chunk_utilization_count)` is around 0.43) with such a low max age. Under certain circumstances, chunks can last for hours or days, so maybe it's the frequent scraping plus noisiness of the data that makes the chunks fill up that fast. Still, I would set the max age to an hour or so (as you said, it depends a bit on our risk profile, of course).
I suspect it can't flush chunks quickly enough, and therefore they are getting more than 10mins worth of data.
At least the failures should not have a big effect, because during normal operation only ~4% of chunk puts fail (`sum(rate(cortex_ingester_chunk_store_failures_total[1m])) / sum(rate(cortex_ingester_chunk_utilization_count[1m]))` → 0.043). Maybe general latency in non-failed puts delays things somewhat, but the effect cannot be huge, as `sum(cortex_ingester_memory_chunks) / sum(cortex_ingester_memory_series)` shows us that there are just 1.12 chunks per series in memory at a given time (there's always at least one open head chunk for active series).
Actually makes sense, since we're on doubledelta (not varbit). So it's about 3.3 bytes per sample; at a 15s scrape interval that's about 20mins per chunk. With 10mins, you'd expect 50% utilisation.
Hmm, how do you get to 20 mins per chunk at 15s scrape interval and 3.3 bytes per sample? 1024 / 3.3 = 310 samples per chunk, but 20 minutes of samples would only be 4 * 20 = 80 samples? So a chunk should be full after ~ 310 / 4 = 77 minutes. Or am I missing something stupid?
Nope, I was being stupid. I did 300/15 not 300*15.
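For reference, a quick sketch of the corrected arithmetic (this assumes the 1 KiB chunk size and the 3.3 bytes/sample figure quoted above; the utilization estimate is a naive one that ignores the >10min chunk ages discussed below):

```python
# Corrected chunk-fill arithmetic from the thread above.
chunk_size_bytes = 1024        # assumed default chunk size
bytes_per_sample = 3.3         # doubledelta encoding, per the thread
scrape_interval_s = 15

samples_per_chunk = chunk_size_bytes / bytes_per_sample        # ~310 samples
minutes_to_fill = samples_per_chunk * scrape_interval_s / 60   # ~78 minutes

# Naive expected utilization with a 10-minute max age:
expected_utilization = 10 / minutes_to_fill                    # ~0.13
print(round(minutes_to_fill), round(expected_utilization, 2))
```

So with a 10min max age a chunk should only be ~13% full at flush, which makes the observed 0.43 utilization surprising at first glance.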
Okay, a bit more progress: the 99th percentile chunk "age" is 27mins at flush. This could explain the higher utilisation. Just added a dashboard for it; will link to it when it's live.
http://frontend.dev.weave.works/admin/grafana/dashboard/file/cortex-chunks.json
So, the question is why are some chunks 27mins old?
Thoughts:
Except:
Average number of entries per chunk is 8.6 here
And it's no coincidence that 8.6 * 3min is ~27mins, which is the 99%ile chunk age...
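A quick sanity check of that arithmetic (assuming one entry per ~3-minute interval, as implied above):

```python
entries_per_chunk = 8.6   # average entries per chunk, from the dashboard above
minutes_per_entry = 3     # implied interval per entry

chunk_age_minutes = entries_per_chunk * minutes_per_entry
print(round(chunk_age_minutes, 1))  # ~25.8, close to the 27min 99%ile chunk age
```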
With the latest change, we may be writing chunks more than once. Needs fixing.
Set to 1hr and behaving as expected in #118
From @tomwilkie
Currently 10mins, should be 1hr.
Copied from original issue: tomwilkie/frankenstein#10