Closed xitep closed 8 years ago
You are correct, that appears to be a bug. `max_memory` is supposed to let us cap the total memory consumption of the Histogram prior to allocation, returning a `None` instead of initializing the histogram. The estimate should be `8 * buckets_total + mem::size_of::<HistogramConfig>() + mem::size_of::<HistogramProperties>() + mem::size_of::<HistogramCounters>()`.
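A minimal sketch of the cap described above, using hypothetical stand-in types (the real `HistogramConfig`, `HistogramProperties`, and `HistogramCounters` definitions live in the crate and will have different sizes):

```rust
use std::mem;

// Hypothetical stand-ins, only to make the arithmetic concrete.
struct HistogramConfig { max_memory: usize }
struct HistogramProperties;
struct HistogramCounters;

/// Estimate the total footprint before allocating: 8 bytes per bucket
/// plus the fixed-size bookkeeping structs.
fn estimated_size(buckets_total: usize) -> usize {
    8 * buckets_total
        + mem::size_of::<HistogramConfig>()
        + mem::size_of::<HistogramProperties>()
        + mem::size_of::<HistogramCounters>()
}

/// Refuse to allocate (return None) when the estimate exceeds the cap.
fn configured(config: HistogramConfig, buckets_total: usize) -> Option<Vec<u64>> {
    if config.max_memory > 0 && estimated_size(buckets_total) > config.max_memory {
        return None;
    }
    Some(vec![0u64; buckets_total])
}
```

The point is that the check happens before any heap allocation, so a misconfigured histogram fails fast rather than allocating an oversized data vector.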
Many thanks for your response. I think the check should then be merely `mem::size_of::<u64>() * buckets_total`, since only the data vector is dynamically sized and heap-allocated (and thus the thing worth preventing from eating up too much memory), while the rest is constant-size and very likely stored on the stack by client code. What do you think?
I suspect that would be acceptable for most use-cases. I'm not aware of anybody using this currently, but it's clear the current math overstates the size of the Histogram.
thank you.
Hello, this is more of a question than an issue: I'm not understanding why `Histogram::configured` validates `buckets_total * mem::size_of::<HistogramBucket>()`. Isn't the histogram's data stored as a `u64` per bucket (while a `HistogramBucket` is 24 bytes)? What is the exact idea behind `config.max_memory`? Many thanks in advance, P.
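To make the 24-byte figure concrete, here is a hypothetical bucket layout (the real `HistogramBucket` fields may differ, but any struct of three `u64`s matches the size mentioned above):

```rust
use std::mem;

// Hypothetical layout: three u64 fields give 24 bytes on a 64-bit target,
// three times the 8 bytes actually stored per bucket in the data vector.
struct HistogramBucket {
    id: u64,
    count: u64,
    value: u64,
}
```

Validating against `size_of::<HistogramBucket>()` instead of `size_of::<u64>()` therefore overstates the histogram's heap footprint by a factor of three.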