nhulston closed this pull request 3 weeks ago
Maybe we could have an environment variable to enable buffering, instead of it being on by default?
IMHO, given that not using the buffer is what actually caused the metrics-dropping issue, it's safer to default to enabling the buffer. Then we can add a numeric env var if a customer wants to set the buffer size to a specific number, be it higher or lower.
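For illustration only, such an override might look something like the sketch below; the variable name `DD_MAX_BUFFER_SIZE` is hypothetical and not something this PR defines:

```typescript
// Hypothetical sketch of a numeric env-var override for the buffer size.
// DD_MAX_BUFFER_SIZE is an illustrative name, not an env var defined by this PR.
const DEFAULT_MAX_BUFFER_SIZE = 8192;

function getMaxBufferSize(): number {
  const raw = process.env.DD_MAX_BUFFER_SIZE;
  const parsed = raw !== undefined ? Number(raw) : NaN;
  // Fall back to the default when the variable is unset or not a valid non-negative number.
  return Number.isFinite(parsed) && parsed >= 0 ? parsed : DEFAULT_MAX_BUFFER_SIZE;
}
```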
LGTM. I don't have any concerns regarding the other questions. However, I think it's worth testing the following two cases:
- Flushing in the execution-done case: a Lambda that runs shorter than the `bufferFlushInterval` and metrics size < buffer
- Flushing at the `bufferFlushInterval`: a Lambda that runs longer than the `bufferFlushInterval` and metrics size < buffer

Please just confirm those two cases have been tested before merging. Thanks.
Both cases work perfectly!
What does this PR do?
This PR adds buffering to our StatsD client to support higher volumes of custom metrics. This is accomplished by setting `maxBufferSize` to a number (by default it is 0, i.e. no buffering at all).

When there is no buffer, we make a request (over localhost) to the Extension every time `sendDistributionMetric()` is called. When there is a buffer, we keep adding custom metrics to it until we flush the buffer, which happens when (1) the buffer size is reached, (2) every `bufferFlushInterval` milliseconds elapses (currently defaults to 1000 ms), or (3) the Lambda function finishes executing.

The value of `maxBufferSize` is the size of the payload string, not the number of metrics in the buffer. See https://github.com/brightcove/hot-shots for more documentation.

See the section at the bottom for how I calculated the buffer size. An 8 KB buffer in memory is negligible in Lambda, considering the smallest memory size is 128 MB.
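For reference, a buffered hot-shots client along these lines might be configured like the sketch below; the host, port, and metric name are illustrative placeholders rather than this library's exact wiring:

```typescript
// Sketch of a buffered hot-shots client as described above.
// Host, port, and the metric name are placeholders, not this library's exact values.
import StatsD from "hot-shots";

const client = new StatsD({
  host: "127.0.0.1",         // the Extension listens over localhost
  port: 8125,                // assumed DogStatsD port
  maxBufferSize: 8192,       // max payload size (in characters) before a flush
  bufferFlushInterval: 1000, // flush at least once every 1000 ms
});

// Metrics accumulate in the buffer; a send happens only when the buffer fills,
// the flush interval elapses, or the client is closed at the end of execution.
client.distribution("my.custom.metric", 1, ["tag:value"]);

// Flush any remaining buffered metrics before the invocation finishes.
client.close(() => undefined);
```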
Motivation
We have some customers who have opened support tickets complaining about metrics being dropped when sending high volumes (~300) of metrics. This is not an issue in other runtimes like Python, since Node is the only library that uses hot-shots.

Testing Guidelines
With `maxBufferSize=8192`, we have no problem delivering all ~8000 metrics, but for a smaller buffer size or a larger number of metrics, we do see some getting dropped. This makes sense and aligns with the math (see below); a rough sketch of such a test handler is shown below.

Additional Notes - Areas for reviewers to focus on
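For illustration, a load-test handler along the lines of what the Testing Guidelines above describe might look like the following sketch; the metric name, tag, and count are placeholders, not the exact test code:

```typescript
// Illustrative load-test handler: emit a large burst of custom metrics in a
// single invocation to verify none are dropped with buffering enabled.
import { datadog, sendDistributionMetric } from "datadog-lambda-js";

async function burstHandler(event: unknown) {
  for (let i = 0; i < 8000; i++) {
    sendDistributionMetric("buffer.test.metric", 1, "test:buffering");
  }
  return { statusCode: 200 };
}

// Wrapping with datadog() ensures buffered metrics are flushed before the invocation ends.
export const handler = datadog(burstHandler);
```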
Types of Changes
Check all that apply
Math
Some math to estimate the size of the buffer, leading to `maxBufferSize = 8192`:

`maxBufferSize = N/200 * c`, where `N` is the max number of `sendDistributionMetric()` calls we want to support being made at once, and `c` is the number of characters per metric (name + value + date + tags + other characters), with `c > 100`. For a liberal `c = 200` (since customers could have a lot of tags) and `N = 1000`, we get `maxBufferSize ≈ 1024`, i.e. `maxBufferSize = 1024`.

At `c = 200`, that is only ~5 metrics per request, which defeats the purpose of having a buffer. Therefore, something like `maxBufferSize = 8192` should support ~40 metrics per request.
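As a quick sanity check of those numbers (using only the estimates above, not measured payload sizes):

```typescript
// Back-of-the-envelope check of the buffer sizing above.
const charsPerMetric = 200; // liberal estimate for c (name + value + date + tags)
const smallBuffer = 1024;   // maxBufferSize derived from N = 1000, c = 200
const chosenBuffer = 8192;  // the ~8 KB buffer size chosen above

console.log(Math.floor(smallBuffer / charsPerMetric));  // 5  -> ~5 metrics per request
console.log(Math.floor(chosenBuffer / charsPerMetric)); // 40 -> ~40 metrics per request
```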