DataDog / datadog-lambda-js

The Datadog AWS Lambda Library for Node
Apache License 2.0
113 stars · 35 forks

Enable metric buffering #586

Closed nhulston closed 3 weeks ago

nhulston commented 3 weeks ago

What does this PR do?

This PR adds buffering to our StatsD client to support higher volumes of custom metrics. This is accomplished by setting maxBufferSize to a nonzero value (by default it is 0, i.e. no buffering at all).

When there is no buffer, we make a request (over localhost) to the Extension every time sendDistributionMetric() is called. When there is a buffer, we keep adding custom metrics to the buffer until it is flushed, which happens when (1) the buffer size limit is reached, (2) the bufferFlushInterval elapses (currently 1000 ms by default), or (3) the Lambda function finishes executing.

The value of maxBufferSize is the size of the payload string, not the number of metrics in the buffer. See https://github.com/brightcove/hot-shots for more documentation.
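For context, here is a minimal sketch of how a buffered hot-shots client can be configured; the host/port and option values below are illustrative assumptions, not necessarily the exact configuration in this PR:

```ts
// Sketch only: option values are assumptions, not the exact configuration in this PR.
import StatsD from "hot-shots";

const client = new StatsD({
  host: "127.0.0.1",         // the Extension is reached over localhost
  port: 8125,                // assumed DogStatsD port
  protocol: "udp",
  maxBufferSize: 8192,       // size of the buffered payload string; 0 disables buffering
  bufferFlushInterval: 1000, // flush at least once every 1000 ms
});

// Metrics accumulate in the buffer and are sent when the buffer fills,
// when the flush interval elapses, or when the client is closed at the
// end of the invocation.
client.distribution("my.custom.metric", 1, ["tag:value"]);
client.close(() => {});      // final flush before the function finishes
```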

See the section at the bottom for how I calculated the buffer size. An 8 KB in-memory buffer is negligible in Lambda, considering the smallest configurable memory size is 128 MB.

Motivation

Some customers have opened support tickets about metrics being dropped when sending high volumes (~300) of metrics. This is not an issue in other runtimes such as Python, since the Node library is the only one that uses hot-shots.

Testing Guidelines

With maxBufferSize=8192, all ~8,000 metrics are delivered without issue; with a smaller buffer size or a larger number of metrics, some do get dropped. This is expected and aligns with the math (see below).
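For reference, a rough sketch of the kind of handler used for this test (illustrative, not the exact harness); sendDistributionMetric and the datadog wrapper are the public datadog-lambda-js APIs:

```ts
// Illustrative load test: emit a few thousand custom metrics in one invocation
// and check in Datadog that the received count matches.
import { datadog, sendDistributionMetric } from "datadog-lambda-js";

export const handler = datadog(async () => {
  const count = 8000; // vary this (and the buffer size) to find where drops start
  for (let i = 0; i < count; i++) {
    sendDistributionMetric("buffer.test.metric", 1, `iteration:${i % 10}`);
  }
  return { statusCode: 200 };
});
```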

Additional Notes - Areas for reviewers to focus on

  1. Since this is an issue that affects very few users, maybe we could have an environment variable to enable buffering instead of being on by default?
  2. Are there other implications of these changes that I'm missing? I want to make sure this won't introduce concerns like extra overhead or higher cold start times. However, I don't think such a small buffer would have much of an impact, and we're actually making fewer requests over localhost, which should slightly reduce execution time.


Math

Some math to estimate the required buffer size (maxBufferSize = 8192):
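(The original figures are not reproduced here; the following is an illustrative back-of-the-envelope version, with the average serialized metric size as an assumption.)

```ts
// Illustrative only: ~80 bytes is an assumed average serialized line size, e.g.
//   "my.custom.metric:1|d|#tag1:value1,tag2:value2\n"
const approxBytesPerMetric = 80;   // assumption; depends on metric names and tags
const maxBufferSize = 8192;        // value chosen in this PR

// Each flush can carry roughly this many metrics:
const metricsPerFlush = Math.floor(maxBufferSize / approxBytesPerMetric); // ≈ 102

// The buffer also flushes whenever it fills (not only on the 1000 ms interval),
// so the main effect is batching ~100 metrics per localhost request instead of
// making one request per metric.
```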

joeyzhao2018 commented 3 weeks ago

> maybe we could have an environment variable to enable buffering instead of being on by default?

IMHO, given that not using the buffer is what actually caused the metric-dropping issue, it's safer to enable the buffer by default. Then we could expose a numeric env var for customers who want to set the buffer size to a specific number, be it higher or lower.
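A minimal sketch of what that could look like; the environment variable name used here is hypothetical, not an existing setting:

```ts
// Hypothetical environment variable, for illustration only.
const DEFAULT_MAX_BUFFER_SIZE = 8192;

function resolveMaxBufferSize(): number {
  const raw = process.env.DD_METRIC_BUFFER_SIZE; // hypothetical name
  const parsed = Number(raw);
  // Accept any non-negative number; 0 restores the old unbuffered behavior.
  return raw !== undefined && Number.isFinite(parsed) && parsed >= 0
    ? parsed
    : DEFAULT_MAX_BUFFER_SIZE;
}
```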

joeyzhao2018 commented 3 weeks ago

LGTM. I don't have any concerns regarding the other questions. However, I think it's worth testing the following two cases:

  1. Flushing when execution finishes: a Lambda that runs shorter than the bufferFlushInterval, with metrics size < buffer.
  2. Flushing at the bufferFlushInterval: a Lambda that runs longer than the bufferFlushInterval, with metrics size < buffer.

Please just confirm those two cases have been tested before merging. Thanks.

nhulston commented 3 weeks ago

> LGTM. I don't have any concerns regarding the other questions. However, I think it's worth testing the following two cases:
>
>   1. Flushing when execution finishes: a Lambda that runs shorter than the bufferFlushInterval, with metrics size < buffer.
>   2. Flushing at the bufferFlushInterval: a Lambda that runs longer than the bufferFlushInterval, with metrics size < buffer.
>
> Please just confirm those two cases have been tested before merging. Thanks.

Both cases work perfectly!
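For reference, a minimal sketch of the two scenarios (illustrative handlers, not the actual test code):

```ts
import { datadog, sendDistributionMetric } from "datadog-lambda-js";

// Case 1: finishes well under the 1000 ms bufferFlushInterval, so the buffer
// is only flushed when the invocation ends.
export const shortHandler = datadog(async () => {
  for (let i = 0; i < 50; i++) {
    sendDistributionMetric("buffer.test.short", 1);
  }
});

// Case 2: runs longer than the bufferFlushInterval, so at least one
// interval-driven flush happens before the final flush at the end.
export const longHandler = datadog(async () => {
  for (let i = 0; i < 50; i++) {
    sendDistributionMetric("buffer.test.long", 1);
  }
  await new Promise((resolve) => setTimeout(resolve, 2000)); // exceed the interval
});
```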