DataDog / datadog-lambda-go

The Datadog AWS Lambda package for Go
Apache License 2.0

Flush Metrics #139

Closed. pixie79 closed this issue 9 months ago.

pixie79 commented 1 year ago

I am using the DD Metric function to publish metrics scraped from a Prometheus endpoint to Datadog via a Lambda.

Is there a way to flush the metrics in batches? Each scrape yields around 35,000 metrics, and if I just call

        ddLambda.Metric(
            v.Name,  // Metric name
            v.Value, // Metric value
            tags..., // Associated tags
        )

Then I get the error "datadog: failed to flush metrics to datadog API: with no retry: Failed to send metrics to API. Status Code 413", along with "Payload too large".
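
Ideally I would batch the submissions myself. Something like this rough sketch is the shape I'm after (scrapedMetric, chunkSize, and submitInChunks are all made up here just to illustrate; the flush step between batches is exactly the API I can't find):

    package main

    import (
        "log"

        ddLambda "github.com/DataDog/datadog-lambda-go"
    )

    // scrapedMetric mirrors the name/value/tags fields used above.
    type scrapedMetric struct {
        Name  string
        Value float64
        Tags  []string
    }

    // chunkSize is a guess; the right value depends on whatever payload
    // limit is triggering the 413.
    const chunkSize = 1000

    func submitInChunks(metrics []scrapedMetric) {
        for start := 0; start < len(metrics); start += chunkSize {
            end := start + chunkSize
            if end > len(metrics) {
                end = len(metrics)
            }
            for _, v := range metrics[start:end] {
                ddLambda.Metric(v.Name, v.Value, v.Tags...)
            }
            // This is the missing piece: some way to flush what has been
            // queued so far before enqueueing the next batch.
            log.Printf("queued batch %d-%d", start, end)
        }
    }

    func main() {
        // Dummy data standing in for a real Prometheus scrape.
        metrics := []scrapedMetric{
            {Name: "example.metric", Value: 1, Tags: []string{"source:prometheus"}},
        }
        submitInChunks(metrics)
    }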

Thanks :)

tianchu commented 1 year ago

@pixie79 I'm not sure our solution was designed to support that use case. Just curious, does the approach in https://www.datadoghq.com/blog/monitor-prometheus-metrics/ not work for you?

pixie79 commented 1 year ago

Sorry, that won't work in our situation, as we have no internal apps, so we can't run the Datadog Agent :(

tianchu commented 1 year ago

@pixie79 Sorry for the late reply. I assume you are sending the metrics directly to the API? Perhaps you could try sending metrics through the Datadog Lambda Extension instead? I think the Extension aggregates the metrics before sending them to the API, and in general it is more optimized for handling large volumes of data.
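
For reference, the setup I have in mind is just the standard wrapping from the README, with the Datadog-Extension layer attached to the function and DD_API_KEY set; when the Extension is present, the library should hand metrics to it locally rather than posting them to the API from your own code. A rough sketch (the handler and metric name are placeholders):

    package main

    import (
        "context"

        ddlambda "github.com/DataDog/datadog-lambda-go"
        "github.com/aws/aws-lambda-go/lambda"
    )

    func myHandler(ctx context.Context) (string, error) {
        // Placeholder metric; with the Extension layer attached, this
        // should be buffered and flushed by the Extension rather than
        // sent straight to the API from the function code.
        ddlambda.Metric(
            "prometheus.scrape.example", // placeholder name
            1.0,
            "source:prometheus",
        )
        return "ok", nil
    }

    func main() {
        // Standard wrapping from the README.
        lambda.Start(ddlambda.WrapFunction(myHandler, nil))
    }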

pixie79 commented 1 year ago

Hi,

No, I am using the datadog-lambda-go package, which I think is the Extension?

Mark

tianchu commented 1 year ago

Do you have a Lambda layer named Datadog-Extension installed on your Lambda function?

pixie79 commented 1 year ago

Hi,

Sorry, yes, I have checked that the layer is present as well.

Regards,
Mark

tianchu commented 9 months ago

@pixie79 I'm so sorry that I missed your last reply and dropped the ball here. I hope you have figured out a solution on your own. If not, could you open a support ticket for follow-up? A GitHub issue is apparently not a great place for back-and-forth discussions. I'm closing this issue in favor of following up over our support channel. Again, sorry for the super late response.