Closed — driv3r closed this issue 4 years ago
This sounds really weird, could it be a coincidence with something else?
No idea. Some people on my team suggested there could be a deadlock somewhere, but I personally have neither Python nor Go experience, so I can't really play around and try to debug this myself. I could try bringing back the old memory setup if you have ideas for gathering debugging info that could help find the issue.
@driv3r sorry for not following up here, but I believe we have fixed the issue since https://github.com/DataDog/datadog-serverless-functions/releases/tag/aws-dd-forwarder-3.16.0
Describe what happened: After setting up the log forwarder Lambda with the defaults from the CloudFormation template (memory at 1024MB), we saw that it takes 10+ seconds to compute and forward APM traces alone. This was also inflating the cost of the forwarder.
We saw that it only uses around 130MB, so we reduced the reserved memory down to 256MB to cut costs. This brought an unexpected result: the runtime went down as well. You can see it in the screenshot from CloudWatch Metrics:
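To illustrate why the lower memory setting cuts cost on both axes, here is a rough sketch of how Lambda billing works: you pay per GB-second, i.e. configured memory times billed duration. The price constant and the durations below are illustrative assumptions, not measurements from our account.

```python
# Rough AWS Lambda cost comparison. The per-GB-second rate below is an
# assumption (roughly the published x86 rate); durations are illustrative.
PRICE_PER_GB_SECOND = 0.0000166667

def invocation_cost(memory_mb: float, duration_s: float) -> float:
    """Cost of a single invocation in USD, billed as memory (GB) x duration (s)."""
    return (memory_mb / 1024) * duration_s * PRICE_PER_GB_SECOND

# Default template config: 1024 MB, observed 10+ s per run.
cost_default = invocation_cost(1024, 10.0)
# Reduced config: 256 MB and, counterintuitively, a shorter run (assumed 5 s).
cost_reduced = invocation_cost(256, 5.0)

assert cost_reduced < cost_default  # lower memory AND lower duration compound
```

Since the reduced setup bills both a quarter of the memory and (in our case) a shorter duration, the savings multiply rather than trade off.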
Describe what you expected: I would expect that with more memory (and thus more vCPU) it would run faster, not slower. Is this only happening in our case?
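For context on the "more memory means more vCPU" expectation: Lambda allocates CPU proportionally to configured memory, reaching a full vCPU at roughly 1769 MB per AWS documentation. A minimal sketch of that relationship (the 1769 MB figure is the documented approximation, not something measured here):

```python
# Lambda CPU share scales linearly with configured memory; per AWS docs,
# one full vCPU is reached at roughly 1769 MB. Treat as an approximation.
FULL_VCPU_MEMORY_MB = 1769

def approx_vcpus(memory_mb: float) -> float:
    """Approximate vCPU share allocated for a given memory setting."""
    return memory_mb / FULL_VCPU_MEMORY_MB

# 1024 MB gets roughly 4x the CPU share of 256 MB, so under a CPU-bound
# workload the 1024 MB configuration should have been the faster one.
share_1024 = approx_vcpus(1024)  # ~0.58 vCPU
share_256 = approx_vcpus(256)    # ~0.14 vCPU
assert share_1024 > share_256
```

This is why the observed slowdown at 1024 MB points at something other than raw CPU, e.g. the deadlock theory mentioned above.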
Steps to reproduce the issue: