huonw opened this issue 10 months ago
As a workaround, we might have to downgrade to `arn:aws:lambda:ap-southeast-2:451483290750:layer:NewRelicPython310:1`, but that seems like it would be quite old. Are there change logs for each layer version so we can work out the risks?
For a really simple lambda:
print("imported")
def handler(*args):
print("started")
The numbers are a fair bit smaller, and somewhat different (regressions from v1 -> v2 and from v15 -> v16), but still bad:
| version | time from AWS `START` to `print("started")` (s) |
| --- | --- |
| 17 | 1.5-1.6 |
| 16 | 1.5-1.6 |
| 15 | 0.5-0.6 |
| ... | (similar) |
| 2 | 0.5-0.6 |
| 1 | 0.2 |
There's also ~0.6s of init-time overhead on a cold start, from `INIT_START` to `print("imported")`, which seems to be constant across all versions.
That is, a cold start of a lambda with New Relic instrumentation seems to spend a fair chunk of time (>2s) on the New Relic overhead.
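For reference, a minimal variant of the lambda above that makes the gap visible directly in the function's own log output (my own sketch, not part of the measurements above):

```python
import time

_import_time = time.monotonic()
print("imported")

def handler(*args):
    # With the New Relic wrapper in front of this handler, the interval between
    # module import and the first invocation appears to absorb most of the
    # overhead described above.
    print(f"started, {time.monotonic() - _import_time:.2f}s after import")
    return {"statusCode": 200}
```

Deployed behind `newrelic_lambda_wrapper.handler` (with `NEW_RELIC_LAMBDA_HANDLER` pointing at this module), the printed delta should roughly track the per-layer-version differences in the table.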
Observing the same slowdown (from a few hundred milliseconds to 8-15 seconds) for `arn:aws:lambda:eu-central-1:451483290750:layer:NewRelicPython311:11`.
I've been communicating with New Relic support and they suggested setting `NEW_RELIC_PACKAGE_REPORTING_ENABLED: false`
https://docs.newrelic.com/docs/apm/agents/python-agent/configuration/python-agent-configuration/#package-reporting-enabled
to bypass this (new) code: https://github.com/newrelic/newrelic-python-agent/blame/2b14392a19517a20012d281fbaaedfc2497f4fc3/newrelic/core/environment.py#L207-L245
This makes a big difference to us. The start time is now more like 0.1s, rather than 1.5s (for the reduced lambda) or 5s (for our real ones).
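For anyone applying the same workaround programmatically, a rough sketch using boto3 (the function name is a placeholder; note that `update_function_configuration` replaces the entire `Environment` block, so merge with the existing variables first):

```python
import boto3

lambda_client = boto3.client("lambda", region_name="ap-southeast-2")
function_name = "my-instrumented-function"  # placeholder

# Fetch the current environment so existing variables aren't clobbered.
current = lambda_client.get_function_configuration(FunctionName=function_name)
env = current.get("Environment", {}).get("Variables", {})
env["NEW_RELIC_PACKAGE_REPORTING_ENABLED"] = "false"

lambda_client.update_function_configuration(
    FunctionName=function_name,
    Environment={"Variables": env},
)
```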
Description
We recently upgraded our New Relic layers, and this appears to have resulted in a massive slow-down in our cold start times: the time from AWS's `START` log line to "our code executing" has gone from 100-200ms to 5-10s, and the slowdown is entirely attributable to New Relic. For instance, we don't see it if we set the lambda to call our handler directly (not going via `newrelic_lambda_wrapper.handler` + `NEW_RELIC_LAMBDA_HANDLER`).

We're in `ap-southeast-2` (Sydney), and are using layers like `arn:aws:lambda:ap-southeast-2:451483290750:layer:NewRelicPython310:17`. Comparing versions (i.e. changing the `17`) with a lambda with 256MB of memory, we find the interesting start times are:

In summary:
The final result is still a massive slowdown from where it used to be. This causes issues for us. For instance, Lambda hooks for Cognito have a strict 5-second time limit... and the initialisation time alone exceeds that, even without running any real code (we do use provisioned concurrency, but we want to have functional Lambdas even if a spike in load exceeds the provisioning).
Steps to Reproduce
Deploy a lambda with one of the layer versions above and measure the time between AWS's `START` log line and the printing at the top of the real handler code.
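A sketch of doing that comparison from CloudWatch Logs with boto3 (the log group name and the matched strings are assumptions based on the simple lambda above, not part of the original report):

```python
import time

import boto3

logs = boto3.client("logs", region_name="ap-southeast-2")
log_group = "/aws/lambda/my-instrumented-function"  # placeholder

# Look at the last hour of events; CloudWatch timestamps are milliseconds since the epoch.
events = logs.filter_log_events(
    logGroupName=log_group,
    startTime=int((time.time() - 3600) * 1000),
)["events"]

# "started" matches the print() in the reduced lambda above; adjust for your handler.
start = next(e for e in events if e["message"].startswith("START RequestId"))
first_print = next(e for e in events if "started" in e["message"])

print(f"START -> print: {(first_print['timestamp'] - start['timestamp']) / 1000:.2f}s")
```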
Expected Behaviour
Start-up times like they used to be: a few milliseconds, not a few seconds.
Relevant Logs / Console output
Using one of the layers between version 8 and 13, with a lambda that has `print("Starting function...")` at the top of the real handler, the CloudWatch logs include something like:

Note how the start line is at `28:56.259` while the `print` happens at `29:06.353`, more than 10s later.

Your Environment
AWS Lambda, Python 3.10, layer versions above.
Additional context
I imagine this may be caused by changes to the underlying agents, but I can't work out how to find the version of the agent that's included in a particular layer, e.g. it doesn't appear to be logged by the init routine.
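One possible way to check which agent version a given layer bundles (a sketch, assuming the layer keeps its dependencies under the usual `python/` prefix and that the agent was installed with pip, leaving a `newrelic-<version>.dist-info` directory):

```python
import io
import re
import urllib.request
import zipfile

import boto3

arn = "arn:aws:lambda:ap-southeast-2:451483290750:layer:NewRelicPython310:17"

# The client region must match the region in the layer ARN.
layer = boto3.client("lambda", region_name="ap-southeast-2").get_layer_version_by_arn(Arn=arn)

# Content.Location is a pre-signed URL for the layer zip.
with urllib.request.urlopen(layer["Content"]["Location"]) as resp:
    archive = zipfile.ZipFile(io.BytesIO(resp.read()))

for name in archive.namelist():
    match = re.search(r"newrelic-([0-9][^/]*)\.dist-info/", name)
    if match:
        print("bundled agent version:", match.group(1))
        break
```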
https://forum.newrelic.com/s/hubtopic/aAXPh0000000cJpOAI/new-relic-python-lambda-layers-performance-has-regressed-massively