aws / aws-lambda-dotnet

Libraries, samples and tools to help .NET Core developers develop AWS Lambda functions.

Lambda memory consumption issues #336

Closed ljacobsson closed 3 years ago

ljacobsson commented 5 years ago

Recently we have been experiencing steadily growing memory usage in certain functions, resulting in "Process exited before completing request" and a new Lambda instance being started.

We first noticed this on 2018-09-11 at 12:43:06. Shortly before that we had made a deployment changing the runtime from dotnetcore2.0 to 2.1. No other significant changes were made to the code that could cause a memory leak and it was running without errors before that deployment.

I've reproduced the error with the following code:

using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DataModel;
using Amazon.DynamoDBv2.DocumentModel;
using Amazon.DynamoDBv2.Model;
using Amazon.Lambda.Core;
using Amazon.Lambda.DynamoDBEvents;
using Newtonsoft.Json;

public class DynamoDBTrigger
{
    private IAmazonDynamoDB _ddbClient;

    public DynamoDBTrigger()
    {
        _ddbClient = new AmazonDynamoDBClient();
    }

    [LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]
    public void Process(DynamoDBEvent ev, ILambdaContext context)
    {
        foreach (var record in ev.Records)
        {
            var item = Convert(record.Dynamodb.NewImage);
            context.Logger.Log(JsonConvert.SerializeObject(item));                
        }
    }

    // Converts the raw stream image into an Item, creating a new DynamoDBContext per record
    private Item Convert(Dictionary<string, AttributeValue> attributeMap)
    {
        using (var context = new DynamoDBContext(_ddbClient))
        {
            var doc = Document.FromAttributeMap(attributeMap);
            return context.FromDocument<Item>(doc, new DynamoDBOperationConfig { OverrideTableName = Environment.GetEnvironmentVariable("Table") });
        }
    }
}
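
For reference, a minimal variation that creates the DynamoDBContext once in the constructor and reuses it across invocations (rather than per record) would look like the sketch below. This is only to rule out per-invocation allocations, not a confirmed fix; it assumes Item is a plain POCO mapped to the table and uses the same using directives as above.

public class DynamoDBTriggerReusingContext
{
    private readonly IAmazonDynamoDB _ddbClient;
    private readonly DynamoDBContext _dbContext;

    public DynamoDBTriggerReusingContext()
    {
        _ddbClient = new AmazonDynamoDBClient();
        // Created once per container and reused for every invocation
        _dbContext = new DynamoDBContext(_ddbClient);
    }

    [LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]
    public void Process(DynamoDBEvent ev, ILambdaContext context)
    {
        var config = new DynamoDBOperationConfig { OverrideTableName = Environment.GetEnvironmentVariable("Table") };

        foreach (var record in ev.Records)
        {
            var doc = Document.FromAttributeMap(record.Dynamodb.NewImage);
            var item = _dbContext.FromDocument<Item>(doc, config);
            context.Logger.Log(JsonConvert.SerializeObject(item));
        }
    }
}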

I'm writing one row per second to the triggering table, and I'm seeing a memory increase of about 1MB per 3 invocations, leading to it sitting at 128MB for a while before logging this:

START RequestId: 3e5acba1-36fd-4a3a-bf84-659c5488437b Version: $LATEST
END RequestId: 3e5acba1-36fd-4a3a-bf84-659c5488437b
REPORT RequestId: 3e5acba1-36fd-4a3a-bf84-659c5488437b  Duration: 2293.32 ms    Billed Duration: 2300 ms Memory Size: 128 MB    Max Memory Used: 128 MB 
RequestId: 3e5acba1-36fd-4a3a-bf84-659c5488437b Process exited before completing request

I have tried a 256MB configuration and the same thing happens; the increase seems linear.
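
To make the growth easier to pin down, a little instrumentation inside the handler can show whether it is the managed heap or the whole process working set that is growing. A minimal sketch (the field and method names are just for illustration, and it assumes using System.Diagnostics for Process):

    // Logs managed heap size vs. process working set on every invocation.
    // If the working set grows while the managed heap stays flat, the growth
    // is likely in native/unmanaged memory rather than a leak of .NET objects.
    private static int _invocationCount;

    private void LogMemory(ILambdaContext context)
    {
        _invocationCount++;
        var managedMb = GC.GetTotalMemory(forceFullCollection: false) / (1024 * 1024);
        var workingSetMb = Process.GetCurrentProcess().WorkingSet64 / (1024 * 1024);

        context.Logger.Log(
            $"invocation={_invocationCount} managedMB={managedMb} workingSetMB={workingSetMb} " +
            $"gen0={GC.CollectionCount(0)} gen1={GC.CollectionCount(1)} gen2={GC.CollectionCount(2)}");
    }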

I have observed a similar thing with an API Gateway Lambda proxy which uses the APIGatewayProxyFunction entry point. In that case the memory increases linearly until it reaches the limit, where it stays for a while before ASP.NET seems to want to clear up some memory (screenshot of the CloudWatch memory graph omitted). Highlighting three observations:

1) High duration on the invocations before and after garbage collection
2) Max Memory Used dropped from 128MB to 68MB
3) The same log stream was used, so the Lambda instance was not discarded

As far as I can tell, we haven't changed anything or pulled in any new packages. Could the release of .NET Core 2.1.4 around the same time last week have caused this change in behaviour?

genifycom commented 5 years ago

I have also noticed a repeatable pattern. When I publish a C# Lambda, run it, and look at its memory consumption, it has one value, say 74MB. When I simply republish it, with no changes and no recompilation, it then runs at a lower memory consumption, say 66MB.

I am also seeing projects that used to be fine now exceeding the 128MB limit, with the process terminating prematurely.

twopointzero commented 5 years ago

128MB can barely start the process, let alone run it effectively. Having recently tested cold start performance for a relatively simple ASP.NET Core Lambda proxy implementation at every available memory limit, I can say with good confidence that the commenters before me are thrashing the GC throughout the request lifecycle and causing their own memory issues.

They will see cold starts in the low single-digit seconds and sub-to-low-millisecond response times if they raise the memory limit to a level suitable for their application's memory needs, one that minimizes start and execution times without increasing the cost per invocation.
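
One thing worth checking when chasing GC behaviour at small memory sizes is which GC flavour the runtime is actually using, since server GC reserves considerably more memory per heap than workstation GC. I'm not sure what the managed Lambda runtime defaults to, but it can be inspected at runtime with a sketch like this (assumes using System.Runtime for GCSettings):

    // Logs which GC mode and latency mode the runtime is using.
    public void LogGcMode(ILambdaContext context)
    {
        context.Logger.Log(
            $"IsServerGC={GCSettings.IsServerGC} " +
            $"LatencyMode={GCSettings.LatencyMode} " +
            $"ProcessorCount={Environment.ProcessorCount}");
    }

If it does turn out to be server GC, setting ServerGarbageCollection to false in the project file is a commonly suggested way to trade throughput for a smaller footprint on low-memory functions.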

rajatshuvro commented 5 years ago

I have a Lambda with 3GB of memory, but I see the same issue. In the beginning my process takes about 1.5GB, but it gradually keeps increasing. Sometimes the same input consumes wildly different amounts of memory (leading to terminations). There is nothing random in my code that would cause the memory consumption to vary.

ljacobsson commented 5 years ago

@twopointzero 128MB was ample for the Lambda in my example before upgrading it to .NET Core 2.1.

I agree that 128MB isn't enough for ASP.NET Core Lambda proxies, but this function doesn't make use of Microsoft.AspNetCore.App.

hunkeelin commented 5 years ago

This has something to do with the container staying warm even after the function completes. Instead of returning "", nil, just panic; that works around the problem. I know it's a hack, but that's how it is at the moment.
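
That comment is written in Go terms; a rough C# equivalent of the same workaround (deliberately killing the process so the warm container is discarded) might look like the sketch below. The 90% threshold is arbitrary, and Environment.FailFast ends the process immediately, so this is very much a last resort (assumes using System.Diagnostics for Process):

    // If the process is close to the configured memory limit at the end of an
    // invocation, terminate it so Lambda starts a fresh container next time.
    private static void RecycleIfNearMemoryLimit(ILambdaContext context)
    {
        var workingSetMb = Process.GetCurrentProcess().WorkingSet64 / (1024 * 1024);
        if (workingSetMb > context.MemoryLimitInMB * 0.9)
        {
            context.Logger.Log($"Working set {workingSetMb}MB is near the {context.MemoryLimitInMB}MB limit; recycling container.");
            Environment.FailFast("Recycling Lambda container due to high memory usage.");
        }
    }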

NGL321 commented 3 years ago

Is anyone still experiencing this issue on current versions of lambda?

github-actions[bot] commented 3 years ago

This issue has not received a response in 2 weeks. If you want to keep this issue open, please just leave a comment below and auto-close will be canceled.

ozgurakcali commented 2 years ago

@NGL321 we are still experiencing the same issue, on a function we invoke every ~10 minutes. It looks like subsequent runs of the same code on the same input use more and more memory, and when usage exceeds the function's memory limit the task exits.

I've recreated the Lambda environment locally in Docker using the images hosted here: https://hub.docker.com/r/lambci/lambda/, and this problem does not happen there with the same input.

It looks like a memory leak at first, but it's not actually a memory leak: I print process memory consumption at certain places in the function, and it always starts at nearly the same low value, but after a number of invocations it starts to consume much more memory as the function progresses.
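
One way to tell whether that growth is really leaked memory or just garbage the runtime hasn't collected yet is to force a full, compacting collection and compare the managed heap before and after. A diagnostic-only sketch (assumes using System.Runtime for GCSettings; a forced full GC adds noticeable latency, so it shouldn't stay in production code):

    // Forces a blocking, compacting full collection and reports how much of
    // the managed heap was actually reclaimable.
    private static void ReportReclaimableMemory(ILambdaContext context)
    {
        var beforeMb = GC.GetTotalMemory(forceFullCollection: false) / (1024 * 1024);

        GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        var afterMb = GC.GetTotalMemory(forceFullCollection: true) / (1024 * 1024);
        context.Logger.Log($"Managed heap before={beforeMb}MB, after full GC={afterMb}MB");
    }

If the "after" number keeps climbing across invocations, something is genuinely being retained; if it drops back to a stable baseline, the growth is garbage the GC simply hasn't needed to collect yet.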