It's running fine, and I can see requests come in and get processed, but then at some point it crashes (I'm guessing after around 5,000 requests?). This is what the logs for the lambda look like:
...
lambda_1 | REPORT RequestId: 1608e795-54dd-1076-5ec6-951729eb5d8a Duration: 7.17 ms Billed Duration: 100 ms Memory Size: 1536 MB Max Memory Used: 47 MB
lambda_1 | START RequestId: 1acd8f87-5293-1374-3f46-0d469ebd5e78 Version: $LATEST
lambda_1 | END RequestId: 1acd8f87-5293-1374-3f46-0d469ebd5e78
lambda_1 | REPORT RequestId: 1acd8f87-5293-1374-3f46-0d469ebd5e78 Duration: 9.02 ms Billed Duration: 100 ms Memory Size: 1536 MB Max Memory Used: 47 MB
lambda_1 | START RequestId: 4d4e9aae-d264-126c-5d1e-4cbc5c63b32c Version: $LATEST
lambda_1 | END RequestId: 4d4e9aae-d264-126c-5d1e-4cbc5c63b32c
lambda_1 | REPORT RequestId: 4d4e9aae-d264-126c-5d1e-4cbc5c63b32c Duration: 6.61 ms Billed Duration: 100 ms Memory Size: 1536 MB Max Memory Used: 47 MB
lambda_1 | START RequestId: 3e1c34f4-004f-16f7-6301-44c3aac7d164 Version: $LATEST
lambda_1 | Fatal Python error: deallocating None
lambda_1 | Python runtime state: initialized
lambda_1 |
lambda_1 | Current thread 0x00007f0ab78c9740 (most recent call first):
lambda_1 | File "/var/runtime/bootstrap.py", line 473 in main
lambda_1 | File "/var/runtime/bootstrap", line 12 in <module>
So it feels like I'm running out of memory? The lambda itself shouldn't consume that much memory, and I'm not sure how much concurrency I'm driving from the client... Is there a memory leak somewhere? Anything I can do to avoid it?
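One way to check whether the handler itself is accumulating memory (a sketch, not from the thread — the handler name and return shape are assumptions) is to log the process's peak RSS on each invocation and watch whether it climbs across requests:

```python
import resource

def handler(event, context):
    # Peak resident set size of this process so far.
    # On Linux ru_maxrss is reported in kilobytes (bytes on macOS).
    peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print(f"peak RSS: {peak_kb} KB")
    return {"statusCode": 200}
```

If this number stays flat while the container's overall memory grows, the leak is somewhere outside the handler — which is consistent with what the follow-up below found.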
I did a bit more tracing, and I suspect it was actually the JS client that was slowly eating memory. I'll close this, as it doesn't seem to be an issue with docker-lambda. Sorry for the false alarm.
docker-lambda looks awesome. Thanks so much for creating it.
I've only just started using it, but I bumped into an issue that I'm not entirely sure about.
I run it with docker-compose, with a Redis "backend", driving tests that make HTTP requests to the lambda.
docker-compose file
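The compose file itself wasn't included above; a minimal sketch of the kind of setup described (service names, image tags, and ports are assumptions, not the reporter's actual file) might look like:

```yaml
# Hypothetical reconstruction -- the actual compose file was not included.
version: '3'
services:
  redis:
    image: redis:alpine
  lambda:
    image: lambci/lambda:python3.8
    environment:
      # docker-lambda's "stay open" mode keeps the runtime API
      # listening so repeated invocations hit one container
      - DOCKER_LAMBDA_STAY_OPEN=1
    ports:
      - "9001:9001"
    depends_on:
      - redis
```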