AWS Lambda Power Tuning is an open-source tool that can help you visualize and fine-tune the memory/power configuration of Lambda functions. It runs in your own AWS account - powered by AWS Step Functions - and it supports three optimization strategies: cost, speed, and balanced.
I have a small Python 3.7 Lambda function that uses boto3 to query the list of IAM roles in an account. When I run it through aws-lambda-power-tuning I get more-or-less expected results: execution time drops as the memory allocation increases, and there is a slight cost improvement at 256MB vs 128MB. The cost comes out to something like $0.0000056 per invocation, and it takes approximately 2.5 seconds to run.
If I run the same code on Python 3.11 on ARM, the reported cost is three orders of magnitude lower, but the execution time is 50% longer. Given that ARM is only about 20% cheaper per GB-second, and the execution time is 50% higher, why would the price come out at roughly 1/1000th of the original function's?
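A back-of-the-envelope check (a sketch only; the per-GB-second prices below are approximate us-east-1 on-demand figures, not values taken from the tuner's output) suggests the ARM run should cost about the same as the x86 one, maybe slightly more, certainly not 1000x less:

```python
# Rough Lambda duration-cost estimate at a 128MB allocation.
# Assumed (approximate, illustrative) on-demand prices:
#   x86: ~$0.0000166667 per GB-second
#   ARM: ~$0.0000133334 per GB-second (~20% cheaper)
GB = 128 / 1024  # 128 MB expressed in GB

x86_cost = GB * 2.5 * 0.0000166667          # ~2.5 s observed on x86
arm_cost = GB * (2.5 * 1.5) * 0.0000133334  # 50% longer on ARM

print(f"x86: {x86_cost:.10f}")  # ~0.0000052, in the ballpark of the observed 0.0000056
print(f"arm: {arm_cost:.10f}")
print(f"x86/arm ratio: {x86_cost / arm_cost:.2f}")  # ~0.83 - ARM slightly MORE expensive here
```

So under these assumptions the expected ratio is under 2x either way, which is why the 1000x gap in the tuner's output looks wrong.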
Is it a case where the numbers are so small that we're running into the resolution limits of floating-point math?
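For what it's worth, a quick check of the spacing between adjacent IEEE-754 doubles at these magnitudes (a sketch; the example values are mine, not from the tuner) suggests float resolution shouldn't be the culprit, since a double keeps roughly 15-17 significant decimal digits regardless of exponent:

```python
import math

# Relative spacing (ULP / value) of doubles near the two cost magnitudes.
# If this is ~1e-16, the representation error is far too small to turn
# 5.6e-6 into 5.6e-9.
for cost in (5.6e-6, 5.6e-9):
    print(cost, math.ulp(cost) / cost)
```

Both values print a relative spacing on the order of 1e-16, so a three-orders-of-magnitude discrepancy can't come from double precision alone.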