rmontroy closed this issue 3 years ago
Here's a reference noting that "AWS Lambda appears to treat instance placement as a bin-packing problem, and tries to place a new function instance on an existing active VM to maximize VM memory utilization rates." Lambda uses Linux cgroups to limit CPU, memory, disk I/O, and network bandwidth, so the memory-size parameter effectively determines the worst-case allocation of all of these resources. That makes it hard to predict typical Lambda function performance without extensive benchmarking.
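Since the configured memory also scales the CPU share, one rough way to see the effect is to time a fixed CPU-bound workload inside the handler at several memory settings. This is just a sketch; the handler name and iteration count are arbitrary choices, not anything from the benchmark runs above.

```python
import json
import time

def _cpu_benchmark(iterations=2_000_000):
    """Fixed CPU-bound workload: sum of squares in pure Python."""
    total = 0
    for i in range(iterations):
        total += i * i
    return total

def handler(event, context):
    # Deploy this at, e.g., 512 MB, 1024 MB, and 1769 MB and compare
    # cpu_seconds: wall time should shrink roughly in proportion,
    # since Lambda scales the cgroup CPU share with configured memory.
    start = time.perf_counter()
    _cpu_benchmark()
    elapsed = time.perf_counter() - start
    return {"statusCode": 200, "body": json.dumps({"cpu_seconds": elapsed})}
```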
| Configuration | Python 3.8 | Pillow-SIMD | OpenSlide | Lambda | EFS | Rank (1=fastest) |
|---|---|---|---|---|---|---|
| Lambda+Python+SVS images on EFS | x | x | x | x | x | 10 |
| Lambda+Python+S3 DeepZoom pyramid files | x | | | x | | 2 |
| Lambda+Python+DeepZoom pyramid files on EFS | x | | | x | x | 1 |
| EC2+Python+SVS images on EFS | x | x | x | | x | 10 |
| EC2+Python+DeepZoom pyramid files on EFS | x | | | | x | 1.5 |
| EC2+Python+SVS images on EBS | x | x | x | | | 10 |
| EC2+IIPServer+SVS images on EFS | | | x | | x | 5 |
I'm pretty sure the tile cache improves tile-load latency far more than maxing out the Lambda memory setting, since a cache hit skips the CPU-intensive JPEG decoding/encoding entirely, but it would be nice to quantify the difference. It would also be nice to know the difference in service costs.
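One way to quantify that difference locally is a micro-benchmark comparing a cache hit (returning already-encoded bytes) against a decode-plus-re-encode path. The sketch below uses `zlib` purely as a stand-in for the JPEG codec work (so it runs with only the standard library); substituting real OpenSlide reads and Pillow-SIMD JPEG encoding would give the actual numbers. The tile size and key names are made up for illustration.

```python
import time
import zlib

def time_it(fn, repeats=50):
    """Return the median wall time of fn over `repeats` runs."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    samples.sort()
    return samples[len(samples) // 2]

# Synthetic "tile": ~3 MB of compressible bytes standing in for raw pixels.
raw_tile = bytes(range(256)) * 12_000
encoded = zlib.compress(raw_tile)   # stand-in for a JPEG-encoded tile
cache = {"tile/0_0": encoded}       # stand-in for the tile cache

def serve_from_cache():
    # Cache hit: just hand back the already-encoded bytes.
    return cache["tile/0_0"]

def serve_with_recode():
    # Cache miss: decode and re-encode, analogous to an OpenSlide read
    # plus a Pillow JPEG encode on every request.
    pixels = zlib.decompress(encoded)
    return zlib.compress(pixels)

cached_t = time_it(serve_from_cache)
recode_t = time_it(serve_with_recode)
print(f"cache hit: {cached_t * 1e6:8.1f} us")
print(f"re-encode: {recode_t * 1e6:8.1f} us")
```

Even with the codec swapped out, the gap between a dictionary lookup and a full decode/encode round trip should make the shape of the answer obvious.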