szymonmaszke / torchlambda

Lightweight tool to deploy PyTorch models to AWS Lambda
MIT License

10 GB Lambda #14

Open Emveez opened 3 years ago

Emveez commented 3 years ago

So AWS Lambda now supports up to 10 GB of memory and scales computational capacity in proportion to the memory allocation. I ran inference with a 3 GB allocation and compared it with 10 GB, but did not see any major improvement. Why could this be? Maybe the statically compiled torch cannot use all vCPUs?
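
A quick way to check, I think, would be something like this in the handler (a minimal sketch assuming the standard libtorch C++ setup; the thread-count functions come from ATen's `Parallel.h`):

```cpp
// Minimal sketch: print how many vCPUs the Lambda environment exposes
// versus how many threads libtorch will actually use.
#include <ATen/Parallel.h>

#include <iostream>
#include <thread>

int main() {
  // vCPUs visible to the process under the current memory allocation
  std::cout << "hardware_concurrency: "
            << std::thread::hardware_concurrency() << '\n';
  // threads libtorch uses to parallelize single operations (intra-op)
  std::cout << "intra-op threads: " << at::get_num_threads() << '\n';
  // threads libtorch uses to run independent operations (inter-op)
  std::cout << "inter-op threads: " << at::get_num_interop_threads() << '\n';
  return 0;
}
```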

szymonmaszke commented 3 years ago

I think all the cores should be used out of the box even with a static build. You may also try changing some PyTorch flags as described in the documentation and in torchlambda build.
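
If you want to rule that out, a rough sketch would be forcing the intra-op thread count at startup (this is just an assumption on my side, not something torchlambda generates for you):

```cpp
// Rough sketch: force intra-op parallelism to the number of visible
// cores before the first inference call.
#include <ATen/Parallel.h>

#include <algorithm>
#include <thread>

void configure_torch_threads() {
  const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
  // must run before any parallel libtorch work is started
  at::set_num_threads(static_cast<int>(cores));
}
```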

You can see the available flags here and specify them like this (for example: `torchlambda build --pytorch USE_OPENMP=ON`).

You may need to profile your application somehow, which might require manually changing the C++ code. If you find something, please let me know though.
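
For example, a minimal timing sketch could look like this (`timed_forward` is just a hypothetical helper, not part of the generated code):

```cpp
// Hypothetical timing helper for the handler: wraps one forward pass
// with std::chrono and logs the latency (anything written to stdout
// ends up in the function's CloudWatch logs).
#include <torch/script.h>

#include <chrono>
#include <iostream>

torch::Tensor timed_forward(torch::jit::script::Module& module,
                            const torch::Tensor& input) {
  const auto start = std::chrono::steady_clock::now();
  auto output = module.forward({input}).toTensor();
  const auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
      std::chrono::steady_clock::now() - start);
  std::cout << "forward pass: " << elapsed.count() << " ms" << std::endl;
  return output;
}
```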