Toolkit for inference and serving with PyTorch on SageMaker. Dockerfiles used for building SageMaker PyTorch containers are at https://github.com/aws/deep-learning-containers.
What did you find confusing? Please describe.
How do you specify batch size for MME models?
Describe how documentation can be improved
This blog describes using environment variables to set batch size and other parameters for a single-model endpoint; however, I haven't found any documentation on setting batch size for individual models within an MME.
Additional context
Each model in my MME has a MAR-INF/MANIFEST.json within its model.tar.gz, so I tried to specify batchSize in these files, but I don't think it's being applied.
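For illustration, the attempt amounted to adding a batchSize key to each model's MAR-INF/MANIFEST.json inside its model.tar.gz. A rough sketch is below; apart from the batchSize line, the field names and values are placeholders approximating a typical torch-model-archiver manifest, not my exact files:

```json
{
  "runtime": "python",
  "model": {
    "modelName": "example-model",
    "serializedFile": "model.pt",
    "handler": "handler.py",
    "modelVersion": "1.0",
    "batchSize": 8
  }
}
```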