Toolkit for inference and serving with PyTorch on SageMaker. Dockerfiles used for building SageMaker PyTorch containers are at https://github.com/aws/deep-learning-containers.
Describe the feature you'd like
Add an environment variable "OMP_NUM_THREADS" (default value: 1) on CPU instances, and write this value into the TorchServe config.properties file.
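A minimal sketch of the requested behavior: read OMP_NUM_THREADS from the environment, defaulting to "1" on CPU instances, and append it to a TorchServe config.properties file. The helper name, file path, and the exact property key written are illustrative assumptions, not the toolkit's existing API.

```python
import os
import tempfile

def write_omp_setting(config_path: str) -> str:
    """Append the OMP_NUM_THREADS setting to a config.properties file.

    Hypothetical helper: the property key below is an assumption for
    illustration, not a documented TorchServe configuration option.
    """
    # Default to a single OpenMP thread when the variable is unset.
    omp_num_threads = os.environ.get("OMP_NUM_THREADS", "1")
    with open(config_path, "a") as f:
        f.write(f"OMP_NUM_THREADS={omp_num_threads}\n")
    return omp_num_threads

# Example usage, with a temporary file standing in for config.properties:
with tempfile.NamedTemporaryFile("r", suffix=".properties", delete=False) as tmp:
    path = tmp.name
threads = write_omp_setting(path)
```

In the real container, the value would instead be written during server startup, alongside the other properties the toolkit already generates.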
How would this feature be used? Please describe.
There is a related ticket on the TorchServe side.
Describe alternatives you've considered
Additional context