aws / sagemaker-pytorch-inference-toolkit

Toolkit for inference and serving with PyTorch on SageMaker. Dockerfiles used for building SageMaker PyTorch containers are at https://github.com/aws/deep-learning-containers.
Apache License 2.0

Add UseContainerSupport flag for model-server to see all available CPUs #98

Open vdantu opened 3 years ago

vdantu commented 3 years ago

Issue #, if available:

Description of changes: This PR adds a JVM arg, -XX:-UseContainerSupport, which TS requires in order to see all the available CPUs when running in a container.
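For background (not part of this PR's diff, just how the flag behaves): since JDK 8u191/10 the JVM enables UseContainerSupport by default, so Runtime.getRuntime().availableProcessors() is capped at the container's cgroup CPU limit; passing -XX:-UseContainerSupport makes the JVM report the host's CPUs instead, which the model server can then use when sizing its workers. A minimal sketch to illustrate the effect (class name is made up for the example):

```java
// AvailableCpus.java -- illustrative demo, not code from this repo.
// Prints the CPU count the JVM reports. Inside a container started with a
// CPU quota (e.g. docker run --cpus=2), the default container support caps
// this value at the quota; disabling it lets the JVM see the host's CPUs:
//   java AvailableCpus                           -> quota-limited count
//   java -XX:-UseContainerSupport AvailableCpus  -> typically the host CPU count
public class AvailableCpus {
    public static void main(String[] args) {
        System.out.println(Runtime.getRuntime().availableProcessors());
    }
}
```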

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

sagemaker-bot commented 3 years ago

AWS CodeBuild CI Report

Powered by github-codebuild-logs, available on the AWS Serverless Application Repository