Closed: nikhil-sk closed this pull request 2 years ago
Powered by github-codebuild-logs, available on the AWS Serverless Application Repository
Issue #, if available: NA
Description of changes:
Add a feature to specify the following model properties, which help with batch inference:
The above model properties are exposed through the following environment variables in the toolkit:
These properties must be supplied as a dictionary to the `env` option when configuring a model with the SageMaker Python SDK.
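As a rough illustration of that wiring (the actual property and environment-variable names were elided above, so the keys below are hypothetical placeholders, not the toolkit's real names), the `env` dictionary could be built and passed like this:

```python
# Sketch: supplying batch-inference properties via the `env` option of a
# SageMaker Python SDK Model. The environment-variable names here are
# hypothetical placeholders, not the toolkit's actual names.

def build_model_env(batch_size, max_batch_delay_ms):
    """Build the dict passed as Model(env=...).

    SageMaker injects these entries as container environment variables,
    so every value must be a string.
    """
    return {
        "SAGEMAKER_MODEL_SERVER_BATCH_SIZE": str(batch_size),            # hypothetical name
        "SAGEMAKER_MODEL_SERVER_MAX_BATCH_DELAY": str(max_batch_delay_ms),  # hypothetical name
    }

env = build_model_env(batch_size=8, max_batch_delay_ms=100)

# With the SageMaker Python SDK (not imported here, to keep this sketch
# self-contained), the dict would then be passed to the model, e.g.:
#
#   from sagemaker.model import Model
#   model = Model(image_uri=..., model_data=..., role=..., env=env)
#   model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
```

The only contract the SDK imposes on `env` is that it is a flat dict of string keys to string values; the toolkit then reads whichever variables it recognizes at container startup.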
Note:
These properties apply only to single-model inference on SageMaker. For a multi-model endpoint, a user still needs to bake the config.properties file into the container and list the models in that config file.

Logs
When run in SageMaker, the model config is correctly picked up from the environment when specified as follows:
Input:
Output:
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.