Closed Harathi123 closed 5 years ago
In general, I don't feel like bringing your own models to batch transform mode is very well documented. I'd like to see a TensorFlow example using the script mode Python SDK (no Docker included).
If you already have a model that works, you can simply use it in SageMaker Batch Transform by creating a transform job, passing in the input and output locations in S3. When the job completes, it'll upload the inference results to the output location you specified.
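To make the steps above concrete, here is a minimal sketch of assembling the parameters for the `CreateTransformJob` API (the same call `boto3`'s `sagemaker` client exposes as `create_transform_job`). All names here — the model name, bucket, and prefixes — are placeholders, not values from this thread; substitute your own:

```python
def build_transform_request(model_name, input_s3, output_s3):
    """Assemble CreateTransformJob parameters for an existing SageMaker model.

    The model must already be registered in SageMaker (e.g. the one behind
    your real-time endpoint). Inputs are read from the S3 prefix and results
    are written back to the output S3 path when the job completes.
    """
    return {
        "TransformJobName": model_name + "-batch",   # must be unique per job
        "ModelName": model_name,
        "TransformInput": {
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": input_s3,
                }
            },
            # Placeholder content type; match it to what your model expects.
            "ContentType": "application/x-image",
        },
        "TransformOutput": {"S3OutputPath": output_s3},
        "TransformResources": {
            "InstanceType": "ml.m4.xlarge",
            "InstanceCount": 1,
        },
    }


request = build_transform_request(
    "my-object-detector",            # placeholder model name
    "s3://my-bucket/batch-input/",   # placeholder: prefix holding your images
    "s3://my-bucket/batch-output/",  # placeholder: where results will land
)

# To actually submit the job (requires AWS credentials):
# import boto3
# boto3.client("sagemaker").create_transform_job(**request)
```

This is just one way to do it; the SageMaker Python SDK's `Transformer` class wraps the same API at a higher level if you prefer not to build the request yourself.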
SageMaker has a bunch of image-classification sample notebooks which could be helpful, e.g. https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/imageclassification_caltech/Image-classification-lst-format.ipynb
@ryan/@aloha Thanks for the suggestions. Sorry for the late reply. I was able to create a batch transform job successfully.
@Harathi123, if you could mention the steps you used to create the batch transform job, that would be helpful.
I am not using AWS SageMaker's built-in models. I have my own code in a Jupyter notebook and am trying to run it for batch inference.
In the AWS samples, I see a sample file for R deployment, but nothing on how to deploy from a Jupyter instance.
I believe I need to create a Docker image and then train and deploy.
Thanks in Advance.
Hi, I have a few questions regarding SageMaker Batch Transform jobs. Currently I have a deployed model (for object detection in images) and an endpoint that I am able to invoke to get inferences. Right now I am getting inferences one image at a time. I am wondering whether it is possible to get inferences for a batch of images at once using a Batch Transform job. If so, can anyone please provide instructions on how to do this? I searched for examples, but didn't find any relevant ones.
Thanks in advance! Harathi