aws / sagemaker-inference-toolkit

Serve machine learning models within a 🐳 Docker container using 🧠 Amazon SageMaker.
Apache License 2.0

fix: transform function to support proper batch inference #125

Open taepd opened 1 year ago

taepd commented 1 year ago

Issue #, if available: This PR is related to #108 and #123.

Description of changes: As mentioned in #123, the batch inference provided by TorchServe delivers requests to the handler as an actual batch. However, the batch inference implementation in #108 simply runs a single inference inside a loop, which is not a correct implementation of batch inference. TorchServe's documentation on batch inference shows an example where the developer handles this logic and feeds the entire input batch to the model.
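To illustrate the difference (a toy sketch with a hypothetical model, not code from this repo):

```python
import torch

# Hypothetical toy model and per-request inputs, for illustration only.
model = torch.nn.Linear(4, 2)
requests = [torch.rand(4) for _ in range(8)]  # one tensor per request

# What the loop in #108 effectively does: one forward pass per request.
looped = [model(x.unsqueeze(0)) for x in requests]

# True batch inference: stack the requests and run a single forward pass,
# so the model actually sees the batch dimension.
batched = model(torch.stack(requests))
```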

If I understand correctly, keeping the batch inference implementation in its current state would be misleading to users.

To make batch inference work correctly, this PR modifies the toolkit so that the list of requests is passed to _transform_fn() as a list.

However, this also requires modifications to related functions such as default_input_fn(), along with the associated documentation, examples, etc. As far as I know there is no better alternative, so it would be good to review and discuss this PR before proceeding with changes to those other functions.
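For reference, a minimal sketch of the shape this gives the transform path; the handler attribute names (self._input_fn, self._predict_fn, self._output_fn) follow the toolkit's Transformer, but the body is illustrative rather than the exact diff in this PR:

```python
def _transform_fn(self, model, requests, content_type, accept):
    # `requests` is a list of raw request payloads instead of a single payload.
    inputs = [self._input_fn(data, content_type) for data in requests]
    # Hand the whole batch to predict_fn in one call so the model can run
    # a single batched forward pass.
    predictions = self._predict_fn(inputs, model)
    # Encode one response per request in the batch.
    return [self._output_fn(prediction, accept) for prediction in predictions]
```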

Testing done: yes

Merge Checklist

Put an x in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your pull request.

General

Tests

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

chen3933 commented 11 months ago

@nskool Is this code change going to impact customers who use the transformer and have implemented their own input_fn, predict_fn, or output_fn that cannot handle len(data) > 1?


nskool commented 11 months ago

@chen3933 It does not seem common to implement a predict_fn, input_fn, or output_fn that handles only len(data) == 1, but if a customer has implemented handlers that process only one request (e.g., an assert that checks the length of the input), then that customer will have to change the logic.
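For example (hypothetical customer code, not from this repo), a handler written like this would start failing once more than one request is delivered per invocation and would need to be rewritten to handle the batch:

```python
# Hypothetical single-request handler: the assert breaks for len(data) > 1.
def predict_fn(data, model):
    assert len(data) == 1, "expected exactly one request"
    return model(data[0])

# Batch-aware rewrite the customer would need instead.
def predict_fn_batch(data, model):
    return [model(item) for item in data]
```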

If we merge this PR in, we can go ahead and document this behavior change.

While the predict_fn is mandatory for the customer to provide (https://github.com/aws/sagemaker-inference-toolkit/blob/master/src/sagemaker_inference/default_inference_handler.py#L71), the input_fn and output_fn are not, so it may be easy for the customer to change predict_fn. However, it should be tested further whether the default input_fn/output_fn can process batch input, specifically the encode/decode methods here: https://github.com/aws/sagemaker-inference-toolkit/blob/master/src/sagemaker_inference/default_inference_handler.py#L71 and https://github.com/aws/sagemaker-inference-toolkit/blob/master/src/sagemaker_inference/encoder.py#L93.

If the default_input_fn and default_output_fn (https://github.com/aws/sagemaker-inference-toolkit/blob/master/src/sagemaker_inference/default_inference_handler.py) cannot handle batch_size > 1, then this will break a lot of scenarios.
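One way to check this (a sketch, assuming the encoder.encode / decoder.decode signatures currently on master) is to round-trip a 2-D array through the default serializers:

```python
import numpy as np
from sagemaker_inference import content_types, decoder, encoder

# Round-trip a 2-D "batch" array through the default encode/decode path
# to see whether batch_size > 1 survives serialization.
batch = np.random.rand(4, 3)
payload = encoder.encode(batch, content_types.NPY)
restored = decoder.decode(payload, content_types.NPY)
assert restored.shape == (4, 3)
```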

@taepd Thank you for creating this PR. Can you confirm whether you tested the default input_fn/output_fn with batch_size > 1?