aws / sagemaker-inference-toolkit

Serve machine learning models within a 🐳 Docker container using 🧠 Amazon SageMaker.
Apache License 2.0

In a multi-model scenario pass model name as argument to input_fn() and output_fn() #77

Open RajeshRamchander opened 3 years ago

RajeshRamchander commented 3 years ago

Describe the feature you'd like

In cases where the input being inferred differs for each of the hosted models, input_fn() would need to branch based on which model was invoked. Currently, input_fn() is passed only the input data and its content type, so branching on the model name inside input_fn() is not possible. The same applies to output_fn().
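
A minimal sketch of the request, assuming the proposal means adding a model_name parameter to the handler signatures (the extra parameter and the model names below are hypothetical, not part of the toolkit today):

```python
import json

# Today's toolkit contract: input_fn() receives only the payload and its
# content type, so it cannot tell which model on a multi-model endpoint
# was invoked.
def input_fn(input_data, content_type):
    return json.loads(input_data)

# Hypothetical signatures this issue asks for: the toolkit would also pass
# the invoked model's name so pre-/post-processing can branch per model.
def input_fn_proposed(input_data, content_type, model_name):
    payload = json.loads(input_data)
    if model_name == "image-classifier.tar.gz":
        return payload["pixels"]      # this model expects a pixel array
    return payload["features"]       # others expect a feature vector

def output_fn_proposed(prediction, accept, model_name):
    return json.dumps({"model": model_name, "prediction": prediction})
```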

How would this feature be used? Please describe.

This feature would be used in multi-model endpoints where different kinds of models work on different kinds of inputs. These inputs would need preprocessing that depends on which model was invoked, as in the invocation pattern sketched below.
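
For context, a caller already names the model via TargetModel when invoking a multi-model endpoint with boto3; the endpoint and model names here are placeholders. The point of the issue is that this choice never reaches input_fn()/output_fn() inside the container:

```python
import json

import boto3

# Invoke two different models hosted on one multi-model endpoint.
runtime = boto3.client("sagemaker-runtime")

for target, payload in [
    ("churn-model.tar.gz", {"tenure": 12, "plan": "basic"}),
    ("fraud-model.tar.gz", {"amount": 250.0, "country": "US"}),
]:
    response = runtime.invoke_endpoint(
        EndpointName="my-multi-model-endpoint",
        TargetModel=target,               # which model artifact to route to
        ContentType="application/json",
        Body=json.dumps(payload),
    )
    print(target, response["Body"].read())
```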

Describe alternatives you've considered

The only workaround is to pass the model name as a feature in the input data.
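
A sketch of that workaround, assuming the client duplicates the TargetModel value inside the request body so input_fn() can branch on it (the "model_name" and "data" fields are illustrative conventions, not part of the toolkit's API):

```python
import json

def input_fn(input_data, content_type):
    request = json.loads(input_data)
    model_name = request["model_name"]    # must match the TargetModel used
    data = request["data"]
    if model_name == "churn-model.tar.gz":
        # this model expects an ordered feature vector
        return [data[k] for k in ("tenure", "plan")]
    # other models accept the raw dict
    return data
```

The drawback is that the model name must be sent twice, once as TargetModel and once in the body, and every client has to follow this payload convention.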

Additional context

None.