Describe the feature you'd like
In cases where the input being run through inference differs for each of the hosted models, input_fn() would need to branch based on the model that was invoked. Currently, input_fn() is only passed the input data, so branching on the model name inside input_fn() is not possible. The same applies to output_fn().
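A minimal sketch of the limitation and the requested behavior, assuming the common two-argument input_fn() handler signature; input_fn_with_model, the model_name parameter, and the model names are hypothetical illustrations of the proposal, not an existing API:

```python
import json

# Current handler shape: only the payload and its content type are passed
# in, so the handler cannot tell which hosted model was invoked.
def input_fn(input_data, content_type):
    if content_type != "application/json":
        raise ValueError(f"Unsupported content type: {content_type}")
    return json.loads(input_data)

# Hypothetical extended shape (model_name is the proposed addition),
# allowing per-model preprocessing inside the handler.
def input_fn_with_model(input_data, content_type, model_name):
    payload = json.loads(input_data)
    if model_name == "model_a":  # hypothetical model name
        payload["values"] = [x * 0.5 for x in payload["values"]]
    return payload
```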
How would this feature be used? Please describe.
This feature would be used in multi-model endpoints where different kinds of models work on different kinds of inputs. These inputs would need preprocessing based on the model that was invoked.
Describe alternatives you've considered
The only workaround is to pass the model name as a feature in the input data.
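The workaround above can be sketched as follows; the model_name field and the branching logic are hypothetical examples of how a client-supplied field in the payload would stand in for the missing model-name argument:

```python
import json

# Workaround: the client embeds the model name as a field in the payload
# itself, and input_fn branches on that field before stripping it out.
def input_fn(input_data, content_type):
    if content_type != "application/json":
        raise ValueError(f"Unsupported content type: {content_type}")
    payload = json.loads(input_data)
    model_name = payload.pop("model_name")  # hypothetical field name
    if model_name == "model_a":  # hypothetical per-model preprocessing
        payload["values"] = [x * 0.5 for x in payload["values"]]
    return payload
```

The drawback is that every client must mirror the endpoint's routing information into the request body, duplicating what the serving layer already knows.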
Additional context
None.