[Open] iluoyi opened this issue 5 years ago
Sorry for the delayed response @iluoyi!

We don't natively support multiple inputs in our default hosting functions, but you can override the defaults by providing your own transform_fn in your entry point code.

Here is an example where we've overridden the default transform_fn: https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/mxnet_gluon_mnist/mnist.py#L166-L183. (Unfortunately, I don't think we have an example of a model accepting multiple inputs yet.)
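Since there's no official multi-input example yet, here is a minimal sketch of what a transform_fn accepting two named inputs might look like. The signature `(net, data, input_content_type, output_content_type)` follows the convention used in the mnist.py example linked above; the JSON payload layout (`{"data0": [...], "data1": [...]}`) and the toy model are assumptions for illustration, and the mxnet-specific conversion is shown only in comments.

```python
import json

def transform_fn(net, data, input_content_type, output_content_type):
    """Sketch of a custom transform_fn for a model with two named inputs.

    In a real entry point you would convert each input with mxnet, e.g.:
        inputs = [mx.nd.array(payload[name]) for name in ('data0', 'data1')]
        output = net(*inputs)
    (assumed usage; not executed here so the sketch stays self-contained)
    """
    if input_content_type != 'application/json':
        raise ValueError('unsupported content type: %s' % input_content_type)

    payload = json.loads(data)
    # Pull out each named input in the order the model expects.
    inputs = [payload['data0'], payload['data1']]
    output = net(*inputs)  # `net` stands in for your two-input model

    return json.dumps(output), output_content_type

# Toy stand-in "model" that just sums its two input vectors element-wise.
def toy_net(a, b):
    return [x + y for x, y in zip(a, b)]

body = json.dumps({'data0': [1.0, 2.0], 'data1': [3.0, 4.0]})
response, ctype = transform_fn(toy_net, body, 'application/json',
                               'application/json')
# response is "[4.0, 6.0]"
```

The key point is that transform_fn owns the whole request path (deserialize, predict, serialize), so it is free to unpack as many named inputs as the model needs.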
Same question here. I trained my own model, which requires two inputs, and deployed it as an endpoint. But when I called predict(), the error was "Expects arg[1] to be int32 but float is provided".
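One common cause of that error is that JSON numbers deserialize as floats, while the graph declares that argument as int32, so casting the offending input before inference usually fixes it. A minimal sketch using numpy (in mxnet entry-point code the equivalent would be `mx.nd.array(raw, dtype='int32')`, shown only as a comment):

```python
import numpy as np

# JSON payloads arrive as Python floats (float64 once converted to an array),
# but the model declares arg[1] as int32, so cast explicitly before inference.
raw = [1.0, 2904.0, 1452.0]              # example float payload
arg1 = np.asarray(raw).astype('int32')   # cast to the dtype the graph expects
# With mxnet you would do the equivalent (assumed usage):
#   arg1 = mx.nd.array(raw, dtype='int32')
```

If the cast happens inside your transform_fn, callers can keep sending plain JSON numbers without worrying about dtypes.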
@ZHAO0189 sorry for the slow response. That sounds like it's likely an issue with your serving entry point code. Can you share your inference script?
https://github.com/aws/sagemaker-mxnet-container/blob/ee9098c8c2de6a635dcd9f4b0819dc5340061cde/src/sagemaker_mxnet_container/serving.py#L228
If my MXNet model takes two data inputs (both float arrays), how should I make it work at this point?
I defined my model-shapes.json like [{"shape": [1, 12], "name": "data0"}, {"shape": [1, 12], "name": "data1"}]
And one example input looks like: data = {'data0':[1.0, 2904.0, 1452.0, 464.0, 3022.0, 2948.0, 2548.0, 2.0, 0.0, 0.0, 0.0, 0.0], 'data1':[1.0, 2204.0, 1552.0, 494.0, 3032.0, 298.0, 2568.0, 2.0, 0.0, 0.0, 0.0, 0.0]}
But I got errors on the server side:
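Without the actual server-side error it's hard to diagnose, but one thing worth checking is that each input in the request really matches the shape declared in model-shapes.json. A small validation sketch in pure Python, using the shapes and payload from the comment above (`check_inputs` is a hypothetical helper, not part of the SageMaker container):

```python
import json

# Shapes as declared in model-shapes.json above.
model_shapes = json.loads(
    '[{"shape": [1, 12], "name": "data0"}, '
    '{"shape": [1, 12], "name": "data1"}]')

# Example request payload from the comment above.
data = {'data0': [1.0, 2904.0, 1452.0, 464.0, 3022.0, 2948.0,
                  2548.0, 2.0, 0.0, 0.0, 0.0, 0.0],
        'data1': [1.0, 2204.0, 1552.0, 494.0, 3032.0, 298.0,
                  2568.0, 2.0, 0.0, 0.0, 0.0, 0.0]}

def check_inputs(shapes, payload):
    """Verify each declared input is present with the right element count."""
    for spec in shapes:
        name, (batch, width) = spec['name'], spec['shape']
        values = payload.get(name)
        if values is None:
            raise KeyError('missing input: %s' % name)
        if len(values) != batch * width:
            raise ValueError('%s: expected %d values, got %d'
                             % (name, batch * width, len(values)))
    return True

ok = check_inputs(model_shapes, data)
```

If the shapes line up, the next place to look is the serving entry point: the default handlers assume a single input, so a two-input model needs a custom transform_fn that splits the payload into both named arrays before calling the module.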