CyLouisKoo closed this issue 6 years ago
It seems that you are requesting with "input" data, but "input" is not in the model signature.
For image inference, we recommend exporting the image model with "images" as the input, and clients should then request with "images" data instead of "input".
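The mismatch above shows up in the request payload: the client sends a key that the exported signature does not declare. A minimal sketch of the idea, noting that the exact JSON schema here is an assumption for illustration and not the documented simple_tensorflow_serving API:

```python
# Hypothetical payload shapes: the exported signature declares "images",
# so a request keyed by "input" will be rejected.
bad_payload = {"model_version": 1, "data": {"input": [[0.0, 0.0]]}}

# Matching payload: the request key mirrors the signature's input name.
good_payload = {"model_version": 1, "data": {"images": [[0.0, 0.0]]}}

def matches_signature(payload, signature_inputs=frozenset({"images"})):
    """Check that every key in the request's data section is declared
    among the model signature's inputs."""
    return set(payload["data"]).issubset(signature_inputs)

print(matches_signature(bad_payload))   # False
print(matches_signature(good_payload))  # True
```

The same check is what the server effectively performs when it compares the request body against the exported signature, which is why renaming the input to "images" on both sides resolves the error.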
How do I do this specifically? Suppose I have the following ready-made model. How can I quickly call it, and is there a ready-made solution? Thanks very much for your answer.
Thanks for your response. I have updated the deep_image_model to take a base64-encoded image as input.
Now you can run git pull to get the latest pre-trained model and test with these commands:
simple_tensorflow_serving --model_base_path="./models/deep_image_platforms"
curl -X POST -F 'image=@./images/mew.jpg' -F "model_version=1" 127.0.0.1:8500
Or go to the dashboard at http://localhost:8500/ and upload the image to run inference.
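For clients that cannot shell out to curl, the same multipart/form-data request can be built with the Python standard library. This is a sketch of an equivalent to the curl command above; the field names `image` and `model_version` mirror that command, and the endpoint is the one from this thread:

```python
import io
import uuid
import urllib.request

def build_multipart(image_bytes, model_version="1", filename="mew.jpg"):
    """Build a multipart/form-data body equivalent to:
    curl -F 'image=@./images/mew.jpg' -F "model_version=1" ..."""
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    # File part: the raw image bytes under the form field "image".
    body.write((
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="image"; filename="{filename}"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n"
    ).encode())
    body.write(image_bytes)
    # Plain text part: the model version, then the closing boundary.
    body.write((
        f"\r\n--{boundary}\r\n"
        'Content-Disposition: form-data; name="model_version"\r\n\r\n'
        f"{model_version}\r\n"
        f"--{boundary}--\r\n"
    ).encode())
    content_type = f"multipart/form-data; boundary={boundary}"
    return body.getvalue(), content_type

# Sending the request (requires the server running on 127.0.0.1:8500):
# data, ctype = build_multipart(open("./images/mew.jpg", "rb").read())
# req = urllib.request.Request("http://127.0.0.1:8500", data=data,
#                              headers={"Content-Type": ctype})
# print(urllib.request.urlopen(req).read())
```

Libraries like `requests` make this a one-liner, but the stdlib version avoids any extra dependency.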
Sorry for the late reply. Both of the above methods work. Thanks very much, and I will study your code framework further.
Great and thanks for reporting the issue.
I'm gonna close the issue; feel free to re-open if you have any other questions.