Open Aditya-Scalers opened 1 year ago
Hi, you are getting an error that OVMS expects more inputs than you provide. I assume you are using some kind of wrapper around the OV model that hides the fact that OV uses many more inputs than one to perform inference.
We have plans to add Python code execution inside OVMS, which would ease integration in cases where you have an existing Python wrapper.
One thing I noticed as well is that you tried to use a binary audio file - right now OVMS only supports images as binary inputs.
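Since binary inputs currently only accept images, one workaround is to decode the audio file client-side and send plain numeric tensors instead. A minimal sketch of the decoding step, using only the Python standard library (the 16-bit mono PCM format and 16 kHz rate are illustrative assumptions):

```python
import io
import struct
import wave

def wav_to_floats(wav_bytes: bytes) -> list:
    """Decode 16-bit mono PCM WAV bytes to floats in [-1.0, 1.0]."""
    with wave.open(io.BytesIO(wav_bytes), "rb") as wf:
        assert wf.getsampwidth() == 2, "sketch assumes 16-bit PCM"
        frames = wf.readframes(wf.getnframes())
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    return [s / 32768.0 for s in samples]

# Build a tiny in-memory WAV (100 samples of silence) to demonstrate the round trip.
buf = io.BytesIO()
with wave.open(buf, "wb") as wf:
    wf.setnchannels(1)   # mono
    wf.setsampwidth(2)   # 16-bit
    wf.setframerate(16000)
    wf.writeframes(struct.pack("<100h", *([0] * 100)))

floats = wav_to_floats(buf.getvalue())
print(len(floats))  # 100 decoded samples
```

The resulting float list can then be shaped into whatever feature tensor the served model actually expects (for Whisper, typically a log-mel spectrogram computed client-side).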
Using the reference Whisper-with-OpenVINO implementation for subtitle generation, I was able to create the whisper_encoder and whisper_decoder XML and BIN files. Serving whisper_encoder and whisper_decoder as separate models with OVMS, I was able to start the Docker container.
```
New status: ( "state": "AVAILABLE", "error_code": "OK" )
[2023-09-26 13:14:16.673][1][serving][info][model.cpp:88] Updating default version for model: whisper, from: 0
[2023-09-26 13:14:16.673][1][serving][info][model.cpp:98] Updated default version for model: whisper, to: 1
[2023-09-26 13:14:16.673][66][modelmanager][info][modelmanager.cpp:1069] Started model manager thread
[2023-09-26 13:14:16.673][1][serving][info][servablemanagermodule.cpp:45] ServableManagerModule started
[2023-09-26 13:14:16.673][67][modelmanager][info][modelmanager.cpp:1088] Started cleaner thread
```
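For context, serving the two converted models side by side would use a multi-model configuration along these lines (a sketch only; the names and paths are illustrative, not taken from the actual setup):

```json
{
  "model_config_list": [
    {
      "config": {
        "name": "whisper_encoder",
        "base_path": "/models/whisper_encoder"
      }
    },
    {
      "config": {
        "name": "whisper_decoder",
        "base_path": "/models/whisper_decoder"
      }
    }
  ]
}
```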
I am not able to perform inference on these models. Any help would be appreciated.
client code:
When I send a request from the client using the audio file as a binary input, I get this error.
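A hypothetical client sketch (not the actual client code from this issue) that avoids the binary-input path: precompute the encoder's expected tensor client-side and send it as a regular numeric input over gRPC. The input name `mel`, the shape `(1, 80, 3000)`, and the model name are assumptions based on a typical Whisper export; check the served model's metadata for the real names and shapes.

```python
import numpy as np

def build_encoder_request(audio_features: np.ndarray) -> dict:
    """Wrap preprocessed log-mel features into the inputs dict sent to OVMS."""
    # Whisper encoders commonly take a (batch, n_mels, n_frames) float tensor;
    # the input name "mel" is an assumption, not confirmed by this issue.
    return {"mel": audio_features.astype(np.float32)}

def run_encoder(inputs: dict, model_name: str = "whisper_encoder"):
    # Requires a running OVMS instance; imported lazily so the sketch
    # can be inspected without the server or the ovmsclient package.
    from ovmsclient import make_grpc_client  # pip install ovmsclient
    client = make_grpc_client("localhost:9000")
    return client.predict(inputs=inputs, model_name=model_name)

# Offline demonstration with a dummy feature tensor (no server needed).
dummy = np.zeros((1, 80, 3000), dtype=np.float32)
request = build_encoder_request(dummy)
print(request["mel"].shape)  # (1, 80, 3000)
```

With a server running, `run_encoder(request)` would return the encoder's output tensors, which the decoder request would then consume.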