openvinotoolkit / openvino

OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
https://docs.openvino.ai
Apache License 2.0

Convert a model from IR format to .blob format to run on Azure Percept #8559

Closed tsuting closed 3 years ago

tsuting commented 3 years ago

Hello,

My goal is to convert a model from IR format to .blob format in order to run it on Azure Percept. I tried the following command. The model can be downloaded from here.

./opt/intel/openvino_2021/deployment_tools/tools/compile_tool \
    -m human-pose-estimation-0001.xml \
    -d MYRIAD \
    -ip U8 \
    -VPU_NUMBER_OF_SHAVES 8 \
    -VPU_NUMBER_OF_CMX_SLICES 8 \
    -op FP32

There is no error, only the following warning message:

    E: [xLinkUsb] [ 88320] [myriad_compile] usb_find_device_with_bcd:266 Library has not been initialized when loaded

However, it didn't work on Azure Percept. I'm trying to figure out if the problem is the converted model (.blob) or the code on Azure Percept. Therefore, could you help me answer the following questions?

  1. Does the above command look okay?
  2. Does the warning message matter?
  3. Do I need to set up a configuration file? If yes, could you guide me on how to write one? I found "-c <value> Optional. Path to the configuration file." on the compile_tool page, but I'm not sure how to write this file.
  4. Which one should I use, compile_tool or myriad_compile? The Azure Percept GitHub page uses myriad_compile, not compile_tool, to compile from IR to .blob.

Thank you.

Iffa-Intel commented 3 years ago

Hi,

The VPU/NCS2 only supports FP16 precision, so you'll need to use a model in FP16 format. This might help you understand how to choose the right precision. This is probably the main reason your blob file didn't work.
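For reference, one way to get the FP16 IR is through the Open Model Zoo downloader that ships with OpenVINO 2021. The paths below are only an example for a default install location; adjust them to your setup:

    # Download human-pose-estimation-0001 in FP16 precision from Open Model Zoo.
    # The install path and output directory are placeholders for your environment.
    cd /opt/intel/openvino_2021/deployment_tools/open_model_zoo/tools/downloader
    python3 downloader.py --name human-pose-estimation-0001 --precisions FP16 -o ~/models

This downloads the human-pose-estimation-0001.xml and .bin files under an FP16 subfolder, which you can then pass to compile_tool.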

For the input layer precision (-ip) you can use U8 (note that it will be converted to FP16 internally). However, the output layer is expected to be FP16 in this case, because the output of one layer is usually the input of the next layer in the network. Without -op, the tool automatically uses FP16 for the network output, so the -op parameter is optional anyway.
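Putting it together, the compile step could look roughly like this. This is only a sketch: the binary path inside the compile_tool folder, the FP16 IR path, and the -o output name are assumptions to adapt to your install, and the SHAVE/CMX values are simply the ones from your original command:

    # Compile the FP16 IR for MYRIAD with U8 input precision.
    # -op is omitted so the output precision defaults to FP16.
    /opt/intel/openvino_2021/deployment_tools/tools/compile_tool/compile_tool \
        -m human-pose-estimation-0001/FP16/human-pose-estimation-0001.xml \
        -d MYRIAD \
        -ip U8 \
        -VPU_NUMBER_OF_SHAVES 8 \
        -VPU_NUMBER_OF_CMX_SLICES 8 \
        -o human-pose-estimation-0001.blob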

I managed to get the blob file with human-pose-estimation-0001 in FP16 format. You could probably try this instead and use it to run on your Azure Percept.

blob

tsuting commented 3 years ago

Hello,

Thanks for your reply and suggestions! I found that I made a mistake with the zipped file, which is why the Percept could not read the model. I used the converted FP32 model and it works. I will check out the VPU support you mentioned. Thank you!