nikolayNikit opened 2 years ago
You should follow this tutorial for Keras export. Basically, you need to convert your model to the SavedModel format, which you should then be able to compile successfully.
In case shapes are causing issues, you can try providing the `--input_shape`
flag to Model Optimizer (see here)
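For reference, a minimal sketch of the SavedModel export step. The tiny Sequential model below is a stand-in assumption; in practice you would load your own model with `tf.keras.models.load_model("your_model.h5")` and export that instead.

```python
import tensorflow as tf

# Stand-in model; replace with tf.keras.models.load_model("your_model.h5").
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Export in the SavedModel format that Model Optimizer expects:
# a directory containing saved_model.pb plus "variables" and "assets" subfolders.
tf.saved_model.save(model, "saved_model")
```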
Hello! Thanks for the answer. I have successfully saved my model in SavedModel format and got the model folder (a .pb file and two subdirectories). But when I try to convert my model to a blob at http://blobconverter.luxonis.com/, I get an error saying that I need to upload only the model's inference graph.
If I understand correctly, I need to export the inference graph from my original .h5 model (or from the SavedModel format). Is this right? Could you please tell me whether I need to provide the following options when exporting the inference graph? `--input_type=image_tensor --pipeline_config_path={pipeline_fname} --output_directory={output_directory} --trained_checkpoint_prefix={last_model_path}`
These params are from your Colab example: https://colab.research.google.com/github/luxonis/depthai-ml-training/blob/master/colab-notebooks/Easy_Object_Detection_With_Custom_Data_Demo_Training.ipynb
Do you have examples of using TF2 with the OAK-D-Lite?
Hey, I am not sure that we support uploading folders/ZIPs to blobconverter, while TF2 requires a path to the saved_model_dir.
If your Keras model is a standard model, then I don't think you need to specify any configs, but I would suggest locally installing `openvino-dev==2022.1`
from PyPI and using `mo --saved_model_dir path/to/saved_model_directory`
to generate the .xml and .bin, and then uploading those two to blobconverter. Depending on the normalization/image preprocessing you do in your training code, you might have to specify flags like `--mean_values`, `--scale`, or `--reverse_input_channels`
when calling `mo`.
[See here](https://docs.openvino.ai/latest/openvino_docs_MO_DG_Additional_Optimization_Use_Cases.html). Note that the default mode of operation on OAK is BGR UINT8, and that preprocessing directly on OAK is not supported, which is why you might need to specify the mentioned flags.
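To illustrate what those flags bake into the converted model: the resulting IR preprocessing is equivalent to subtracting `--mean_values`, dividing by `--scale`, and (with `--reverse_input_channels`) swapping the channel order. A minimal numpy sketch; the values 127.5 and 255 here are example assumptions, not the right values for any particular model:

```python
import numpy as np

def preprocess(frame_bgr, mean=127.5, scale=255.0, reverse_channels=True):
    """Mimic what --mean_values / --scale / --reverse_input_channels
    fold into the model: optional BGR->RGB swap, then (x - mean) / scale."""
    x = frame_bgr.astype(np.float32)
    if reverse_channels:
        x = x[..., ::-1]  # BGR -> RGB, like --reverse_input_channels
    return (x - mean) / scale

# A solid-white BGR UINT8 frame, as OAK would deliver it by default.
frame = np.full((2, 2, 3), 255, dtype=np.uint8)
out = preprocess(frame)  # every value becomes (255 - 127.5) / 255 = 0.5
```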
Hi, I was able to get both the .xml and .bin files from the Open Model Zoo, in the OpenVINO models directory. I sent them to the online blob converter and it failed with the following error:
`Model file /tmp/blobconverter/715fa5c2879b4f4f8dcff70ecdf8ae64/v3-small_224_1/FP16/v3-small_224_1.xml cannot be opened!`
Help will be greatly appreciated. These are the vanilla files from the OpenVINO website; I did not do a custom build. I know the OpenVINO IR files work, as I was able to test them with the Python scripts.
Thank you
Hey @Whitchurch, 1/ Can you ensure the XML is a valid file (you can open it on your computer)? 2/ Was it generated with OpenVINO 2022.1 or earlier? 3/ When uploading to blobconverter, is there anything blocking the upload (such as a VPN)?
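For step 1, a quick local sanity check is to parse the XML and confirm it is a well-formed OpenVINO IR (the root element of an IR file is `<net>`). The file path in the comment is just an example:

```python
import xml.etree.ElementTree as ET

def check_ir_xml(path):
    """Parse an OpenVINO IR .xml and return its IR version string.

    Raises if the file is not well-formed XML, or is not an IR file at all.
    """
    root = ET.parse(path).getroot()
    if root.tag != "net":
        raise ValueError(f"{path} is not an OpenVINO IR file (root is <{root.tag}>)")
    return root.get("version")

# Example (hypothetical path to the downloaded model):
# print(check_ir_xml("v3-small_224_1.xml"))
```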
@tersekmatija Thank you for getting back to me. I will verify steps 1-3 and let you know as soon as I can, which will be around 2 weeks from today. [I am currently held up with a robotics project due next week. That being said, the OAK-D is currently being used as a regular camera in the project for now.] I will get back to you when I post next week. Thank you.
Hi, I am sad to report that the OAK-D is now disconnecting when I try to run depthai_demo.py. I changed my virtual machine's USB from 3.1 to 2.0. I tried to force USB2 as given in the instructions: https://docs.luxonis.com/en/latest/pages/troubleshooting/#forcing-usb2-communication
But to no avail. I think I now have to stream from my phone camera to my Ubuntu system to feed input for my robot. We can work on this connectivity issue as well... if everything fails, I will return the OAK-D-Lite. Thanks.
CC @Erol444 on the above issue.
System information (version)
- OpenVINO => 2021.4
- Operating System / Platform => Windows 10 64-bit
- Problem classification => blob conversion
- Model type => TensorFlow 2 model
- Device => OAK-D-Lite
Detailed description: I have a model developed with TensorFlow 2 (Keras). This model solves a standard classification problem (detecting objects online and determining their types, for now without segmentation or bounding boxes). According to the documentation, all layers of the model are compatible with DepthAI. My model is saved in HDF5 format. My goal is to run my model on my camera (OAK-D-Lite).
I have tried several options:
All my attempts finished with the same error: `Cannot get length of dynamic dimension`. I don't quite understand what that means. Could you help me please: what is the possible reason for this error?
Also, I tried using your example: https://colab.research.google.com/github/luxonis/depthai-ml-training/blob/master/colab-notebooks/Easy_Object_Detection_With_Custom_Data_Demo_Training.ipynb. In that example, the TensorFlow 1 model is successfully converted to .blob, but in my case, where TF2 is used, I got an error.
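For context (this is a sketch, not a confirmed diagnosis of the error above): "Cannot get length of dynamic dimension" usually points at a dimension the converter sees as undefined (`None`). With Keras, this can happen when the model is built without fixed spatial dimensions; declaring a static input shape before export avoids it, and the remaining dynamic batch dimension can be fixed at conversion time with `mo --input_shape`:

```python
import tensorflow as tf

# Built without fixed spatial dims: height/width stay dynamic (None),
# which conversion tools often cannot resolve.
dynamic = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, None, 3)),
    tf.keras.layers.Conv2D(8, 3),
])

# Declaring a static input shape leaves only the batch dimension dynamic,
# which mo can pin down with --input_shape (or -b).
static = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3),
])

print(dynamic.input_shape)  # dynamic batch, height, and width
print(static.input_shape)   # only the batch dimension is dynamic
```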