jxhno1 opened 2 years ago
Sorry, but I can't understand what you're trying to say. I'm not an English speaker, so I can only understand clear and simple descriptions. Could you please rephrase your question so that non-English speakers can easily understand it?
Could you convert the yolov4-tiny model so that it can be recognized in Unity? Could you tell me the detailed steps? In particular, the checker.check_model(model) API in https://colab.research.google.com/drive/1YjSQ0IJvKimrc5-I4QXaWJ43-nbPqKOS?usp=sharing#scrollTo=9MpHbdcYxnGH
Unfortunately, auto translation isn't very helpful. Could you please provide simple, clear and descriptive information about the problem?
Thanks for your reply. We followed the steps you described in "About the ONNX file": we used the code in https://github.com/PINTO0309/PINTO_model_zoo/tree/main/046_yolov4-tiny/01_float32 and then the code in https://colab.research.google.com/drive/1YjSQ0IJvKimrc5-I4QXaWJ43-nbPqKOS?usp=sharing#scrollTo=9MpHbdcYxnGH, but the call to checker.check_model(model) fails with an error saying the "split" attribute is not recognized. How can we solve this problem? Thank you for your suggestions!
I mean, if you have tested your own steps or used the code in the links above, could you point out the problem or give me a simple, detailed description of your steps? Thanks!
I didn't use his code; I only used download.sh to download the ONNX file.
https://github.com/PINTO0309/PINTO_model_zoo/blob/main/046_yolov4-tiny/01_float32/download.sh
Then, I reconverted the ONNX file using the Colab notebook.
Thank you for your reply, we will try our custom steps then.
We put some effort into training a custom model on our own data using https://github.com/bubbliiiing/yolov4-tiny-keras, converted it to an ONNX file, and then reconverted the ONNX file using the Colab notebook, but we get an error when importing it into Unity.
I have no idea as I have never trained a custom Yolov4 model.
Hello, I would like to ask if the issue you raised has been resolved. I'm facing the same problem as you right now.
I have successfully converted a self-made model created with yolov4-tiny-keras (https://github.com/bubbliiiing/yolov4-tiny-keras) and run it in Unity.
When I check the information of the onnx model in this repository with netron, it looks like this:
format: ONNX v4
producer: tf2onnx 1.8.5
imports: ai.onnx v9
description: converted from saved_model
Therefore, you need to match this version when converting your own model to .onnx. Note that the conversion is from saved_model, not from .h5 (the expectation is that you convert .h5 to saved_model and then convert that to .onnx).
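If it helps, the same metadata can be checked without Netron by reading it with the onnx package. A minimal sketch ("model.onnx" is just a placeholder for your converted file):

# Print the ONNX metadata that Netron shows (format, producer, opset, description).
import onnx

m = onnx.load("model.onnx")
print("format: ONNX v%d" % m.ir_version)                 # e.g. ONNX v4
print("producer:", m.producer_name, m.producer_version)  # e.g. tf2onnx 1.8.5
print("imports:", [(op.domain or "ai.onnx", op.version) for op in m.opset_import])  # e.g. ai.onnx v9
print("description:", m.doc_string)                      # e.g. converted from saved_model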
I used the following to convert .h5 to saved_model:
https://github.com/PINTO0309/PINTO_model_zoo/blob/main/046_yolov4-tiny/01_float32/06_keras_to_saved_model.py
I also used tf2onnx to convert from saved_model to .onnx:
https://github.com/onnx/tensorflow-onnx
Then use the code in this repository to add the attribute to the split layer:
https://colab.research.google.com/drive/1YjSQ0IJvKimrc5-I4QXaWJ43-nbPqKOS?usp=sharing
Keijiro explains the split layer problem in this video:
https://youtu.be/dMgm4ZYfaUI
By doing this, you should not get an error when importing into Unity.
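For reference, here is a rough sketch of the .h5 → saved_model step under those assumptions (the JSON/weights paths are placeholders, and this is not the exact content of 06_keras_to_saved_model.py):

# Rebuild the architecture from the exported JSON and attach the trained weights,
# then export a SavedModel directory that tf2onnx can consume.
import tensorflow as tf

model = tf.keras.models.model_from_json(
    open('yolov4_tiny_custom.json').read(), custom_objects={'tf': tf})
model.load_weights('model_data/yolov4_tiny_custom_weights.h5')
model.save('saved_model')  # SavedModel format (no .h5 extension)

# Afterwards, convert with tf2onnx using the same opset as the repository model:
#   python -m tf2onnx.convert --saved-model saved_model --output model.onnx --opset 9
# and finally run the Colab notebook on model.onnx to add the "split" attribute.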
When running inference in Unity, you need to select your own model from the resource set displayed in the inspector. Also, change the layer names in lines 131 and 132 of ObjectDetector.cs as below:
"Identity" → "conv2d_18" "Identity_1" → "conv2d_21"
I hope this helps.
Thank you for organizing it so nicely. It was very helpful. However, I have three questions.
Question 1) [s4k10503]: "I used the following to convert .h5 to saved_model. https://github.com/PINTO0309/PINTO_model_zoo/blob/main/046_yolov4-tiny/01_float32/06_keras_to_saved_model.py" => What is the correct way to create the 'yolov4_tiny_voc.json' file? I found the following method, but I'm not sure if it's correct.
---------------code 1-------------------------
from nets.yolo import yolo_body
from utils.utils import net_flops

if __name__ == "__main__":
    input_shape = [416, 416, 3]
    anchors_mask = [[3, 4, 5], [1, 2, 3]]
    num_classes = 80
    phi = 0

    model = yolo_body([input_shape[0], input_shape[1], 3], anchors_mask, num_classes, phi=phi)
    model.summary()

    json_string = model.to_json()
    open('yolov4_tiny_voc.json', 'w').write(json_string)
-----------------code 1----------------------
Question 2) There is a problem creating the saved_model. In my case, loading the weights of the trained model works fine with yolov4_tiny_weights_coco.h5, but there is a ValueError when loading different weights such as yolov4_tiny_weights_voc.h5 or custom weights.
ValueError: Shapes (1, 1, 512, 255) and (75, 512, 1, 1) are incompatible
---------------code2-------------------------
import tensorflow as tf

model = tf.keras.models.model_from_json(open('yolov4_tiny_voc.json').read(), custom_objects={'tf': tf})
model.load_weights('..\yolov4-tiny-keras-master\model_data\yolov4_tiny_weights_voc.h5')
model.save('saved_model')
----------------code2------------------------
Question 3) I keep getting errors when converting saved_model to onnx using tf2onnx. I have tried various methods but cannot resolve the issue. The final error is as follows.
RuntimeError: MetaGraphDef associated with tags 'serve' could not be found in SavedModel. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: saved_model_cli
-----------code3-------------------------
!python -m tf2onnx.convert --saved-model folder --output model.onnx --opset 9 --verbose
-----------code3--------------------------
If you could share specific solutions or code, I would be very grateful. My ultimate goal is to implement object detection in Unity by training with custom data.
My explanation was insufficient.
I made some changes to yolo.py in https://github.com/bubbliiiing/yolov4-tiny-keras. You need to edit lines 25 and 26 to point to your custom model path and classes path, as below:
"model_path" : 'model_data/yolov4_tiny_custom_weights.h5',
"classes_path" : 'model_data/yolov4_tiny_custom_classes.txt',
In addition, I added the following lines to the generate method:
self.yolo_model.summary()
json_string = self.yolo_model.to_json()
open('yolov4_tiny_custom.json', 'w').write(json_string)
Then run predict.py.
This may not be the smartest approach, since you have to run predict.py just to generate the JSON file. It should be possible to write code that does only the relevant part, provided you are not mistaken about what is required to generate the desired JSON file.
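As a rough, untested sketch along the lines of code 1, the standalone version might look like this (the paths and the class-count logic are assumptions and must match your training configuration):

# Standalone JSON generation without running predict.py (untested sketch).
from nets.yolo import yolo_body

input_shape = [416, 416, 3]
anchors_mask = [[3, 4, 5], [1, 2, 3]]
# num_classes must match the classes file used for training (one class per line).
num_classes = len(open('model_data/yolov4_tiny_custom_classes.txt').read().splitlines())
phi = 0

model = yolo_body([input_shape[0], input_shape[1], 3], anchors_mask, num_classes, phi=phi)
open('yolov4_tiny_custom.json', 'w').write(model.to_json())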
This seems to happen when the model_path and classes_path combination is incorrect.
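That matches the numbers in the error message: each yolov4-tiny head has (num_classes + 5) * 3 output channels per scale, so a JSON built for COCO cannot load VOC or custom weights. Purely as an illustration:

# Why the shapes in Question 2 disagree: 255 channels vs. 75 channels.
def head_channels(num_classes, anchors_per_scale=3):
    return (num_classes + 5) * anchors_per_scale

print(head_channels(80))  # 255 -> JSON generated with num_classes=80 (COCO)
print(head_channels(20))  # 75  -> yolov4_tiny_weights_voc.h5 (20 VOC classes)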
I'll check later. By the way, judging from Question 2, it looks like you didn't create the correct saved_model. How did you prepare this saved_model? Also, what is your tf2onnx version?
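For the Question 3 error, one quick check that might help is listing the tag-sets actually present in the SavedModel, as the error message itself suggests. A minimal sketch, assuming "folder" is the same directory passed to tf2onnx in code3:

# A correctly exported TF 2.x Keras SavedModel should list the "serve" tag-set.
import subprocess
subprocess.run(["saved_model_cli", "show", "--dir", "folder"], check=True)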
Thank you very much for your kind response, s4k10503. I'm glad I was able to solve the issue thanks to your detailed information. I had some other problems related to TensorFlow versions, but I have now resolved them.
Thank you for taking the time to give me such detailed information. Have a great day!
For those who may be reading this: I hope you set up a good virtual environment and switch between TensorFlow 1.15.2 and 2.3.0 properly. In particular, TensorFlow 2.3.0 is required to create the saved_model.pb.
You could also convert the model to ONNX using the PyTorch library; that should work as well.
Following your model-to-ONNX steps, we ran into some problems: the ONNX file is not recognized in Unity. How do we generate the NN model type? One important thing: in https://colab.research.google.com/drive/1YjSQ0IJvKimrc5-I4QXaWJ43-nbPqKOS?usp=sharing#scrollTo=9MpHbdcYxnGH, checker.check_model(model) always fails with an error saying the "split" attribute is not recognized. Do you have any suggestions? Thanks again!
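As a small diagnostic (not a fix), it may help to list the Split nodes in the ONNX file and whether they already carry a "split" attribute; this at least shows whether the Colab step actually modified the file before checker.check_model runs. "model.onnx" is a placeholder:

# Diagnostic only: report Split nodes and whether the "split" attribute is present.
import onnx

model = onnx.load("model.onnx")
for node in model.graph.node:
    if node.op_type == "Split":
        has_split = any(a.name == "split" for a in node.attribute)
        print(node.name or "<unnamed>", "| split attribute:", has_split,
              "| outputs:", len(node.output))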