Hey @HeChengHui, any progress with loading the ONNX model? @KaiyangZhou, would you give some advice about loading an ONNX model to speed up the process?
Thanks!
please have a look at https://github.com/KaiyangZhou/deep-person-reid/issues?q=onnx and see if you can find anything useful
I'll try to find some time to write some tutorial code, since this question has been asked many times
@KaiyangZhou Thanks, looking forward to the tutorial!
too busy, sorry, don't count on me (my bad)
does this help https://pytorch.org/docs/stable/onnx.html#example-alexnet-from-pytorch-to-onnx?
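Something along those lines should work for a torchreid model too. A rough sketch only: `osnet_x1_0`, the 256x128 input size, and the `input`/`features` tensor names are illustrative choices, not fixed by torchreid.

```python
import torch
import torchreid

# Build a pretrained model, then export it following the linked PyTorch ONNX example.
model = torchreid.models.build_model(name="osnet_x1_0", num_classes=1000, pretrained=True)
model.eval()

# 256x128 is the usual ReID input size; the batch dimension is left dynamic.
dummy = torch.randn(1, 3, 256, 128)
torch.onnx.export(
    model,
    dummy,
    "osnet_x1_0.onnx",
    input_names=["input"],
    output_names=["features"],
    opset_version=11,
    dynamic_axes={"input": {0: "batch"}, "features": {0: "batch"}},
)
```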
@KaiyangZhou Thanks for updating me, no worries :) I think I'm able to convert the weight file to ONNX without issue. I'm confused about where I need to load the converted weight file. Would you please mark the place where I need to do it?
Thanks a lot!
> I'm confused about where I need to load the converted weight file. Would you please mark the place where I need to do it?
First build the model with `model = torchreid.models.build_model()`. Then load the pretrained weights with `torchreid.utils.load_pretrained_weights(model, weight_path)`. Please refer to the documentation for more: https://kaiyangzhou.github.io/deep-person-reid/user_guide#fine-tune-a-model-pre-trained-on-reid-datasets (I also just checked the docs, as my memory is a bit rusty).
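Putting the two calls together, a minimal sketch (the model name, class count, and checkpoint path are placeholders for whatever you actually trained or downloaded):

```python
import torchreid
from torchreid.utils import load_pretrained_weights

# Build the architecture first, then load the trained checkpoint into it.
model = torchreid.models.build_model(
    name="osnet_x1_0",
    num_classes=751,   # mismatched classifier layers are skipped by the loader,
                       # so the exact value does not matter for feature extraction
    pretrained=False,
)
load_pretrained_weights(model, "path/to/model.pth.tar")
model.eval()
```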
Good news @KaiyangZhou, @Rm1n90, @HeChengHui!
I have a working multibackend (ONNX, OpenVINO and TFLite) class for the ReID models that I managed to export (`mobilenet`, `resnet50` and `osnet` models) with my export script. My export pipeline is as follows: PT --> ONNX --> OpenVINO --> TFLite. `osnet` models fail in the OpenVINO export; `mobilenet` and `resnet50` models go all the way through. Feel free to experiment with it; it is in working condition, as shown by my CI pipeline. Don't forget to drop a PR if you have any improvements! :smile:
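The gist of a multibackend class is just dispatching on the weight-file suffix. A stripped-down sketch of that idea (not the actual class; only the ONNX and TFLite branches are shown, and all names are illustrative):

```python
from pathlib import Path
import numpy as np

class ReIDBackend:
    """Dispatch inference to ONNX Runtime or TFLite based on the file suffix."""

    def __init__(self, weights: str):
        self.suffix = Path(weights).suffix
        if self.suffix == ".onnx":
            import onnxruntime as ort
            self.session = ort.InferenceSession(
                weights, providers=["CPUExecutionProvider"]
            )
            self.input_name = self.session.get_inputs()[0].name
        elif self.suffix == ".tflite":
            import tensorflow as tf
            self.interpreter = tf.lite.Interpreter(model_path=weights)
            self.interpreter.allocate_tensors()
        else:
            raise ValueError(f"unsupported weights file: {weights}")

    def __call__(self, batch: np.ndarray) -> np.ndarray:
        if self.suffix == ".onnx":
            # ONNX exports expect NCHW float32 input
            return self.session.run(None, {self.input_name: batch})[0]
        # TFLite exports are typically NHWC, so transpose accordingly upstream
        inp = self.interpreter.get_input_details()[0]
        out = self.interpreter.get_output_details()[0]
        self.interpreter.set_tensor(inp["index"], batch)
        self.interpreter.invoke()
        return self.interpreter.get_tensor(out["index"])
```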
@mikel-brostrom That's great! I will work on TensorRT export and will submit a PR! Just a question: did you time the model in ONNX, OpenVINO and TFLite to see how long the tracking takes compared to the PyTorch version?
> Did you time the model in ONNX, OpenVINO and TFLite to see how long the tracking takes compared to the PyTorch version?
Inference time for the different frameworks is highly dependent on which hardware you run it on. The chosen export frameworks should be deployment-platform specific.
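If you want numbers for your own machine, a crude loop like this is enough for a first comparison (a sketch; it assumes the `osnet_x1_0.onnx` export from earlier in the thread and measures CPU latency only):

```python
import time
import numpy as np
import onnxruntime as ort
import torch
import torchreid

x = np.random.rand(1, 3, 256, 128).astype(np.float32)
runs = 100

# PyTorch baseline
model = torchreid.models.build_model(name="osnet_x1_0", num_classes=1000, pretrained=True)
model.eval()
with torch.no_grad():
    model(torch.from_numpy(x))  # warmup
    t0 = time.perf_counter()
    for _ in range(runs):
        model(torch.from_numpy(x))
    print(f"pytorch: {(time.perf_counter() - t0) / runs * 1000:.1f} ms")

# ONNX Runtime
sess = ort.InferenceSession("osnet_x1_0.onnx", providers=["CPUExecutionProvider"])
sess.run(None, {"input": x})  # warmup
t0 = time.perf_counter()
for _ in range(runs):
    sess.run(None, {"input": x})
print(f"onnx: {(time.perf_counter() - t0) / runs * 1000:.1f} ms")
```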
I managed to export some models from the model zoo into ONNX format. However, I have difficulties getting it to work with torchreid. In `torchtools.py`, instead of `torch.load()`, I added `checkpoint = onnx.load(fpath)`. This resulted in the following error:

Any advice?
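For what it's worth, `onnx.load()` returns an ONNX `ModelProto` (a serialized graph), not a PyTorch state dict, so torchreid's checkpoint loader cannot consume it. The exported model needs to be run by an ONNX-capable runtime instead of being loaded back through `torchtools.py`. A minimal sketch with ONNX Runtime (the file name and input layout are carried over from the export example above):

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("osnet_x1_0.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# one preprocessed image batch (NCHW, float32, normalized as in training)
img = np.random.rand(1, 3, 256, 128).astype(np.float32)
features = session.run(None, {input_name: img})[0]
print(features.shape)  # (1, 512) for osnet_x1_0
```

The returned features can then be L2-normalized and compared with cosine distance, the same way the PyTorch features are used.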