Hi @mattpopovich,

> But I'm not sure how to use the output model?
It's a little tricky here. We load the translated checkpoint weights at the lines below, and the checkpoint behind model_urls[weights_name] was translated from yolov5 with this CLI tool. It seems we have to manually edit the code here; see more details about the limitations of this interface at https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/#limitations-of-the-current-api.
https://github.com/zhiqwang/yolov5-rt-stack/blob/3485ea144fec7b2857d0d2e0d4ff329959e77027/yolort/models/yolo.py#L263-L267
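For illustration, the referenced lines follow roughly this pattern, a minimal sketch in which the URL below is just a placeholder standing in for model_urls[weights_name]:

```python
from torch.hub import load_state_dict_from_url

from yolort.models import yolov5s

# Sketch of what the referenced lines do: build the model, fetch the
# checkpoint registered for the requested weights name, and load it.
model = yolov5s()
state_dict = load_state_dict_from_url(
    "https://example.com/translated-yolov5s-checkpoint.pt",  # placeholder for model_urls[weights_name]
    progress=True,
)
model.load_state_dict(state_dict)
```

Because the URL is baked into model_urls, pointing a stock yolort build at a checkpoint you translated yourself currently means editing that mapping by hand.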
> If you can elaborate on how, I'd be happy to make a PR with added documentation.
Sure! The torchvision team introduced a new API for multi-weight support at https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/#multi-weight-support. I think this interface is just what we need here, and we could follow their strategy.
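For context, the torchvision pattern looks like the following; the torchvision calls are real (torchvision >= 0.13), while the yolort counterpart in the trailing comment is purely hypothetical:

```python
from torchvision.models import resnet50, ResNet50_Weights

# Each weight set is an enum member that carries its own URL,
# metadata, and preprocessing transforms.
weights = ResNet50_Weights.IMAGENET1K_V2
model = resnet50(weights=weights)
preprocess = weights.transforms()

# A yolort counterpart could mirror the same shape, e.g. a hypothetical
# YOLOv5s_Weights enum whose members point at translated checkpoints:
#   model = yolov5s(weights=YOLOv5s_Weights.COCO_V6_0)
```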
I'm a little hesitant about whether we should adopt this brand-new interface outright or provide a backward-compatible interface the way torchvision does.
Because we do not yet support training, my current judgment is that most people use the classmethods YOLO.load_from_yolov5() or YOLOv5.load_from_yolov5() to load custom checkpoints, and we will keep these classmethods. So I'm inclined to go the route of completely adopting the new interface from torchvision.
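For reference, loading a custom checkpoint through that classmethod looks roughly like this, a sketch in which the path is a placeholder and the exact keyword arguments may differ between yolort versions:

```python
from yolort.models import YOLOv5

# Load an Ultralytics-format checkpoint directly (placeholder path);
# check the signature of load_from_yolov5 in your installed version.
model = YOLOv5.load_from_yolov5("path/to/yolov5s.pt", score_thresh=0.25)
model.eval()
```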
Let me know if you have more concerns about this.
🐛 Describe the bug
Before I closed #273, I wanted to make a PR to add some documentation on how to go from Ultralytics weights --> yolort weights --> LibTorch C++ inference (i.e., how to run deployment/libtorch/main.cpp with Ultralytics weights). In the documentation I was going to reference the CLI tool you mentioned for the weights conversion, but I'm not sure how to use that script properly. It seems to run just fine:
But I'm not sure how to use the output model? If you can elaborate on how, I'd be happy to make a PR with added documentation.
I tried to use it directly in deployment/libtorch/main.cpp, but that gave the same error as #142:

I tried to load it in Python, but with no luck:
You also mentioned I might be able to convert the model weights if "I load the translated checkpoints in yolort.models.yolov5s()". I'm not seeing any argument that would allow me to load a checkpoint using yolort.models.yolov5s():

Let me know what I'm doing wrong - thank you!
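To make the question concrete, this is roughly the kind of call I was hoping to write (illustrative only; the checkpoint path is a placeholder for whatever the CLI tool produced, and I don't know whether load_state_dict is the intended route):

```python
import torch

from yolort import models

# Illustrative attempt: build the model, then load the converted
# checkpoint produced by the CLI tool (placeholder path).
model = models.yolov5s()
ckpt = torch.load("yolov5s_converted.pt", map_location="cpu")
model.load_state_dict(ckpt)  # unsure whether this is the intended way
model.eval()
```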
Versions
```console
# python3 -m torch.utils.collect_env
Collecting environment information...
PyTorch version: 1.9.0a0+gitd69c22d
Is debug build: False
CUDA used to build PyTorch: 11.2
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: Could not collect
CMake version: version 3.21.1
Libc version: glibc-2.31

Python version: 3.8 (64-bit runtime)
Python platform: Linux-5.4.0-92-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.2.152
GPU models and configuration:
GPU 0: GeForce GTX 1080
GPU 1: GeForce GTX 1080
GPU 2: GeForce GTX 1080

Nvidia driver version: 460.91.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A

Versions of relevant libraries:
[pip3] numpy==1.21.4
[pip3] pytorch-lightning==1.5.8
[pip3] torch==1.9.0a0+gitd69c22d
[pip3] torchmetrics==0.6.2
[pip3] torchvision==0.10.0a0+300a8a4
[conda] Could not collect
```