Closed: errx closed this issue 1 year ago
```diff
diff --git a/yolort/v5/models/yolo.py b/yolort/v5/models/yolo.py
index 38ae7a3..ae75205 100644
--- a/yolort/v5/models/yolo.py
+++ b/yolort/v5/models/yolo.py
@@ -38,7 +38,7 @@ from .experimental import CrossConv, MixConv2d
 if is_module_available("thop"):
     import thop  # for FLOPs computation

-__all__ = ["Model", "Detect"]
+__all__ = ["Model", "Detect", "DetectionModel"]

 LOGGER = logging.getLogger(__name__)
@@ -336,3 +336,5 @@ def parse_model(d, ch):  # model_dict, input_channels(3)
            ch = []
        ch.append(c2)
    return nn.Sequential(*layers), sorted(save)
+
+DetectionModel = Model
```
This hack seems to be working.
Thanks for reporting this bug to us, @errx. Your strategy is correct: we do not support YOLOv5's v6.2 or master branch at this time. It would be great if you could open a Pull Request with this change.
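For context on why the one-line alias in the diff is enough: `torch.load` uses pickle underneath, and pickle resolves a saved object's class by module path plus class name. Newer YOLOv5 checkpoints reference `DetectionModel`, so a module that only defines `Model` fails to unpickle them. The sketch below reproduces that mechanism with a hypothetical `fake_yolo` module standing in for `yolort/v5/models/yolo.py` (the module name and class attributes here are illustrative, not yolort's actual code):

```python
import pickle
import sys
import types

# Hypothetical stand-in module for yolort/v5/models/yolo.py.
mod = types.ModuleType("fake_yolo")
sys.modules["fake_yolo"] = mod

class DetectionModel:  # newer YOLOv5 saves its model under this class name
    def __init__(self):
        self.name = "yolov5n"

DetectionModel.__module__ = "fake_yolo"
DetectionModel.__qualname__ = "DetectionModel"
mod.DetectionModel = DetectionModel

# Pickle an instance, like a checkpoint produced by YOLOv5 master.
ckpt = pickle.dumps(DetectionModel())

# Older yolort only exposes `Model`; simulate that by removing the new name.
mod.Model = DetectionModel
del mod.DetectionModel
try:
    pickle.loads(ckpt)
except AttributeError as exc:
    print("load fails:", exc)  # pickle cannot find fake_yolo.DetectionModel

# The one-line fix from the diff: alias the new name to the existing class.
mod.DetectionModel = mod.Model
obj = pickle.loads(ckpt)
print("load succeeds:", obj.name)
```

The alias keeps backward compatibility without duplicating any code: both names resolve to the same class object, so checkpoints saved under either name unpickle correctly.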
🐛 Describe the bug
I've trained a yolov5n model with the latest code from https://github.com/ultralytics/yolov5.
However, when I try to load it with yolov5-rt, I get the following error:
I guess it's related to https://github.com/ultralytics/yolov5/issues/9151
But I'm not sure what I should do. Thank you.
Versions
PyTorch version: 1.13.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A

OS: Arch Linux (x86_64)
GCC version: (GCC) 12.2.0
Clang version: 14.0.6
CMake version: version 3.24.3
Libc version: glibc-2.36

Python version: 3.10.8 (main, Nov 1 2022, 14:18:21) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-6.0.8-arch1-1-x86_64-with-glibc2.36
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1080
Nvidia driver version: 520.56.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

Versions of relevant libraries:
[pip3] numpy==1.23.4
[pip3] torch==1.13.0
[pip3] torchvision==0.14.0
[conda] Could not collect