WongKinYiu / yolov7

Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors
GNU General Public License v3.0

What's the recommended way of local custom model inference - torch.hub.load or models.experimental.attempt_load? #975


OleksiiYeromenko commented 2 years ago

I plan to use a custom-trained model in a local environment without network access. What's the best way to run inference with the saved model: via
model = torch.hub.load(...) or
model = attempt_load('25ep_best.pt', map_location='cuda:0') as in detect.py?

I've compared both on the same image, and the prediction with detect.py is faster than with torch.hub. Is this expected, and should we therefore use attempt_load for inference?

torch.hub:  
image 1/1: 330x600 1 apple
Speed: 5.6ms pre-process, 16.9ms inference, 1.3ms NMS per image at shape (1, 3, 352, 640)  

detect.py:
1 apple, Done. (9.8ms) Inference, (1.7ms) NMS
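As an aside, single-image timings like the ones above can be noisy; a fairer comparison warms the model up first and synchronizes CUDA before reading the clock. A minimal sketch with a stand-in module (the model, input shape, and iteration counts below are placeholders, not the actual YOLOv7 setup):

```python
import time
import torch
import torch.nn as nn

def benchmark(model, x, warmup=10, iters=50):
    """Average forward-pass latency in milliseconds."""
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):       # warm-up: cuDNN autotuning, caches, lazy init
            model(x)
        if x.is_cuda:
            torch.cuda.synchronize()  # wait for queued GPU kernels before timing
        t0 = time.perf_counter()
        for _ in range(iters):
            model(x)
        if x.is_cuda:
            torch.cuda.synchronize()
    return (time.perf_counter() - t0) / iters * 1e3

# Stand-in for a detector; replace with the model loaded either way.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
x = torch.randn(1, 3, 352, 640)
print(f"{benchmark(model, x):.2f} ms/image")
```

Running both loading paths through the same harness on identical input shapes removes most of the pre/post-processing differences from the comparison.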
AkashDataScience commented 1 year ago

attempt_load fuses convolution and batch normalization layers to accelerate inference (see https://www.cvmart.net/community/detail/2032). So yes, you should use attempt_load for inference.
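The fusion mentioned above folds the (eval-mode) BatchNorm affine transform into the preceding convolution's weights and bias, so one layer does the work of two at inference time. A small self-contained illustration of the idea (this is not the repo's own fuse code, just the same technique):

```python
import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold frozen BatchNorm statistics into a single Conv2d."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      conv.kernel_size, conv.stride,
                      conv.padding, conv.dilation,
                      conv.groups, bias=True)
    # Per-output-channel scale: gamma / sqrt(running_var + eps)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.data = conv.weight * scale.reshape(-1, 1, 1, 1)
    conv_bias = conv.bias if conv.bias is not None else torch.zeros(conv.out_channels)
    # BN in eval mode computes scale*(z - mean) + beta, so fold that into the bias
    fused.bias.data = (conv_bias - bn.running_mean) * scale + bn.bias
    return fused

conv = nn.Conv2d(3, 8, 3, padding=1, bias=False)
bn = nn.BatchNorm2d(8).eval()   # fusion is only valid with frozen running stats
bn.running_mean.uniform_(-1, 1)
bn.running_var.uniform_(0.5, 2)
fused = fuse_conv_bn(conv, bn)

x = torch.randn(1, 3, 16, 16)
with torch.no_grad():
    assert torch.allclose(bn(conv(x)), fused(x), atol=1e-5)
```

This is also why the fused model must stay in eval mode: the fold bakes in the running mean and variance, which would drift during training.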