I plan to use a custom-trained model in a local environment without network access.
What's the best way to run inference on a saved model — via model = torch.hub.load(...) or model = attempt_load('25ep_best.pt', map_location='cuda:0') as in detect.py?
I've made a comparison on the same image and I see that prediction with detect.py is faster than with torch.hub. Is this correct, and should we use attempt_load for inference?
torch.hub:
image 1/1: 330x600 1 apple
Speed: 5.6ms pre-process, 16.9ms inference, 1.3ms NMS per image at shape (1, 3, 352, 640)
detect.py:
1 apple, Done. (9.8ms) Inference, (1.7ms) NMS
attempt_load fuses convolution and batch-normalization layers to accelerate inference (see https://www.cvmart.net/community/detail/2032). So yes, you should use the attempt_load method for inference.
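To illustrate why fusing helps, here is a minimal NumPy sketch of the underlying algebra (not YOLOv5's actual implementation, which works on torch.nn modules): a convolution followed by batch norm can be folded into a single convolution with rescaled weights and a shifted bias, so the BN layer costs nothing at inference time.

```python
import numpy as np

# Conv output per channel: y = W*x + b
# BatchNorm (inference): z = gamma * (y - mean) / sqrt(var + eps) + beta
# Folding BN into the conv gives:
#   scale   = gamma / sqrt(var + eps)
#   W_fused = W * scale
#   b_fused = (b - mean) * scale + beta
# Toy per-channel (1x1 conv) demo to verify the two paths match:

rng = np.random.default_rng(0)
C = 4  # channels
W, b = rng.normal(size=C), rng.normal(size=C)
gamma, beta = rng.normal(size=C), rng.normal(size=C)
mean = rng.normal(size=C)
var, eps = rng.uniform(0.5, 2.0, size=C), 1e-5

x = rng.normal(size=C)

# Unfused: conv, then batch norm (two passes over the data)
y = W * x + b
z = gamma * (y - mean) / np.sqrt(var + eps) + beta

# Fused: one conv with folded parameters (single pass)
scale = gamma / np.sqrt(var + eps)
z_fused = (W * scale) * x + ((b - mean) * scale + beta)

assert np.allclose(z, z_fused)
```

The same identity applies channel-wise to real conv kernels, which is what model.fuse() (called inside attempt_load) does to each Conv+BN pair.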