guotao opened this issue 8 months ago
Can you send the test images and the model so I can test to find the issue and try to fix it?
Of course https://drive.google.com/file/d/1gp6wucf_sNBLBEQu2lI4oxXr9MKWt5gw/view?usp=drive_link this is the shared link
I've also invited you as a collaborator to the team, in case this link doesn't work 😁
Sorry, the file you have requested does not exist.
Make sure that you have the correct URL and the file exists.
That is what the Drive link shows. And GitHub shows:
I created a repo with the model files uploaded, and invited you to join.
Is this YOLOv8 or YOLOv5?
I found out it's YOLOv5. In detect.py, did you use the same TorchScript model as the one in the app?
Yes, exactly. You can run detect.py --weight 'path to b2.torchscript' --source
to see the result, and compare with the one in the example, thanks.
Running !python detect.py --weight '../b2.torchscript' --source ../test.PNG
the model doesn't even run, and outputs:
detect: weights=['../b2.torchscript'], source=../test.PNG, data=data/coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5 🚀 v7.0-230-g53efd07 Python-3.10.12 torch-2.1.0+cu118 CUDA:0 (Tesla T4, 15102MiB)
Loading ../b2.torchscript for TorchScript inference...
Traceback (most recent call last):
File "/content/yolov5/detect.py", line 285, in <module>
main(opt)
File "/content/yolov5/detect.py", line 280, in main
run(**vars(opt))
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/yolov5/detect.py", line 101, in run
model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half)
File "/content/yolov5/models/common.py", line 364, in __init__
model = torch.jit.load(w, _extra_files=extra_files, map_location=device)
File "/usr/local/lib/python3.10/dist-packages/torch/jit/_serialization.py", line 162, in load
cpp_module = torch._C.import_ir_module(cu, str(f), map_location, _extra_files, _restore_shapes) # type: ignore[call-arg]
RuntimeError: xnnpack::convolution not available! Reason: The provided (weight, bias, padding, stride, dilation, groups, transposed, output_min, output_max) parameters are either invalid individually or their combination is not supported by XNNPACK.
Saw this issue https://github.com/ultralytics/ultralytics/issues/2465, which might provide helpful information @guotao
Thanks for replying. I found this may help, would you give it a try?
TORCH_XNNPACK_DISABLE=1 python detect.py --weights runs/train/exp/best.torchscript --source /images/
And also, I have uploaded a model exported without the --optimize argument; you can give it a try if the above method doesn't work, thanks.
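For reference, the difference between the two exports can be reproduced with a toy model. As far as I can tell, export.py's --optimize flag runs torch.utils.mobile_optimizer.optimize_for_mobile, which bakes XNNPACK-prepacked conv ops into the saved file; a desktop CUDA build of torch can then refuse to torch.jit.load it, which matches the RuntimeError above. A plain traced file loads anywhere. (The toy net and file names here are illustrative, not the actual b2 model.)

```python
import torch
import torch.nn as nn

# Minimal sketch (toy net, not the actual b2 model): re-export the model
# as plain TorchScript, without the optimize_for_mobile() pass that
# export.py --optimize applies. The mobile-optimized file embeds
# XNNPACK-prepacked conv ops that a desktop build of torch may refuse
# to load, matching the "xnnpack::convolution not available" error.
net = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()).eval()
example = torch.rand(1, 3, 64, 64)
ts = torch.jit.trace(net, example)
ts.save("plain.torchscript")                  # loadable by detect.py on desktop
reloaded = torch.jit.load("plain.torchscript", map_location="cpu")
print(tuple(reloaded(example).shape))         # (1, 8, 62, 62)
```

If the mobile app still needs the optimized variant, keeping two artifacts (a plain .torchscript for detect.py and a lite-interpreter file for pytorch_lite) avoids this load error entirely.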
Do the exported models work and detect the image the way you want in detect.py?
I think exporting to TorchScript reduces accuracy by a small factor, which is why I suspect that's the problem.
No, as you can see,
these were the results on detect.py and on pytorch_lite; there was a big difference. When I use a normal-size picture, the difference is small. I just wanted to report the problem; if you think it is acceptable, you may close this issue. Thanks for your time.
The point is, I followed the official Java example, and both of my implementations give me the same results, which is why I'm thinking it's a model problem, that's all.
Sorry if I was not able to help; I will check it again when I have time.
Hello, I use YOLOv5 for long-picture detection like this:
When I use Python (detect.py), the result is right:
while on a mobile device, I use _objectModel.getImagePrediction and then _objectModel.renderBoxesOnImage, and the result is wrong. This is the result in the example code, with only the model and label files changed, plus the "nc" config in the PytorchLite.loadObjectDetectionModel function call. For a normal-size image, for example 600×400, the result is fine. Is there any config that can fix this issue? Thanks.
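One likely cause of the long-image difference (an assumption about pytorch_lite's internals, not confirmed): detect.py letterboxes its input, i.e. resizes it while preserving aspect ratio and pads the rest with gray, whereas a naive resize straight to 640×640 squashes a tall image and shifts every box. A minimal sketch of the letterbox step, using a hypothetical pure-NumPy nearest-neighbour resize to stay dependency-free:

```python
import numpy as np

def letterbox(img, new_shape=640, pad_value=114):
    """Resize keeping aspect ratio, then pad to a square (as detect.py does)."""
    h, w = img.shape[:2]
    r = min(new_shape / h, new_shape / w)      # scale so the longer side fits
    nh, nw = round(h * r), round(w * r)
    # nearest-neighbour resize with plain numpy indexing
    rows = (np.arange(nh) / r).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / r).astype(int).clip(0, w - 1)
    resized = img[rows][:, cols]
    out = np.full((new_shape, new_shape, img.shape[2]), pad_value, img.dtype)
    top, left = (new_shape - nh) // 2, (new_shape - nw) // 2
    out[top:top + nh, left:left + nw] = resized
    return out, r, (left, top)

# a tall "long picture": 1200x400 -> scaled to 640x213, padded left and right
tall = np.zeros((1200, 400, 3), np.uint8)
boxed, ratio, (dx, dy) = letterbox(tall)
print(boxed.shape, round(ratio, 3), dx, dy)   # (640, 640, 3) 0.533 213 0
```

If the mobile path resizes without this padding, the predicted boxes would need to be rescaled with the same ratio and offsets (r, dx, dy) to land in the right place; with a roughly square 600×400 image the distortion is small, which would explain why normal-size pictures look fine.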