meituan / YOLOv6

YOLOv6: a single-stage object detection framework dedicated to industrial applications.
GNU General Public License v3.0

Dfl loss 0 when finetune YOLOv6n and YOLOv6s #864

Closed Leo-Thomas closed 1 year ago

Leo-Thomas commented 1 year ago


Question

I am finetuning YOLOv6n, YOLOv6s, YOLOv6m, and YOLOv6l on a custom dataset, following the relevant tutorials. Training YOLOv6m and YOLOv6l works fine, but when training YOLOv6n and YOLOv6s on the same dataset, the dfl loss is always 0.


I tried changing the batch size and the optimizer as suggested in other issues, but the dfl loss is still 0.

Thanks in advance for any help.


mtjhl commented 1 year ago

This is because the N/S models do not use DFL by default. You can enable it by setting use_dfl to True here: https://github.com/meituan/YOLOv6/blob/4364f29bf3244f2e73d0c42a103cd7a9cbb16ca9/configs/yolov6s.py#L32

zl021369 commented 3 months ago

> This is because the N/S models do not use DFL by default. You can enable it by setting use_dfl to True here.

https://github.com/meituan/YOLOv6/blob/4364f29bf3244f2e73d0c42a103cd7a9cbb16ca9/configs/yolov6s.py#L32

After setting it to True, I get the error RuntimeError: CUDA error: device-side assert triggered.

1348545336 commented 1 month ago

> This is because the N/S models do not use DFL by default. You can enable it by setting use_dfl to True here. https://github.com/meituan/YOLOv6/blob/4364f29bf3244f2e73d0c42a103cd7a9cbb16ca9/configs/yolov6s.py#L32

> After setting it to True, I get the error RuntimeError: CUDA error: device-side assert triggered.

Changing the code on line 33 to reg_max=16 fixed it.
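Putting the two replies together, the fix is to change both values in the head section of the config, not just use_dfl. The sketch below is a hypothetical excerpt of that head dict, showing only the two relevant keys (the real config contains several other entries); the comments explain why reg_max must be raised alongside use_dfl.

```python
# Hypothetical excerpt of the head config in configs/yolov6s.py.
# Only the two keys discussed in this issue are shown; the actual
# file contains additional head parameters.
head = dict(
    use_dfl=True,  # enable Distribution Focal Loss for the N/S models
    reg_max=16,    # DFL predicts each box offset as a distribution over
                   # reg_max + 1 discrete bins; with the N/S default of
                   # reg_max=0 there are no bins to distribute over, and
                   # enabling use_dfl alone can index out of range, which
                   # surfaces as the CUDA device-side assert reported above
)
```

With use_dfl=True and reg_max=16, the dfl loss should start reporting non-zero values during finetuning instead of staying at 0.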