Closed: yossibiton closed this issue 3 years ago.
you'd better use vedadet.
Why should I prefer it? It looks like you copied the code from mmdetection and implemented the IoU-aware head. Can you explain why you didn't use a git fork?
We re-designed MMDetection based on our own tastes and needs, so it is not the same as MMDetection. You may spend some time comparing vedadet and MMDetection if you are interested. If you like MMDetection, you can also use MMDetection; that's OK.
I'm trying to train a model comparable to TinaFace on the mmdetection repo. The only difference is that I don't use the IoU-aware head, but a normal RetinaHead.
On the first batches I can see that in mmdetection the bbox loss is about twice as high as in vedadet. Can you explain that? Do you use a different weight initialization than mmdetection? I was really surprised by this, because I can see your code is based on mmdetection (it is effectively a fork, even though you didn't use a GitHub fork).
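(Note: a gap in the early bbox loss often comes down to config-level differences rather than the training loop itself: the regression loss type and its loss_weight, the bbox coder's target_means/target_stds, the anchor settings, and the head's weight initialization can each change the initial loss scale. Below is a minimal sketch for dumping those fields from both configs side by side; it assumes both configs follow the mmdetection-style `model.bbox_head` layout, and the config paths are placeholders rather than files shipped by either repository.)

```python
# A minimal sketch (not from either repo) for comparing the config fields that
# usually set the scale of the early bbox loss: the regression loss and its
# loss_weight, the bbox coder's target_means/target_stds, the anchor settings,
# and the head's weight initialization.
from mmcv import Config  # assumes mmcv's Config loader can read both files


def summarize_bbox_head(cfg_path):
    cfg = Config.fromfile(cfg_path)
    head = cfg.model.bbox_head  # assumes an mmdetection-style model.bbox_head layout
    print(f"--- {cfg_path} ---")
    print("head type:       ", head.get("type"))
    print("loss_bbox:       ", head.get("loss_bbox"))         # e.g. SmoothL1Loss vs an IoU-based loss
    print("bbox_coder:      ", head.get("bbox_coder"))        # target_means / target_stds rescale targets
    print("anchor_generator:", head.get("anchor_generator"))  # scales/ratios change target statistics
    print("init_cfg:        ", head.get("init_cfg"))          # weight initialization of the head


# Placeholder paths; replace them with the actual config files you train from.
summarize_bbox_head("configs/retinanet_r50_fpn_widerface.py")
summarize_bbox_head("configs/trainval/tinaface/tinaface.py")
```

If the two dumps differ only in the regression loss or its loss_weight, that alone could already explain a roughly 2x gap on the first batches, independent of the weight initialization.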