jairolucas opened this issue 3 years ago
Don't train from scratch, and try a higher learning rate like 1e-3.
Thanks for the feedback, Zilo. Unfortunately, even using pre-trained weights and training for 300 epochs, I got a mAP of only 47%. Could my images having a different width and height (503 x 672) be the cause of the poor performance?
what's your training command?
python train.py -c 2 -p papaya --batch_size 4 --lr 1e-3 --load_weights weights/efficientdet-d2.pth
An important detail: I'm detecting 2 types of objects: the fruit, whose size varies between 30% and 80% of the image, and the disease on the fruit, whose size is between 10% and 70% of the image. Are the default COCO anchors suitable for these objects? (The images are 503 x 672.)
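The anchor question can be sanity-checked numerically. Below is a minimal sketch assuming the standard EfficientDet anchor defaults (pyramid levels P3-P7, anchor_scale 4.0, three octave scales); these values are assumptions based on the common EfficientDet configuration, not read from this repo's config.

```python
# Sketch: compare assumed default EfficientDet anchor sizes against the
# object sizes described above. Defaults assumed here: pyramid levels 3-7
# (strides 8..128), anchor_scale = 4.0, scales {2^0, 2^(1/3), 2^(2/3)}.
anchor_scale = 4.0
strides = [2 ** lvl for lvl in range(3, 8)]          # 8, 16, 32, 64, 128
octave_scales = [2 ** (i / 3) for i in range(3)]

anchor_sizes = sorted(
    stride * anchor_scale * s for stride in strides for s in octave_scales
)
print("anchor base sizes (px):", [round(a) for a in anchor_sizes])

# Object sizes from the question: fruit spans 30-80% of a 503 x 672 image,
# disease spans 10-70%.
img_w, img_h = 503, 672
for name, lo, hi in [("fruit", 0.3, 0.8), ("disease", 0.1, 0.7)]:
    print(f"{name}: ~{lo * img_w:.0f}-{hi * img_w:.0f} px wide, "
          f"~{lo * img_h:.0f}-{hi * img_h:.0f} px tall")
```

If the largest anchors are comfortably bigger than the largest objects (here the top anchor base size is roughly 813 px against objects up to ~538 px), coverage is plausible; it is the aspect ratios, not the scales, that most often need retuning for elongated fruit.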
Try training on d0, especially when you don't have enough images. You can use this tool to generate proper anchors: https://github.com/mnslarcher/kmeans-anchors-ratios
How did you calculate the mAP?
with coco_eval
@zylo117 Can you pls elaborate a bit? I need to calculate mAP as well ..
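The repo's `coco_eval.py` delegates to pycocotools' COCOeval. To illustrate what the metric does, here is a hedged, self-contained sketch of average precision at IoU 0.5 for a single class (simplified: no 101-point interpolation, no area ranges, so it is not numerically identical to COCO mAP):

```python
def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def average_precision(gts, dets, iou_thr=0.5):
    """AP for one class on one image: gts = [box], dets = [(score, box)].
    Simplified sketch of what pycocotools' COCOeval computes."""
    dets = sorted(dets, key=lambda d: -d[0])  # highest confidence first
    matched = set()
    tps = []
    for score, box in dets:
        best, best_iou = None, iou_thr
        for i, gt in enumerate(gts):
            if i in matched:
                continue
            v = iou(box, gt)
            if v >= best_iou:
                best, best_iou = i, v
        if best is not None:
            matched.add(best)   # each ground truth matches at most once
            tps.append(1)
        else:
            tps.append(0)       # unmatched detection = false positive
    # Integrate the precision-recall curve.
    ap, tp, fp, prev_recall = 0.0, 0, 0, 0.0
    for t in tps:
        tp += t
        fp += 1 - t
        recall = tp / len(gts)
        precision = tp / (tp + fp)
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap

gts = [(0, 0, 100, 100), (200, 200, 300, 300)]
dets = [(0.9, (5, 5, 95, 95)),
        (0.8, (210, 210, 290, 290)),
        (0.3, (400, 400, 450, 450))]
print(average_precision(gts, dets))  # both GTs matched before the FP -> 1.0
```

COCO mAP then averages AP over classes and over IoU thresholds 0.50:0.95; in this repo, running `coco_eval.py` with your project and weights prints that summary directly.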
I'm using EfficientDet-D1 to train on a custom dataset of papaya fruit disease images (from scratch). There are approximately 15,000 images of 503 x 672 (w x h), divided into 7 classes.
I used the parameters given in the tutorial.
After training for 500 epochs I have a mAP of around 15% (with -1% on small objects). On the same dataset, YOLOv4 gives a mAP of 56%. Do you have any hints as to what could be wrong?
To train: python3 train.py -c 1 -p papaya --batch_size 4 --lr 1e-5
To test: python3 coco_eval.py -p papaya -c 1 -w logs/papaya/papaya-d1.pth