eyalzubia opened 6 years ago
For training on small objects, the advice in the repository is:

- set `layers = -1, 11` instead of https://github.com/AlexeyAB/darknet/blob/6390a5a2ab61a0bdf6f1a9a6b4a739c16b36e0d7/cfg/yolov3.cfg#L720
- set `stride=4` instead of https://github.com/AlexeyAB/darknet/blob/6390a5a2ab61a0bdf6f1a9a6b4a739c16b36e0d7/cfg/yolov3.cfg#L717
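Applied to `yolov3.cfg`, those two suggestions amount to editing the last `[upsample]`/`[route]` pair before the final `[yolo]` head. A sketch of the modified section (the "was" values are my assumption about the stock cfg at that commit; comments are on their own lines since darknet's cfg parser skips lines starting with `#`):

```
[upsample]
# was stride=2 (cfg line 717 in the stock file)
stride=4

[route]
# was layers = -1, 36 (cfg line 720 in the stock file)
layers = -1, 11
```

Routing from an earlier layer (11 instead of 36) and upsampling more aggressively gives the final head a higher-resolution feature map, which is what helps with small objects.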
I'm training on a dataset with images of all sizes, many of them small. I estimated the anchors using the provided tool. During training, I see that smaller objects are not detected (or are detected with much lower confidence). Also, in the training log, the recall for small objects (in layer 106) is much lower than in the previous two detection layers.

What can be the cause of this, and how can I solve it? I thought of: increasing the weight of the loss for the final layer, or feeding in larger images by changing the data-augmentation parameters for training.
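One quick sanity check before touching the cfg (a sketch I'd suggest, not from the original post): count how many ground-truth boxes end up smaller than roughly 16×16 px once images are resized to the network input (416×416 by default), since those are the boxes the layer-106 head tends to miss. This assumes darknet's default stretch-resize to the `[net]` resolution, ignoring any letterboxing:

```python
NET_SIZE = 416  # network input resolution from the [net] section of yolov3.cfg

def box_size_at_net_input(norm_w, norm_h, net_size=NET_SIZE):
    """YOLO labels store width/height normalized to the image, so after a
    stretch-resize to the network input the pixel size is norm * net_size."""
    return norm_w * net_size, norm_h * net_size

def count_small_boxes(labels, thresh=16, net_size=NET_SIZE):
    """labels: iterable of (class_id, cx, cy, w, h) tuples in YOLO format.
    Returns how many boxes are narrower or shorter than thresh pixels."""
    small = 0
    for _, _, _, w, h in labels:
        pw, ph = box_size_at_net_input(w, h, net_size)
        if pw < thresh or ph < thresh:
            small += 1
    return small

# Example: two tiny boxes and one large box
labels = [(0, 0.5, 0.5, 0.02, 0.03),   # ~8x12 px at 416 -> small
          (0, 0.3, 0.3, 0.01, 0.01),   # ~4x4 px  at 416 -> small
          (1, 0.5, 0.5, 0.50, 0.50)]   # ~208x208 px      -> large
print(count_small_boxes(labels))  # prints 2
```

If a large fraction of your boxes fall under that threshold, re-estimated anchors alone won't help much; the cfg change above or a larger `width`/`height` in `[net]` is more likely to move the recall of the small-object head.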
Could there be a more fundamental cause for this behaviour?
Thanks.