Open berserker opened 5 years ago
Now I have the following mask values for the above anchors:
first yolo layer: mask = 3,4,5,6,7,8 (because I assume 1,189 is larger than 60x60... right?)
second yolo layer: mask = 2 (because I assume 1,78 is larger than 30x30... right?)
third yolo layer: mask = 0,1
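The rule being applied here can be sketched in Python: anchors are sorted by area, and the larger ones are assigned to the earlier (coarser) [yolo] layer. The anchor pairs and area thresholds below are illustrative, not the actual values from this thread:

```python
# Illustrative anchors (w, h), sorted by area as darknet's k-means produces them.
anchors = [(1, 7), (2, 15), (1, 78), (4, 45), (1, 189),
           (9, 30), (14, 60), (30, 120), (90, 200)]

def masks_by_area(anchors, thresholds):
    """Split anchor indices into three mask groups by anchor area.

    thresholds[0]: minimum area for the first (coarsest) yolo layer;
    thresholds[1]: minimum area for the second layer; the rest go to
    the third (finest) layer.
    """
    first, second, third = [], [], []
    for i, (w, h) in enumerate(anchors):
        area = w * h
        if area > thresholds[0]:
            first.append(i)
        elif area > thresholds[1]:
            second.append(i)
        else:
            third.append(i)
    return first, second, third

# With these (made-up) thresholds the nine anchors split 3/3/3,
# which is what the mask= lines in the three [yolo] sections encode.
print(masks_by_area(anchors, (500, 100)))
```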
Yes, try this.
Change these lines: https://github.com/AlexeyAB/darknet/blob/master/cfg/yolov3-tiny_3l.cfg#L198-L202
to these:
[upsample]
stride=4
[route]
layers = -1, 4
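The constraint behind these two values: [upsample] stride=N multiplies the spatial size of the previous feature map by N, and [route] layers = -1, 4 concatenates that upsampled map with the output of layer 4, so the two maps must have the same width and height. A minimal sketch of the arithmetic (the 26x26 and 104x104 sizes are assumptions for a 416x416 input, check your own network printout):

```python
def upsample(hw, stride):
    """[upsample] stride=N multiplies both spatial dimensions by N."""
    return (hw[0] * stride, hw[1] * stride)

# Assumed sizes, not taken from the .cfg: the map entering [upsample]
# is 26x26 and layer 4's output is 104x104.
before_upsample = (26, 26)
layer4_output = (104, 104)

# With stride=4 the upsampled map matches layer 4, so
# [route] layers = -1, 4 can concatenate them along channels.
assert upsample(before_upsample, 4) == layer4_output
```

If the spatial sizes don't match, darknet fails at the [route] layer, which is the first thing to check after editing the stride.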
Thanks for the support @AlexeyAB! Can you please elaborate on how to choose the [upsample] and [route] values to improve small-object detection? I'd like to test other configurations too, but I don't know how to update those values accordingly.
@AlexeyAB Do you suggest changing the same lines
[upsample]
stride=4
[route]
layers = -1, 4
for yolov3_tiny_pan_lstm.cfg to improve small-object detection?
@alexanderfrey Yes
When I do this for the last upsampling layer I receive the following error:
51 Layer before convolutional layer must output image.: File exists
darknet: ./src/utils.c:293: error: Assertion `0' failed.
@alexanderfrey You are doing something wrong.
In any case, it doesn't make sense for a PAN network, since the PAN block already does this.
I got a very low mAP with the calculated anchors :( (with the "default" configuration I reach ~60%).
Default anchors with this change give me +10%! Here is the current chart (still training...):
The project's target is to reach at least 90% confidence and, once this new training is complete, I need to figure out how to improve the mAP further. Do you think that one of these ideas could improve mAP (considering that I can "render" the input images with Blender)?
Thanks again for the support!!
So use default anchors. Just maybe decrease the width of the anchors 2x-4x, without changing the masks.
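The suggestion can be sketched like this: shrink only the anchor widths by some factor, leave the heights and the mask= lines untouched. A minimal Python sketch (the flat anchor list in the example is illustrative, not taken from any particular .cfg):

```python
def shrink_anchor_widths(anchors, factor):
    """Shrink only the widths in a flat darknet-style anchor list.

    anchors: [w1, h1, w2, h2, ...] as written on the anchors= line.
    factor:  integer shrink factor (e.g. 2 to 4, per the suggestion).
    Heights are left as-is; widths are floored at 1.
    """
    out = list(anchors)
    for i in range(0, len(out), 2):  # widths sit at even indices
        out[i] = max(1, out[i] // factor)
    return out

# Illustrative anchors, halving the widths only.
print(shrink_anchor_widths([10, 14, 23, 27, 37, 58], 2))
```

Since the number of anchors doesn't change, the mask= lines in each [yolo] section stay exactly as they were.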
Thanks, I'll try that 👍 Do you have any hints on my last 4 questions?
I'm following the advice described in "How to improve object detection", but I'm using "yolov3-tiny_3l" and I need to detect small objects. I have the following questions:
Thanks for your help!