Darknet unexpected behaviour: no translation invariance when detecting.
What I did:
Created a centered red square on a white image, plus a completely white image
Labeled the red square
Trained a network (yolov3-tiny, modified for one class)
Ran detection on red squares on white backgrounds at different positions
Some examples:
Square in (222, 305): prob = 99%
Square in (365, 124): prob = 19%
This behaviour was not entirely random. Here are the detection probabilities as a function of position (probabilities below 25% were set to 0, per the default threshold):
As you can see, the network does pick up information from the object (the red mask is obvious), yet the results vary strongly with position, and in a visible pattern. Further training reduces the issue, but does not eliminate it entirely.