Hello,
My issue is that I want to detect tiny objects in images (e.g., eyes), but whenever I try, the network always learns to predict background. Background indeed makes up about 99% of the image, so the prediction ends up looking pretty accurate.
I tried the same trick as yours at stage 1, using cropped images and extracting all the eyes from each image. Here the background is only around 40% of the image, but during training the network again learns to predict the whole image as background. I also have 24 different classes, all of them small objects in the image.
Do you think your architecture is suitable for this kind of problem?
Is there any way to penalize the misclassification of the categories other than background?
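To illustrate what I mean, something like a class-weighted cross-entropy loss (a minimal PyTorch sketch; the 25-class setup is my 24 object classes plus background, and the weight values are only placeholders, not tuned):

```python
import torch
import torch.nn as nn

# 25 classes: index 0 = background, indices 1..24 = the small object classes.
num_classes = 25

# Down-weight background so the object classes dominate the loss
# (the 0.05 value is a placeholder, not a tuned number).
weights = torch.ones(num_classes)
weights[0] = 0.05

criterion = nn.CrossEntropyLoss(weight=weights)

# logits: (batch, num_classes, H, W); target: (batch, H, W) of class indices.
logits = torch.randn(2, num_classes, 8, 8)
target = torch.randint(0, num_classes, (2, 8, 8))
loss = criterion(logits, target)
```

Would something along these lines fit into your training setup?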