The protocol states that the first 6 annotated images should include both foreground and background, and no more than 10 times as much background as foreground.
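For reference, a minimal sketch of what checking this protocol could look like, assuming annotations are numpy arrays where foreground and background carry distinct label values (the `FG`/`BG` values, and the reading that each image must contain both classes while the ratio applies in aggregate, are assumptions):

```python
import numpy as np

FG, BG = 1, 2  # hypothetical label values; adjust to the real annotation format

def protocol_ok(annotations, max_bg_ratio=10):
    """Check the first-6-images protocol: each image contains both classes,
    and total background does not exceed max_bg_ratio x total foreground."""
    has_both = all((a == FG).any() and (a == BG).any() for a in annotations)
    fg_px = sum(int((a == FG).sum()) for a in annotations)
    bg_px = sum(int((a == BG).sum()) for a in annotations)
    return has_both and bg_px <= max_bg_ratio * max(fg_px, 1)
```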
The problem is that if too much background is labelled, the model will tend to predict only background. I believe this is still happening for some datasets with more extreme class imbalance, even when the first 6 images are labelled in accordance with the protocol.
Investigate the situation where the model starts predicting only background. Is it caused by the number of images with only background annotated, or by the ratio of foreground to background?
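To separate these two candidate causes, it may help to log both quantities per dataset and see which one correlates with the collapse. A rough sketch, reusing the hypothetical `FG`/`BG` label values from above:

```python
def imbalance_stats(annotations):
    """Return the two candidate predictors of background-only collapse:
    the number of background-only images and the overall bg:fg pixel ratio."""
    bg_only = sum(1 for a in annotations
                  if (a == BG).any() and not (a == FG).any())
    fg_px = sum(int((a == FG).sum()) for a in annotations)
    bg_px = sum(int((a == BG).sum()) for a in annotations)
    return bg_only, bg_px / max(fg_px, 1)
```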
Is there a better, more automatic solution, e.g. instance selection to ensure foreground is always included in the batch?
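One way this could work, sketched below as an assumption rather than a concrete design: build batches by shuffling the annotated images as usual, then swap one slot for a randomly chosen foreground image whenever a batch comes out background-only. The function name and the `has_foreground` flags are hypothetical.

```python
import random

def foreground_aware_batches(has_foreground, batch_size, rng=random):
    """Yield batches of dataset indices with at least one foreground item each.

    has_foreground: one bool per annotated image, True if it has any
    foreground labels. Batches are drawn from a plain shuffle first, so the
    sampling is only adjusted when a batch would be background-only.
    """
    fg = [i for i, f in enumerate(has_foreground) if f]
    assert fg, "need at least one image with annotated foreground"
    order = list(range(len(has_foreground)))
    rng.shuffle(order)
    for start in range(0, len(order), batch_size):
        batch = order[start:start + batch_size]
        if not any(has_foreground[i] for i in batch):
            batch[rng.randrange(len(batch))] = rng.choice(fg)
        yield batch
```

This would remove the dependence on users following the ratio rule by hand, though it changes the effective sampling distribution, so its effect on training would need to be checked.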