Closed lxr-1204 closed 7 months ago
It is caused by https://github.com/MendelXu/SAN/blob/81a9a2bd79d433292d46cfa0597caea5005e0116/san/model/san.py#L270
After that slicing, only the foreground classes are preserved. So if you have only one class, the per-pixel argmax will always be class id 0 (which is the foreground class in your situation) — every pixel gets labeled foreground.
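A minimal sketch of the failure mode, using NumPy instead of the actual model code (the shapes and values are illustrative, not taken from SAN):

```python
import numpy as np

num_classes = 1
# Assumed layout: C foreground logits plus one trailing background logit,
# shape [C+1, H, W].
logits = np.random.randn(num_classes + 1, 4, 4)

# Slicing off the background channel (as at san.py#L270) leaves only
# the foreground logits -- a single channel when num_classes == 1.
fg_logits = logits[:-1]

# argmax over a one-channel axis is always 0, so every pixel is
# assigned the (only) foreground class regardless of its logit value.
pred = fg_logits.argmax(axis=0)
print((pred == 0).all())  # always True
```

This is why the prediction comes out pure white: with a single class there is no competing channel for argmax to pick.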
Several possible solutions could be:

- focal loss, as used in object detection
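For reference, a minimal sketch of the binary focal loss idea mentioned above (the function name, NumPy implementation, and default `alpha`/`gamma` values are illustrative, not from the SAN codebase):

```python
import numpy as np

def binary_focal_loss(prob, target, alpha=0.25, gamma=2.0):
    """Per-pixel binary focal loss (the RetinaNet formulation).

    prob:   predicted foreground probability, any array shape
    target: 0/1 ground-truth mask of the same shape
    """
    # p_t is the probability assigned to the correct class.
    p_t = np.where(target == 1, prob, 1.0 - prob)
    alpha_t = np.where(target == 1, alpha, 1.0 - alpha)
    # (1 - p_t)**gamma down-weights easy, well-classified pixels so the
    # loss is dominated by hard ones instead of the majority class.
    return -(alpha_t * (1.0 - p_t) ** gamma
             * np.log(np.clip(p_t, 1e-8, 1.0)))

# A confidently correct pixel contributes far less than a hard one:
easy = binary_focal_loss(np.array(0.9), np.array(1))
hard = binary_focal_loss(np.array(0.1), np.array(1))
```

Whether this fixes the single-class case here still depends on keeping (or adding back) a background score to compete against the foreground channel.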
Thank you for your work. When I trained on a private dataset (only one category), I first registered the dataset by imitating the VOC format, and then trained with the command below. The training loss seemed normal, but at prediction time all the results were pure white (every pixel is foreground):
```shell
python train_net.py --config-file ./configs/san_clip_vit_res4_pranet.yaml --num-gpus 1 OUTPUT_DIR ./OUTPUT/vit_14 MODEL.SAN.NUM_CLASSES 1
```
Can you provide me with some help, or tell me where the problem might be? I would be very grateful!