I believe that in this part of the code another entry, 255: "Background", should be added to the id2label dict used for training. If this is not done, the prediction masks can never contain the background class: in the current implementation, checking the unique labels after inference shows that the whole image consists only of labels 0-149, and the background class never appears. That is technically incorrect. Feel free to give me feedback if you think otherwise.
# load the id2label mapping from a JSON file on the Hub
import json
from huggingface_hub import hf_hub_download

repo_id = "huggingface/label-files"
filename = "ade20k-id2label.json"
with open(hf_hub_download(repo_id=repo_id, filename=filename, repo_type="dataset"), "r") as f:
    id2label = json.load(f)
# NOTE: my suggestion -- give the ignore/background pixels an explicit label
id2label[255] = "Background"
id2label = {int(k): v for k, v in id2label.items()}
label2id = {v: k for k, v in id2label.items()}
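To make the effect of that extra entry concrete, here is a minimal, self-contained sketch. The three-entry dict is a hypothetical stand-in for the real ADE20K file, which has 150 entries with string keys ("0" through "149"):

```python
# Hypothetical stand-in for the ADE20K mapping loaded from the Hub.
id2label = {"0": "wall", "1": "building", "149": "flag"}

# Suggested extra entry so the ignore/background pixels have a name too.
id2label[255] = "Background"

# Normalise keys to int (int(255) is a no-op, so the mixed keys are harmless).
id2label = {int(k): v for k, v in id2label.items()}
label2id = {v: k for k, v in id2label.items()}

print(id2label[255])            # Background
print(label2id["Background"])   # 255
```

Note that adding the 255 entry before the int-conversion works because `int()` accepts both the string keys from the JSON file and the int key added by hand.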
One can verify this by running torch.Tensor(predicted_segmentation_map).squeeze().unique() on a prediction after training: every pixel is assigned one of the 150 labels, and the background class never appears.
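The claim above can also be sanity-checked without any model weights. The following sketch (plain Python, hypothetical random logits) shows that a per-pixel argmax over 150 class channels can only ever produce values in range(150), so 255 cannot appear in a predicted mask unless the head has a channel for it:

```python
import random

# A segmentation head with num_labels = 150 output channels produces a
# per-pixel argmax in range(150); the value 255 can never appear.
num_labels = 150
h, w = 2, 2  # tiny fake "image"

random.seed(0)
# fake logits with shape (num_labels, h, w)
logits = [[[random.random() for _ in range(w)] for _ in range(h)]
          for _ in range(num_labels)]

# per-pixel argmax over the class axis
pred = [[max(range(num_labels), key=lambda c: logits[c][i][j])
         for j in range(w)] for i in range(h)]

assert all(0 <= p < num_labels for row in pred for p in row)
assert all(p != 255 for row in pred for p in row)
```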
For reference, this is in the Semantic Segmentation notebook.