twke18 / SPML

Universal Weakly Supervised Segmentation by Pixel-to-Segment Contrastive Learning
https://twke18.github.io/projects/spml.html
MIT License

How to predict prototype label without ground truth semantic label #6

Open Anna0509 opened 3 years ago

Anna0509 commented 3 years ago

Hi, In "pyscripts/inference/prototype.py" (line 202), the ground truth "semantic label" is required to predict "prototype_labels". How can I get the prototype_labels if I only have partial (e.g. 10 pixels) ground truth "semantic label"?

twke18 commented 3 years ago

The script assumes that you have per-pixel semantic segmentation labels. For inference under weakly-supervised settings, there are two steps: 1) use the learned features to propagate the softmax classifier predictions (this line), and 2) re-train (only) the softmax classifier (this line) to predict the semantic segmentation results (this line).
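
To make the two steps concrete, here is a rough sketch of how the propagation and classifier re-training could look. The function and tensor names are placeholders and this is not the actual SPML code (please follow the linked lines for that); it only illustrates the idea of propagating predictions through feature affinity and then fitting a linear softmax classifier on frozen features.

```python
import torch
import torch.nn.functional as F

def propagate_predictions(pixel_feats, proto_feats, proto_logits, temperature=0.1):
    """Step 1 (sketch): propagate classifier predictions to pixels via feature affinity.

    pixel_feats:  (N, D) L2-normalized pixel embeddings.
    proto_feats:  (M, D) L2-normalized prototype embeddings.
    proto_logits: (M, C) softmax classifier logits for each prototype.
    Returns (N,) pseudo-labels used to re-train the classifier.
    """
    affinity = torch.softmax(pixel_feats @ proto_feats.t() / temperature, dim=1)  # (N, M)
    pixel_probs = affinity @ torch.softmax(proto_logits, dim=1)                   # (N, C)
    return pixel_probs.argmax(dim=1)

def retrain_softmax_classifier(pixel_feats, pseudo_labels, num_classes, epochs=20, lr=0.1):
    """Step 2 (sketch): re-train only a linear softmax classifier on frozen features."""
    classifier = torch.nn.Linear(pixel_feats.shape[1], num_classes)
    optimizer = torch.optim.SGD(classifier.parameters(), lr=lr)
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = F.cross_entropy(classifier(pixel_feats), pseudo_labels)
        loss.backward()
        optimizer.step()
    # classifier(pixel_feats).argmax(1) then gives the final segmentation.
    return classifier
```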

jyhjinghwang commented 3 years ago

@Anna0509 Have you successfully made predictions w/o any labeled set using the instructions that @twke18 provided?