Yang-Bob / PMMs

Prototype Mixture Models for Few-shot Semantic Segmentation

question about K-shot setting #4

Open JJ-res101 opened 4 years ago

JJ-res101 commented 4 years ago

Hi! I found no K-shot settings in voc_train.py or coco_train.py, but they do appear in voc_val.py and coco_val.py. How do I run experiments with the K-shot setting?

JJ-res101 commented 4 years ago

Another question: layer55 and layer56 in FPMMS.py are different from layer55 and layer56 in FRPMMs.py. The latter uses dropout while the former uses batch normalization. Why is that? To make the difference concrete, I mean something like the sketch below (the actual channel sizes and ordering in the repo may differ; this is just an illustration):
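
```python
import torch.nn as nn

# Illustrative only: the real layer55/layer56 definitions may use different
# channel sizes and ordering; this just contrasts a BatchNorm block with a
# Dropout block of the same shape.
layer_bn = nn.Sequential(                 # FPMMS.py-style: Conv + BatchNorm + ReLU
    nn.Conv2d(256, 256, kernel_size=3, padding=1, bias=False),
    nn.BatchNorm2d(256),
    nn.ReLU(inplace=True),
)

layer_dropout = nn.Sequential(            # FRPMMs.py-style: Conv + ReLU + Dropout
    nn.Conv2d(256, 256, kernel_size=3, padding=1, bias=True),
    nn.ReLU(inplace=True),
    nn.Dropout2d(p=0.5),
)
```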

Yang-Bob commented 4 years ago

Hi! In our method, the K-shot and 1-shot settings differ only in the inference stage; their training stages are identical. For the K-shot setting, the model is trained in 1-shot mode; at inference, the k support images are sent to the PMMs together to estimate the prototypes. As for the second question: since RPMMs is much more complex than PMMs, we replace batch normalization with dropout to keep the model from overfitting.
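
For anyone wondering what "sending k support images to the PMMs together" could look like, here is a minimal sketch: it pools the foreground features of all k supports and runs a simple soft-assignment EM to estimate prototypes. The helper names, shapes, cosine distance, and temperature are assumptions for illustration, not the repository's actual PMMs implementation.

```python
import torch
import torch.nn.functional as F

def extract_fg_features(feat, mask):
    # feat: (C, H, W) support feature map, mask: (H, W) binary foreground mask
    # (hypothetical shapes for this sketch). Returns the foreground vectors (N_fg, C).
    c = feat.shape[0]
    fg = feat.view(c, -1)[:, mask.view(-1) > 0.5]  # keep only foreground positions
    return fg.t()

def kshot_prototypes(support_feats, support_masks, num_proto=3, em_iters=10):
    # Pool foreground features from all k support images before estimating prototypes.
    feats = torch.cat([extract_fg_features(f, m)
                       for f, m in zip(support_feats, support_masks)], dim=0)  # (N, C)
    idx = torch.randperm(feats.shape[0])[:num_proto]   # random init from foreground vectors
    protos = feats[idx].clone()                        # (num_proto, C)
    for _ in range(em_iters):
        # E-step: soft-assign every feature to every prototype by cosine similarity.
        sim = F.normalize(feats, dim=1) @ F.normalize(protos, dim=1).t()  # (N, num_proto)
        assign = torch.softmax(sim * 10.0, dim=1)      # temperature 10 is an arbitrary choice
        # M-step: each prototype becomes the assignment-weighted mean of the features.
        protos = (assign.t() @ feats) / (assign.sum(dim=0, keepdim=True).t() + 1e-6)
    return protos

# 5-shot toy example with random tensors standing in for backbone features and masks.
support_feats = [torch.randn(256, 50, 50) for _ in range(5)]
support_masks = [(torch.rand(50, 50) > 0.7).float() for _ in range(5)]
prototypes = kshot_prototypes(support_feats, support_masks)  # (3, 256)
```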

JJ-res101 commented 4 years ago

I see. Thanks!

qiulesun commented 4 years ago

@Yang-Bob You said that the K-shot and 1-shot settings are the same in the training stage and differ only at inference. Is this standard practice?