Closed · LinglanZhao closed this issue 4 years ago
Hi @LinglanZhao, sorry for the late reply. You can set `transforms` to `None` in https://github.com/kaixin96/PANet/blob/master/test.py#L60 and modify https://github.com/kaixin96/PANet/blob/master/models/fewshot.py#L55 to forward the support and query images separately.
Thank you.
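The suggestion above can be sketched as follows. This is a minimal NumPy illustration, not the PANet code itself: once the resize transform is disabled, support and query images keep their original (and generally different) spatial sizes, so they can no longer be stacked into one batch and must be forwarded one at a time. `toy_forward` below is a hypothetical stand-in for the network.

```python
import numpy as np

def toy_forward(image):
    # Hypothetical stand-in for a segmentation network: returns a
    # per-pixel "foreground score" with the same spatial size as the input.
    return image.mean(axis=0)  # shape (H, W)

# Support and query images at their ORIGINAL resolutions (C, H, W).
# Their spatial sizes differ, so they cannot be stacked into one tensor.
support_img = np.random.rand(3, 375, 500)
query_img = np.random.rand(3, 333, 417)

# Forward each image separately instead of batching them together.
support_out = toy_forward(support_img)
query_out = toy_forward(query_img)

print(support_out.shape)  # (375, 500)
print(query_out.shape)    # (333, 417)
```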
Thanks for your reply!
Hi Kaixin, it seems that during both training and testing, the support and query images are resized to a fixed size (e.g. [417, 417]). However, in many few-shot segmentation works, the segmentation mask output is resized back to the original image resolution for evaluation. How can I get the original query images and the corresponding ground-truth masks during the testing phase? I also printed some unused key-value pairs from the dataloader dictionary, but they all have the same fixed shape:
```
sample_batched['support_images_t'][0][0].shape = torch.Size([1, 3, 417, 417])
sample_batched['query_images_t'][0].shape = torch.Size([1, 3, 417, 417])
sample_batched['query_masks'][0][0].shape = torch.Size([1, 1, 417, 417])
sample_batched['query_labels'][0].shape = torch.Size([1, 417, 417])
```
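If the fixed-size pipeline is kept, another common option in few-shot segmentation evaluation (an assumption here, not something PANet's released code does) is to resize the 417×417 prediction back to each image's original resolution before computing metrics. A minimal nearest-neighbour upsampling sketch in NumPy, which preserves the integer label values:

```python
import numpy as np

def resize_mask_nearest(mask, out_h, out_w):
    # Nearest-neighbour resize for a (H, W) integer label mask:
    # map each output pixel back to its source pixel by integer scaling.
    in_h, in_w = mask.shape
    rows = (np.arange(out_h) * in_h) // out_h
    cols = (np.arange(out_w) * in_w) // out_w
    return mask[rows[:, None], cols[None, :]]

# A 417x417 binary prediction, resized back to a 375x500 original image.
pred = np.random.randint(0, 2, (417, 417))
pred_full = resize_mask_nearest(pred, 375, 500)
print(pred_full.shape)  # (375, 500)
```

Nearest-neighbour (rather than bilinear) interpolation is used so that no new, non-integer label values are introduced in the resized mask.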