tfzhou / ContrastiveSeg

ICCV2021 (Oral) - Exploring Cross-Image Pixel Contrast for Semantic Segmentation
MIT License
667 stars 88 forks

loss_contrast #54

Open Summer77723 opened 2 years ago

Summer77723 commented 2 years ago

Hello. Thanks for your great work. I have some questions about the code: what is the meaning of `with_embed`, and why is `loss_contrast` not used?

1. In the loss computation:

        if with_embed is True:
            return loss + self.loss_weight * loss_contrast

        return loss + 0 * loss_contrast  # just a trick to avoid errors in distributed training

2. In the training loop:

        if is_distributed():
            import torch.distributed as dist

            def reduce_tensor(inp):
                """
                Reduce the loss from all processes so that
                process with rank 0 has the averaged results.
                """
                world_size = get_world_size()
                if world_size < 2:
                    return inp
                with torch.no_grad():
                    reduced_inp = inp
                    dist.reduce(reduced_inp, dst=0)
                return reduced_inp

            loss = self.pixel_loss(outputs, targets, with_embed=with_embed)

            backward_loss = loss
            display_loss = reduce_tensor(backward_loss) / get_world_size()
        else:
            backward_loss = display_loss = self.pixel_loss(outputs, targets)

Looking forward to your reply!
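(For reference, the averaging that `reduce_tensor` performs can be illustrated without `torch.distributed`. The `reduce_mean` function below is a hypothetical stand-in, not part of the repo: it mimics `dist.reduce(inp, dst=0)`, which sums the loss from every rank onto rank 0, followed by the division by `get_world_size()`.)

```python
def reduce_mean(per_rank_losses):
    # Hypothetical stand-in for reduce_tensor(...) / get_world_size():
    # each list element plays the role of one process's loss value.
    world_size = len(per_rank_losses)
    if world_size < 2:
        # single process: nothing to reduce
        return per_rank_losses[0]
    # dist.reduce sums the tensor from every rank onto dst=0;
    # dividing by world_size turns that sum into an average
    return sum(per_rank_losses) / world_size
```

So `display_loss` is only the averaged value for logging; `backward_loss` is still each process's own local loss.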

tfzhou commented 2 years ago

Hi @Summer77723, our code has a warmup stage in which the contrastive loss is not applied, i.e., the weight of the contrastive loss is zero.
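To make that concrete, here is a minimal sketch of how `with_embed` gates the contrastive term, with plain floats standing in for the loss tensors (an assumption for illustration). During warmup the term is multiplied by zero but kept in the expression; with real tensors this keeps the embedding head in the autograd graph, so `DistributedDataParallel` does not complain about parameters that never receive gradients.

```python
def total_loss(seg_loss, contrast_loss, loss_weight, with_embed):
    # Mirrors the snippet from the question; floats stand in for tensors.
    if with_embed:
        # after warmup: contrastive loss contributes with its weight
        return seg_loss + loss_weight * contrast_loss
    # warmup: the contrastive term contributes nothing numerically, but
    # with tensors it keeps the embedding head wired into the backward pass
    return seg_loss + 0 * contrast_loss
```

In other words, `loss_contrast` is always computed, but its contribution to the optimized loss is zero until the warmup stage ends.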

Summer77723 commented 2 years ago

Thank you for your reply.