user0407 / CLUDA

Implementation of CLUDA: Contrastive Learning in Unsupervised Domain Adaptation for Semantic Segmentation

Question about the implementation of the contrastive loss in CLUDA #6

Open super233 opened 1 year ago

super233 commented 1 year ago

Hi, I'm a newbie to contrastive learning and I'm sorry to bother you again. I have doubts about the implementation of contrastive_loss in mmseg/models/losses/contrastive_loss.py.

I have noticed that you process labels_ with view(); the key code is as follows:

labels_ = labels_.view(labels_.shape[0], -1, 1)  # (B, H * W, 1)
labels = labels_.view(-1, 1)  # (B * H * W, 1)

However, you use unbind() to generate contrast_feature, which could cause a mismatch between labels and contrast_feature in the order of their internal data. The key code is as follows:

feats_ = feats_.permute(0, 2, 3, 1)  # (B, H, W, C)
feats_ = feats_.view(feats_.shape[0], -1, feats_.shape[3])  # (B, H * W, C)
feats_ = F.normalize(feats_, dim=-1)

contrast_feature = torch.cat(torch.unbind(feats_, dim=1), dim=0)  # H * W tensors with shape (B, C) are concatenated into a tensor with shape (H * W * B, C)

Please note that the tensor feats_ with shape (B, H * W, C) is converted into H * W tensors with shape (B, C) by unbind(), and these H * W tensors are then concatenated into the tensor contrast_feature with shape (H * W * B, C). Although contrast_feature and labels have the right shapes, the orders of their internal data do not match.

For more details, please see the picture attached to the original issue.
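
To make the ordering concern concrete, here is a minimal sketch with hypothetical toy sizes (not the repository's actual tensors) showing that unbind() + cat() flattens the features in pixel-major order, while view(-1, 1) flattens the labels in batch-major order:

import torch

B, HW, C = 2, 3, 4  # hypothetical toy sizes
feats_ = torch.arange(B * HW * C, dtype=torch.float32).view(B, HW, C)
labels_ = torch.arange(B * HW).view(B, HW, 1)

# unbind(dim=1) + cat(dim=0): row i holds (pixel i // B, sample i % B) -> pixel-major
contrast_feature = torch.cat(torch.unbind(feats_, dim=1), dim=0)  # (HW * B, C)

# view(-1, 1): row i holds (sample i // HW, pixel i % HW) -> batch-major
labels = labels_.view(-1, 1)  # (B * HW, 1)

print(torch.equal(contrast_feature, feats_.reshape(-1, C)))  # False: the two orders differ

# One possible alignment (an assumption on my part, not necessarily the authors' intent):
# flatten the features the same way the labels are flattened.
contrast_feature_aligned = feats_.reshape(-1, C)  # batch-major, matches labels

Under this reading, either flattening the features with reshape(-1, C) or flattening the labels with the same unbind() + cat() pattern would restore a consistent pairing.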

user0407 commented 1 year ago

Hi,

Thank you for your input. I will look into this and get back to you as soon as possible.

super233 commented 1 year ago

@user0407 What do you think about it? I'm looking forward to your reply.