FlamieZhu / Balanced-Contrastive-Learning

Code Release for “Balanced Contrastive Learning for Long-Tailed Visual Recognition”
MIT License

How to understand feat_mlp, logits, centers_logits in 2D image network? #1

Open whuhxb opened 2 years ago

whuhxb commented 2 years ago

Hi @FlamieZhu

I'm trying to apply the loss you proposed to 3D point clouds, and I have one question: how should I understand feat_mlp, logits, and centers_logits in the 2D image network? If I use this loss for 3D semantic segmentation, what should these correspond to? Thanks a lot.

I also noticed that you use CE+SCL during training but only CE during testing. Could I still use CE+SCL at test time?

Xiaobing

FlamieZhu commented 1 year ago

Hi, both feat_mlp and centers_logits are designed to boost representation learning; the final prediction depends only on logits. I'm not familiar with 3D segmentation, but SCL is only used to help learn a more balanced feature space and thereby obtain better predictions, so it can be discarded during testing.
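For reference, here is a minimal sketch of how the three outputs are typically wired up during training versus inference: feat_mlp and centers_logits feed the supervised contrastive term, logits feeds cross-entropy and is the only thing needed at test time. The names model, scl_criterion, the loss weights, and the criterion's call signature are assumptions for illustration, not the repository's exact API.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch: the actual model/loss classes in the repo may differ.
# model(x) is assumed to return (feat_mlp, logits, centers_logits), where
#   feat_mlp       - projection features for the contrastive branch
#   logits         - classifier outputs used for cross-entropy and for prediction
#   centers_logits - similarities to learnable class centers (prototypes)

def training_step(model, scl_criterion, images, targets, alpha=1.0, beta=1.0):
    """One training step: CE on logits plus SCL on (feat_mlp, centers_logits)."""
    feat_mlp, logits, centers_logits = model(images)
    ce_loss = F.cross_entropy(logits, targets)
    scl_loss = scl_criterion(centers_logits, feat_mlp, targets)  # signature assumed
    return alpha * ce_loss + beta * scl_loss

@torch.no_grad()
def predict(model, images):
    """Inference: the contrastive branch is discarded; only logits are used."""
    _, logits, _ = model(images)
    return logits.argmax(dim=1)
```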

Serissa commented 1 year ago


Hi, did the loss improve your 3D segmentation results?

whuhxb commented 1 year ago


Still trying.