LongLong-Jing / Cross-Modal-Center-Loss

Cross-Modal Center Loss for 3D Cross-Modal Retrieval (CVPR2021)

The weight of the loss functions #6

Open · huacong opened this issue 2 years ago

huacong commented 2 years ago

I am really interested in your work, and I'd like to know the weights you used for the cross-modal center loss, the CE loss, and the MSE loss; I think they matter. Right now I'm using the default values from your code (CE loss: 1, cross-modal center loss: 1, MSE loss: 0.1), but it doesn't work. I am looking forward to your reply!
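For reference, this is roughly how I'm combining the three terms with the defaults above; it's only a sketch, and the function and variable names here are mine, not necessarily the ones in train.py:

```python
import torch

# Weights I'm using: CE = 1.0, cross-modal center = 1.0, MSE = 0.1.
W_CE, W_CENTER, W_MSE = 1.0, 1.0, 0.1

def total_loss(ce_loss: torch.Tensor,
               center_loss: torch.Tensor,
               mse_loss: torch.Tensor) -> torch.Tensor:
    # Weighted sum of the three loss terms; names are illustrative.
    return W_CE * ce_loss + W_CENTER * center_loss + W_MSE * mse_loss
```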

huacong commented 2 years ago

I'm still confused about this. I am looking forward to your reply. Thanks very much!

huacong commented 2 years ago

In train.py, you set 1.0 as the default value of weight_center. I think that's too large!
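For reference, the default I'm referring to looks roughly like this; it's a rough reconstruction, and the exact argparse definition in train.py may differ:

```python
import argparse

# Rough reconstruction of the default I'm referring to; the exact
# argument definition in train.py may differ.
parser = argparse.ArgumentParser()
parser.add_argument('--weight_center', type=float, default=1.0,
                    help='weight of the cross-modal center loss')
args = parser.parse_args()
```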

huacong commented 2 years ago

When I train the model, it doesn't perform well, and I hope you can share your experience. After 25 epochs, the results are as follows:

- image2image: 58.07
- point2image: 54.44
- mesh2image: 65.04
- point2point: 68.81
- image2point: 57.61
- mesh2point: 74.24
- mesh2mesh: 82.88
- image2mesh: 60.30
- point2mesh: 68.44

I don't know why I'm getting poor results; I'm using the default values from your code. I am really interested in your work and am looking forward to your reply.

LongLong-Jing commented 2 years ago

Hi @huacong,

Thanks for your interest. Are you using the default batch size, and are you training with all three modalities?

Our default batch size is 96, and there are around 9.8k objects in the training split, so in your case 25 epochs correspond to only about 2,500 iterations. By training the model for longer, you should be able to reach performance comparable to what we report in the paper. Feel free to email me if you have any other questions.
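As a quick sanity check on the iteration count (the numbers below are the approximate ones quoted above):

```python
import math

# ~9.8k training objects, default batch size 96, 25 epochs.
num_objects = 9800
batch_size = 96
epochs = 25

iters_per_epoch = math.ceil(num_objects / batch_size)  # ~102
total_iters = epochs * iters_per_epoch                  # ~2550, i.e. ~2.5k
print(iters_per_epoch, total_iters)
```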