mrsadeghi95-preteckt opened 2 years ago
I guess it is due to the multi-GPU implementation. How are the ICH and CCH losses computed in your multi-GPU implementation?
I didn't change them; I just added some commands to convert the network and data loader to multiple GPUs. I can share the code with you if you like.
In contrastive learning, samples need to be gathered from all GPUs to compute the InfoNCE loss. You may refer to our new repo https://github.com/Yunfan-Li/Twin-Contrastive-Learning for the multi-GPU implementation.
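For reference, a minimal sketch of what "gathering samples from all GPUs" typically looks like in PyTorch. This is not the authors' code: the function names (`gather_features`, `info_nce`) and the fallback for non-distributed runs are my own assumptions, and the loss is a standard two-view InfoNCE, not necessarily identical to the ICH/CCH formulation in the repo.

```python
# Hedged sketch: all-gather embeddings across GPUs before computing a
# standard InfoNCE loss. Not the authors' implementation; names and the
# single-process fallback are illustrative assumptions.
import torch
import torch.distributed as dist
import torch.nn.functional as F


def gather_features(z):
    """Collect features from every rank; identity when not distributed."""
    if dist.is_available() and dist.is_initialized():
        gathered = [torch.zeros_like(z) for _ in range(dist.get_world_size())]
        dist.all_gather(gathered, z)
        # all_gather does not propagate gradients, so re-insert the local
        # tensor to keep gradients flowing for this rank's own samples.
        gathered[dist.get_rank()] = z
        return torch.cat(gathered, dim=0)
    return z


def info_nce(z_i, z_j, temperature=0.5):
    """Two-view InfoNCE computed on the gathered (global) batch."""
    z_i = gather_features(F.normalize(z_i, dim=1))
    z_j = gather_features(F.normalize(z_j, dim=1))
    n = z_i.size(0)
    z = torch.cat([z_i, z_j], dim=0)              # (2n, d)
    sim = z @ z.t() / temperature                 # cosine similarities
    sim.fill_diagonal_(float("-inf"))             # exclude self-pairs
    # The positive for sample k is its other view at index (k + n) mod 2n.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```

The point of the gather is that without it, each GPU only sees its local mini-batch as negatives, so the effective number of negatives (and hence the loss) differs from the single-GPU run, which can explain degraded accuracy.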
Hi, thank you for sharing your code. I would like to reproduce your results on CIFAR-10. I ran your original code with 4 GPUs and the results are attached below; my final ACC (NMI) after 990 epochs is about 69% (64%). Did you use any special method and/or hyperparameters for training your network that are not uploaded to GitHub? I would appreciate it if you could help me reproduce your results. result_batch_256.txt