Hi Thalles,
I went through the code and found two things I can't understand:
(1) In nt_xent.py, the line "labels = torch.zeros(2 * self.batch_size).to(self.device).long()" seems to make the labels constantly 0. Are the labels effectively unused, since they are all 0?
(2) Is Adam combined with "scheduler = torch.optim.lr_scheduler.CosineAnnealingLR" equivalent to the LARS optimizer?
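For context on question (1), here is a minimal sketch (not the repository's code; it assumes the common NT-Xent layout where each row's positive-pair similarity is placed in column 0 of the logits) of why an all-zeros label vector still carries information: with cross-entropy, each 0 is a class index pointing at the positive column, not an ignored target.

```python
import torch
import torch.nn.functional as F

# Hypothetical toy logits: column 0 holds the similarity to the positive
# pair, the remaining columns hold similarities to the 2N - 2 negatives.
batch_size = 4
num_negatives = 2 * batch_size - 2

positives = torch.full((2 * batch_size, 1), 5.0)          # high positive similarity
negatives = torch.zeros(2 * batch_size, num_negatives)    # low negative similarity
logits = torch.cat([positives, negatives], dim=1)

# All-zeros labels: class index 0 selects the positive column for every row,
# so cross-entropy pushes the positive similarity above the negatives.
labels = torch.zeros(2 * batch_size).long()
loss = F.cross_entropy(logits, labels)
print(loss.item())  # small, since the positive column already dominates
```

If the positive logits were instead low relative to the negatives, the same all-zeros labels would produce a large loss, which is how the gradient signal arises.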
Hello guys, this is a general message to say that I have refactored the whole project. Please take a look at the new implementation and feel free to submit a PR if you find any bugs. Thanks.