ShikunLi / Sel-CL

CVPR 2022: Selective-Supervised Contrastive Learning with Noisy Labels

Training Efficiency #2

Open LinghaoChan opened 2 years ago

LinghaoChan commented 2 years ago

This is good work. However, the training efficiency is somewhat low.

In the training stage, GPU utilization is about 50-60%. In the "pair-wise selection" stage, GPU utilization is approximately 0%.

At first, I thought this was because the program runs CPU operations in the "pair-wise selection" stage. I set up some checkpoints in that stage and found that most of the time is spent on the "Weighted k-nn correction" (code link).
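For reference, here is a minimal sketch of how a similarity-weighted k-NN label vote could be kept entirely on the GPU. This is my own illustration, not the repo's function; the function name, `k`, `temperature`, and the chunking note are all assumptions:

```python
# A minimal sketch (not the repo's exact function) of weighted k-NN label
# correction done entirely on the GPU, assuming `features` are L2-normalized.
import torch
import torch.nn.functional as F

def weighted_knn_correct(features, labels, num_classes, k=200, temperature=0.1):
    """Relabel each sample by a similarity-weighted vote over its k nearest
    neighbors in feature space. `k` and `temperature` are illustrative values.

    features: (N, D) L2-normalized float tensor on GPU
    labels:   (N,) integer tensor on GPU
    """
    # Cosine similarity between all pairs; for large N this (N, N) matrix
    # should be computed in chunks to bound GPU memory.
    sim = features @ features.t()                       # (N, N)
    sim.fill_diagonal_(-float('inf'))                   # exclude self-matches
    topk_sim, topk_idx = sim.topk(k, dim=1)             # (N, k) each
    weights = (topk_sim / temperature).exp()            # similarity -> vote weight
    neighbor_labels = labels[topk_idx]                  # (N, k)
    one_hot = F.one_hot(neighbor_labels, num_classes).float()  # (N, k, C)
    votes = (weights.unsqueeze(-1) * one_hot).sum(dim=1)       # (N, C)
    return votes.argmax(dim=1)                          # corrected labels
```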

Looking forward to your suggestions for improving training efficiency.

Hlic818 commented 2 years ago

I just got 91.31% top-1 accuracy on CIFAR-10 (symmetric noise 0.2). Did you manage to reproduce the authors' experimental results from the original paper?

miladdehghani1 commented 1 year ago

How much RAM is needed to run the code for this paper? (I am running it on Colab, but the session crashes after exhausting RAM.)
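In case it helps narrow down where it crashes, a small sketch for logging memory around the suspect steps (psutil is assumed available; it ships with Colab by default, and the helper name is illustrative):

```python
# Log current process RSS and system-wide available RAM in GiB.
import psutil

def log_ram(tag=""):
    """Print this process's resident memory and the system's free RAM."""
    rss = psutil.Process().memory_info().rss / 2**30
    avail = psutil.virtual_memory().available / 2**30
    print(f"[{tag}] process RSS: {rss:.2f} GiB, system available: {avail:.2f} GiB")

# e.g. call log_ram("before k-NN") / log_ram("after k-NN") around the
# suspected allocation to see which step exhausts memory.
```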