
[Re] A Simple Framework for Contrastive Learning of Visual Representations #76

Open. ADevillers opened this issue 7 months ago.

ADevillers commented 7 months ago

Original article: T. Chen, S. Kornblith, M. Norouzi, and G. Hinton. "A simple framework for contrastive learning of visual representations." In: International Conference on Machine Learning. PMLR, 2020, pp. 1597–1607.

PDF URL: https://github.com/ADevillers/SimCLR/blob/main/report.pdf
Metadata URL: https://github.com/ADevillers/SimCLR/blob/main/report.metadata.tex
Code URL: https://github.com/ADevillers/SimCLR/tree/main

Scientific domain: Representation Learning
Programming language: Python
Suggested editor: @rougier

rougier commented 7 months ago

Thanks for your submission and sorry for the delay. We'll assign an editor soon.

rougier commented 7 months ago

@gdetor @benoit-girard @koustuvsinha Can any of you edit this submission?

benoit-girard commented 7 months ago

I can do it!

benoit-girard commented 7 months ago

Good news: @charlypg has accepted to review this paper and its companion!

benoit-girard commented 7 months ago

@pps121 would you like to review this paper? And possibly (or alternatively) its companion paper https://github.com/ReScience/submissions/issues/77 ? Let me know!

charlypg commented 6 months ago

Hello everybody. I am going to review SimCLR and then BYOL. I have a lot to do for my own research over the next two weeks, but I think I can deliver my review before the 25th. Is that OK for you? It will also depend on the required computational resources.

benoit-girard commented 6 months ago

@bsciolla would you like to review this paper? And possibly (or alternatively) its companion paper https://github.com/ReScience/submissions/issues/77 ? Let me know!

benoit-girard commented 5 months ago

@cJarvers would you like to review this paper? And possibly (or alternatively) its companion paper https://github.com/ReScience/submissions/issues/77 ? Let me know!

benoit-girard commented 5 months ago

@schmidDan would you like to review this paper? And possibly (or alternatively) its companion paper https://github.com/ReScience/submissions/issues/77 ? Let me know!

benoit-girard commented 5 months ago

@charlypg do you have an idea when you could be able to deliver your review?

benoit-girard commented 5 months ago

@mo-arvan would you like to review this paper? And possibly (or alternatively) its companion paper https://github.com/ReScience/submissions/issues/77 ? Let me know!

benoit-girard commented 5 months ago

@pena-rodrigo would you like to review this paper? And possibly (or alternatively) its companion paper https://github.com/ReScience/submissions/issues/77 ? Let me know!

benoit-girard commented 5 months ago

@bagustris would you like to review this paper? And possibly (or alternatively) its companion paper https://github.com/ReScience/submissions/issues/77 ? Let me know!

benoit-girard commented 5 months ago

@birdortyedi would you like to review this paper? And possibly (or alternatively) its companion paper https://github.com/ReScience/submissions/issues/77 ? Let me know!

benoit-girard commented 5 months ago

@MiWeiss would you like to review this paper? And possibly (or alternatively) its companion paper https://github.com/ReScience/submissions/issues/77 ? Let me know!

MiWeiss commented 5 months ago

> @MiWeiss would you like to review this paper? And possibly (or alternatively) its companion paper #77 ? Let me know!

Hi @benoit-girard. Unfortunately, I am currently not available - and I am afraid I also would not have quite the compute needed to run the code of this paper ;-)

charlypg commented 5 months ago

Hello everybody. I am really sorry for the delay. First of all, thank you for this work, which may benefit the community: reproduction in machine learning is always complicated, as tips and tricks are not always spelled out in the articles themselves.
Here are two lists, one for the good aspects and one for the problems I encountered.

Good:

Problems:

ADevillers commented 4 months ago

Dear Reviewer (@charlypg),

Thank you very much for your insightful feedback.

I will do my best to provide, as soon as possible, the minimal configuration required to run the code on a single (non-Jean Zay) GPU machine. However, I would like to highlight a challenge: I currently do not have access to a machine with these specifications. My resources are limited to Jean Zay and a CPU-only laptop, which may complicate developing and testing this configuration (hopefully not for long).

Regarding the "Error tracker: world_size missing argument for tracker" issue, it is my bad (and it is now fixed). This error was indeed a typo on my part, coming from recent code updates related to the warning mentioned right after in your review.

As for the warning "A reduction issue may have occurred (abs(50016.0 - 1563.0*1) >= 1)": this problem comes from an unresolved issue in PyTorch's distributed operations that can produce incorrect reductions and hence erroneous results (for further details, see https://discuss.pytorch.org/t/distributed-all-reduce-returns-strange-results/89248). Unfortunately, if this warning is triggered, it indicates that the results of the current epoch (often the final one) are unreliable. The recommended approach in this case is to restart the experiment from the previous checkpoint.
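For illustration, here is a minimal sketch of the kind of consistency check behind this warning, assuming a per-rank counter (e.g., a number of processed batches) that every rank reduces with a SUM; the function name and message format are my own, not necessarily those of the repository:

```python
import torch
import torch.distributed as dist

def checked_all_reduce(local_count: float, world_size: int) -> float:
    """Sum a per-rank counter across all ranks and warn when the result
    differs from the expected local_count * world_size (this assumes
    every rank holds the same local_count)."""
    t = torch.tensor([local_count])  # move to the GPU when using NCCL
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    reduced = t.item()
    if abs(reduced - local_count * world_size) >= 1:
        # The epoch's results are unreliable; restart from the previous checkpoint.
        print(f"A reduction issue may have occurred "
              f"(abs({reduced} - {local_count}*{world_size}) >= 1)")
    return reduced
```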

Regarding the top-5 accuracy metric, it should be computed automatically and made available through TensorBoard. Could you please clarify whether you encountered any difficulties accessing these results?
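For reference, a top-k metric such as top-5 accuracy can be computed as in the following minimal sketch; this is a generic illustration rather than the exact code of the repository:

```python
import torch

def topk_accuracy(logits: torch.Tensor, targets: torch.Tensor, k: int = 5) -> float:
    """Fraction of samples whose true label is among the k highest logits."""
    _, pred = logits.topk(k, dim=1)          # (N, k) indices of the top-k classes
    correct = pred.eq(targets.unsqueeze(1))  # (N, k) boolean matches
    return correct.any(dim=1).float().mean().item()
```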

Best regards, Alexandre DEVILLERS

charlypg commented 4 months ago

Dear @ADevillers ,

Thank you for your response. I will try the evaluation on other checkpoints. By the way, what do "even" and "odd" mean regarding checkpoints?

Thank you in advance, Charly PECQUEUX--GUÉZÉNEC

ADevillers commented 4 months ago

Dear @charlypg,

To clarify this part of the checkpointing strategy, this involves alternating saves between "odd" and "even" checkpoints at the end of each respective epoch. This trick ensures that if a run fails during an odd-numbered epoch, we have the state from the preceding epoch in the "even" checkpoint, and vice versa.
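As a minimal sketch (with hypothetical file names, not necessarily those used in the repository), the strategy could look like:

```python
import torch

def save_alternating_checkpoint(state: dict, epoch: int, prefix: str = "expe") -> str:
    """Alternate between two checkpoint files, so that a failure while
    writing one of them still leaves the other (previous epoch) intact."""
    parity = "even" if epoch % 2 == 0 else "odd"
    path = f"{prefix}_{parity}.pt"
    torch.save(state, path)
    return path
```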

Please feel free to reach out if you have any further questions.

Best regards, Alexandre

benoit-girard commented 4 months ago

@charlypg : thanks a lot for the review.

benoit-girard commented 4 months ago

> @MiWeiss would you like to review this paper? And possibly (or alternatively) its companion paper #77 ? Let me know!
>
> Hi @benoit-girard. Unfortunately, I am currently not available - and I am afraid I also would not have quite the compute needed to run the code of this paper ;-)

Thanks a lot for your answer.

benoit-girard commented 4 months ago

@ReScience/reviewers I am looking for a reviewer with expertise in machine learning to review this submission and possibly (or alternatively) its companion paper https://github.com/ReScience/submissions/issues/77

charlypg commented 4 months ago

Dear @ADevillers ,

Thank you for your answer.

I have a question about the training. Once the job corresponding to "run_simclrimagenet.slurm" has successfully ended, I only obtain one checkpoint of the form "expe[jobid][epoch_number].pt". If I understand your paper correctly (the "Jobs too long and checkpoints" paragraph), do you submit the same Slurm script multiple times to reach 800 epochs? If so, is the checkpoint from which you resume training the only thing you modify in the script?

Best regards, Charly PECQUEUX--GUÉZÉNEC

ADevillers commented 4 months ago

Dear @charlypg ,

Yes, the script itself remains unchanged; the only thing that varies is the checkpoint. No checkpoint is provided for the first execution; for every subsequent job, I use the last checkpoint from the preceding job. This checkpoint contains all the relevant state, including the current epoch, scheduler, optimizer, and model, allowing training to resume from where it was interrupted. Note that you should not modify the other hyperparameters while doing so, as this may lead to unexpected behavior.
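For illustration, the resume logic could look like the following minimal sketch (the checkpoint keys are assumptions on my side, not necessarily the exact ones used in the repository):

```python
import torch

def resume_if_checkpoint(model, optimizer, scheduler, ckpt_path=None) -> int:
    """Restore the training state from a checkpoint produced by a previous
    job; without a checkpoint, start from scratch. Returns the epoch at
    which training should resume."""
    start_epoch = 0
    if ckpt_path is not None:
        ckpt = torch.load(ckpt_path, map_location="cpu")
        model.load_state_dict(ckpt["model"])
        optimizer.load_state_dict(ckpt["optimizer"])
        scheduler.load_state_dict(ckpt["scheduler"])
        start_epoch = ckpt["epoch"] + 1  # continue after the last completed epoch
    return start_epoch
```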

Best regards, Alexandre

charlypg commented 2 months ago

Dear @ADevillers ,

I am sorry for my late response.

I was able to reproduce the top-1 results on Jean Zay, so the reproduction seems convincing to me.

However, I cannot find the top-5 results. I see there is a "runs" folder, but most of my evaluation results have not been stored in it.

Best regards, Charly PECQUEUX--GUÉZÉNEC

ADevillers commented 2 months ago

Dear @charlypg,

Your runs should normally be stored in the "runs" folder, in a format readable by TensorBoard, and contain all the curves (including top-5 accuracy).

Note that, when starting from a checkpoint, the data are appended to the file corresponding to the checkpoint's original run. Therefore, a run on ImageNet, even if it requires 6 to 7 restarts from checkpoints, will only produce one file (containing everything).
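As a hypothetical illustration of this appending behavior (the directory name, tags, and values below are placeholders): pointing a SummaryWriter at the same log directory across restarts makes TensorBoard merge all the event files into a single run.

```python
from torch.utils.tensorboard import SummaryWriter

# Reuse the directory named after the first job, so that restarted jobs
# continue the same TensorBoard run (placeholder names and values).
writer = SummaryWriter(log_dir="runs/expe_1234567")
epoch, top1, top5 = 100, 0.62, 0.84
writer.add_scalar("eval/top1_acc", top1, global_step=epoch)
writer.add_scalar("eval/top5_acc", top5, global_step=epoch)
writer.close()
```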

To help locate the issue, could you please answer the following questions:

  1. Is your "runs" folder empty?
  2. Have you been able to open tensorboard with the "runs" folder?
  3. If so, do you see any runs/curves?
  4. Are you able to find in the runs list the ones starting with the same ID as the first job of your run?
  5. If so, is there any curve you are able to see for these runs?

Best, Alexandre DEVILLERS

rougier commented 1 month ago

@benoit-girard Gentle reminder