uzh-rpg / RVT

Implementation of "Recurrent Vision Transformers for Object Detection with Event Cameras". CVPR 2023
MIT License

Resume training error #5

Closed · Qiuben closed this issue 1 year ago

Qiuben commented 1 year ago

Hi, magehrig

Sorry to bother you; I have run into a problem when resuming training.

I trained the model with three GPUs, and I set the wandb config as follows:

    wandb_runpath: zhang20010218/RVT/ja1260m8  # WandB run path. E.g. USERNAME/PROJECTNAME/1grv5kg6
    artifact_name: zhang20010218/RVT/checkpoint-ja1260m8-topK:v1  # Name of checkpoint/artifact. Required for resuming. E.g. USERNAME/PROJECTNAME/checkpoint-1grv5kg6-last:v15
    artifact_local_file: RVT/ja1260m8/checkpoints/last_epoch=000-step=100000.ckpt  # If specified, will use the provided local filepath instead of downloading it. Required if resuming with DDP.
    resume_only_weights: False
    group_name: version1.0  # Specify group name of the run
    project_name: RVT

Other than that, I didn't make any changes.

However, I get the following error:

    File "/home/zht/Python_project/RVT/loggers/wandb_logger.py", line 218, in after_save_checkpoint
      self._scan_and_log_checkpoints(checkpoint_callback, self._save_last and not self._save_last_only_final)
    File "/home/zht/Python_project/RVT/loggers/wandb_logger.py", line 321, in _scan_and_log_checkpoints
      self._rm_but_top_k(checkpoint_callback.save_top_k)
    File "/home/zht/Python_project/RVT/loggers/wandb_logger.py", line 343, in _rm_but_top_k
      score = artifact.metadata['score']
    KeyError: 'score'

Do you know what is going on? Is there something wrong with my config?

magehrig commented 1 year ago

The config you have seems correct for resuming the run. I have never encountered this error. It appears that the score (which is the validation score) is missing from the artifact metadata. Can you check (e.g. on wandb) whether the logged artifact has "score" in its metadata?
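For example, something along these lines should print the metadata (an untested sketch, using the artifact path from your config):

```python
import wandb

# Fetch the logged checkpoint artifact and inspect its metadata.
api = wandb.Api()
artifact = api.artifact("zhang20010218/RVT/checkpoint-ja1260m8-topK:v1")
print(artifact.metadata)  # should contain a 'score' entry
```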

I am starting a run myself with this code to see if I can reproduce the issue. Can you confirm that you followed the installation instructions? For that purpose please post the output of conda list here.

Btw, I think it's also a bit strange that you have ../checkpoint-ja1260m8-topK:v1 as artifact_name but ../last_epoch=000-step=100000.ckpt as artifact_local_file. Are you validating every 50k steps?

magehrig commented 1 year ago

I could replicate the issue and will look into it in the next few days.

Qiuben commented 1 year ago

Thank you for your response!

I have used the same Python environment as you (installed exactly as instructed).

Also, it is true that I validate every 50k steps. Did this cause my error? If I validate every 10k steps instead, will the error no longer occur?

Since you mentioned that you have reproduced the error, I will not post the output of conda list or check for "score" on wandb for now (I am a beginner with wandb and not sure how to use it yet). If you need me to describe any of my settings or environment, please let me know.

magehrig commented 1 year ago

Can you try again? It works for me now.

So there is a combination of weird things happening:

1. PyTorch Lightning executes the model checkpoint callback, which attempts to save the latest checkpoint even though we are actually resuming. It may be related to this PL issue: https://github.com/Lightning-AI/lightning/issues/12724. By itself, this has not been a problem for me so far.
2. I wrote a custom wandb logger that periodically uses the wandb (cloud) API to check how many artifacts are present and deletes old ones (roughly as in the sketch below). This is brittle because it relies on the wandb service being reachable and working bug-free at all times.
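In simplified form, that pruning step works something like this (a sketch of the idea only, not the exact code in wandb_logger.py; the function name and run path are placeholders):

```python
import wandb

def rm_but_top_k(run_path: str, k: int) -> None:
    """Keep only the k best checkpoint artifacts of a run, ranked by score."""
    api = wandb.Api()
    run = api.run(run_path)  # e.g. "USERNAME/PROJECTNAME/ja1260m8"
    ckpts = [a for a in run.logged_artifacts() if a.type == "model"]
    # The KeyError above is triggered when an artifact's metadata has no
    # 'score' entry; filtering first (instead of indexing blindly) avoids it.
    scored = [a for a in ckpts if "score" in a.metadata]
    scored.sort(key=lambda a: a.metadata["score"], reverse=True)
    for artifact in scored[k:]:
        artifact.delete(delete_aliases=True)
```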

I suspect there was a wandb server-side issue that they may have fixed by now.

magehrig commented 1 year ago

If my suspicion is correct, a quick workaround to prevent future issues like this is to use the default PL wandb logger, or any other logger that saves checkpoints locally and does not rely on the wandb cloud API.
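For illustration, a minimal sketch of that workaround (the monitored metric name and the paths are placeholders, not the values RVT uses):

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning.loggers import WandbLogger

# Default PL wandb logger: with log_model=False, checkpoints stay on disk
# and nothing depends on the wandb artifact (cloud) API.
logger = WandbLogger(project="RVT", log_model=False)
checkpoint_cb = ModelCheckpoint(
    dirpath="checkpoints/",  # local checkpoint directory
    monitor="val/AP",        # placeholder metric name
    mode="max",
    save_top_k=3,
    save_last=True,
)
trainer = pl.Trainer(logger=logger, callbacks=[checkpoint_cb])
# Resuming then only needs the local file, no artifact download:
# trainer.fit(model, datamodule=datamodule, ckpt_path="checkpoints/last.ckpt")
```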

Qiuben commented 1 year ago

Are you saying that the error was caused by an issue with the wandb cloud server itself, and that it works now even though you didn't modify the code? I just tried it again and encountered the same error. I don't know whether there is a problem with the wandb cloud again, or whether it is also failing for you right now. This is really a strange issue; maybe some luck is involved at runtime. The same error:

    File "/home/zht/Python_project/RVT/loggers/wandb_logger.py", line 321, in _scan_and_log_checkpoints
      self._rm_but_top_k(checkpoint_callback.save_top_k)
    File "/home/zht/Python_project/RVT/loggers/wandb_logger.py", line 343, in _rm_but_top_k
      score = artifact.metadata['score']
    KeyError: 'score'

Qiuben commented 1 year ago

I only tried resuming the training again. Do I need to train from scratch before trying to resume, given that the previous issue was that wandb did not receive the data I uploaded during the earlier training run?

magehrig commented 1 year ago

Yes, just resume again. Unfortunately, I cannot reproduce the issue anymore, which makes it hard to debug. I created a branch that should fix the issue (in a hacky way): https://github.com/uzh-rpg/RVT/tree/resume-issue Let me know if that works for you.

Qiuben commented 1 year ago

Thank you for your prompt response!

I will try as soon as possible and give you feedback on the results.

Qiuben commented 1 year ago

Great! I think you have solved the problem, and now I can continue training. However, at the beginning of the resumed training, the displayed speed seems a bit strange, like this:

Epoch 0: : 103073it [05:24, 317.98it/s, loss=2.84, v_num=60m8]

The displayed speed does not match the actual speed, and it is slowly decreasing. Still, I can now resume training normally, so I will close the issue!

Qiuben commented 1 year ago

And thank you again for your patient and helpful guidance. You have been a great help to me!

magehrig commented 1 year ago

Yeah, that is a known issue, but it does not impact training. You can ignore it. Glad that it solved your problem :)