Closed Brightlcz closed 2 months ago
Thanks for your interest!
To train on a machine with a single GPU, please run the following command:
$ python -m torch.distributed.launch --nproc_per_node=1 train_vimeo90k.py --world_size 1 --model_name 'IFRNet' --epochs 300 --batch_size 6 --lr_start 1e-4 --lr_end 1e-5
You can increase the batch size if your GPU has enough memory.
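As a side note, on recent PyTorch versions (1.10+) `torch.distributed.launch` is deprecated in favor of `torchrun`. Assuming the script and flags from the command above, an equivalent single-GPU invocation would be (a sketch, not tested against this repo):

```shell
# torchrun sets the distributed env vars (RANK, WORLD_SIZE, etc.) itself,
# so it replaces the python -m torch.distributed.launch wrapper.
# Script name and arguments are taken unchanged from the command above.
torchrun --nproc_per_node=1 train_vimeo90k.py \
    --world_size 1 --model_name 'IFRNet' --epochs 300 \
    --batch_size 6 --lr_start 1e-4 --lr_end 1e-5
```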
Hello author, thank you for your excellent work.
Your code seems to be set up for multi-GPU distributed training, but I only have one GPU. Could you please tell me what to change for single-GPU training, so that I can debug it on a single card?
Looking forward to your reply.