Closed chenjiachengzzz closed 2 years ago
Hi @chenjiachengzzz, below is the Linux command we use for large-patch finetuning with 4 GPUs:
CUDA_VISIBLE_DEVICES=0,1,2,3 python -u -m torch.distributed.run --nproc_per_node=4 \
--master_port=8889 main.py --distributed --config-base configs/RCAN/RCAN_Improved.yaml \
--config-file configs/RCAN/RCAN_x2_LP.yaml SOLVER.ITERATION_RESTART True SOLVER.ITERATION_TOTAL 40000 \
SOLVER.BASE_LR 8e-4 MODEL.PRE_TRAIN baseline/RCAN_x2_baseline/model/model_latest.pth.tar
Please change the checkpoint to your pretrained model path. Let me know if you have additional questions!
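For reference, a hedged sketch of how you might adapt the command above to your own setup (the checkpoint path and GPU count here are placeholders, not values from this repo; halving the learning rate when halving the GPU count is a common heuristic, not something confirmed by the authors):

```shell
# Sketch: same command adapted to 2 GPUs and a user-supplied checkpoint.
# MODEL.PRE_TRAIN must point at *your* pretrained model file.
CUDA_VISIBLE_DEVICES=0,1 python -u -m torch.distributed.run --nproc_per_node=2 \
    --master_port=8889 main.py --distributed \
    --config-base configs/RCAN/RCAN_Improved.yaml \
    --config-file configs/RCAN/RCAN_x2_LP.yaml \
    SOLVER.ITERATION_RESTART True SOLVER.ITERATION_TOTAL 40000 \
    SOLVER.BASE_LR 4e-4 \
    MODEL.PRE_TRAIN /path/to/your/pretrained/model.pth.tar
```

The trailing `KEY VALUE` pairs are config overrides applied on top of the two YAML files, so anything else in `RCAN_x2_LP.yaml` can be overridden the same way without editing the file.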
thanks
Hello, could you give me an example of fine-tuning the model with large patches?