haochange opened this issue 5 years ago
- As shown in this repo, a small batch size (e.g., 32) can also achieve competitive results. One GPU with 5 GB of memory may be enough.
- If you want to train on multiple GPUs, you can remove the GPU-selection code in the training script and prepend CUDA_VISIBLE_DEVICES=1,2,4 (1,2,4 denote your GPU IDs) to the training command. For example, if your script is train.py, you can run: CUDA_VISIBLE_DEVICES=1,2,4 python train.py --train_all --name xxx... (see the sketch below).
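For reference, a minimal sketch of the usual PyTorch multi-GPU pattern (not the repo's exact code; the Sequential model and layer sizes here are just placeholders):

```python
import torch
import torch.nn as nn

# Placeholder model; any nn.Module follows the same pattern.
model = nn.Sequential(nn.Linear(2048, 751))

if torch.cuda.is_available():
    model = model.cuda()
    # Wrap with DataParallel so each batch is split across all GPUs
    # that CUDA_VISIBLE_DEVICES exposes to this process.
    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model)

# Training proceeds as usual; the wrapper is transparent to the caller.
inputs = torch.randn(32, 2048)
if torch.cuda.is_available():
    inputs = inputs.cuda()
outputs = model(inputs)  # forward pass runs across all visible GPUs
```

Note that inside the process, the GPUs selected by CUDA_VISIBLE_DEVICES=1,2,4 are re-numbered as cuda:0, cuda:1, cuda:2.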
I already removed the GPU-related code in the training script, but it didn't work; training still runs on one GPU. Do you have any suggestions?
Could you please report the error log?
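In the meantime, one generic check worth running (plain PyTorch, nothing repo-specific) is to confirm that the process actually sees more than one GPU; a misspelled variable name (e.g., CUDE_VISIBLE_DEVICES) has no effect, so CUDA falls back to its default device set:

```python
import torch

# How many GPUs does this process actually see?
# With CUDA_VISIBLE_DEVICES=1,2,4 this should print 3.
print(torch.cuda.device_count())

for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))
```

Also note that removing the GPU-selection code alone is not enough: the model still needs to be wrapped in nn.DataParallel (as in the sketch above) for the extra GPUs to be used.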