Wanggcong / Spatial-Temporal-Re-identification

[AAAI 2019] Spatial Temporal Re-identification
MIT License
384 stars 77 forks

Doesn't seem to be able to train using multiple GPUs. #2

Open haochange opened 5 years ago

Wanggcong commented 5 years ago

1) As shown in this repo, a small batch size (e.g., 32) can also achieve competitive results. One GPU with 5 GB of memory may be enough.

2) If you want to train using multiple GPUs, you can remove the GPU-related code in the training script and prepend CUDA_VISIBLE_DEVICES=1,2,4 (where 1,2,4 are your GPU IDs) to the training command. For example, if your script is named train.py, you can run: CUDA_VISIBLE_DEVICES=1,2,4 python train.py --train_all --name xxx...
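The effect of that environment variable can be sketched without a GPU at hand. This is a hedged illustration, not code from this repo: CUDA_VISIBLE_DEVICES restricts which physical GPUs the process can see, and the framework renumbers the visible ones from 0 (so in PyTorch they would appear as cuda:0, cuda:1, cuda:2).

```python
import os

# Illustrative sketch (not from the repo): expose only physical GPUs 1, 2, 4.
os.environ["CUDA_VISIBLE_DEVICES"] = "1,2,4"

visible = os.environ["CUDA_VISIBLE_DEVICES"].split(",")
# Inside the process, these devices are renumbered starting from 0,
# e.g. cuda:0 -> physical GPU 1, cuda:1 -> physical GPU 2, cuda:2 -> physical GPU 4.
logical_to_physical = {i: gpu for i, gpu in enumerate(visible)}
print(logical_to_physical)  # {0: '1', 1: '2', 2: '4'}
```

Note that the variable must be set before the process starts (or before the CUDA runtime is initialized), which is why it is prepended to the training command.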

hensonwells commented 5 years ago

I already removed the GPU-related code from the training script, but it didn't work; training still runs on one GPU. Do you have any suggestions?

Wanggcong commented 5 years ago

Could you please report the error log?