Huzhen757 opened this issue 3 years ago

Hello, I want to use the train_net script under the tools folder to train the yolof-res101-dc5-1x version of the network, but the first card of my group's server is occupied by others, so I want to train on the other cards. I did not find any statement for choosing GPU ids in the setup script, so I changed the num-gpus, num-machines, and machine-rank parameters all to 1, but training still runs on GPU 0. How can I solve this? Thanks!
You can specify GPU ids with CUDA_VISIBLE_DEVICES. For example, CUDA_VISIBLE_DEVICES=4,5,6,7 pods_train --num-gpus 4 will use the last 4 GPUs for training. You may need to adjust the warmup iterations and warmup factor when you use fewer GPUs for training.
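If you would rather select GPUs from inside Python than on the command line, a minimal sketch (assumptions: it must run before torch initializes CUDA, and it is safest to keep the list free of spaces):

```python
# Must run before the first CUDA call, e.g. before any torch.cuda use;
# note there is no space after the comma in the device list.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

import torch
print(torch.cuda.device_count())  # should now report 2 visible devices
```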
I added the statement os.environ['CUDA_VISIBLE_DEVICES'] = '0, 1' in the train_net script. When I run the train_net script for training, it reports an error: Default process group is not initialized. How can I solve this? Also, the default batch size is 4; I am training on two 3090s with 24 GB of memory each. How do I change the batch size?
Oh, I see that I need to modify the IMS_PER_BATCH and IMS_PER_DEVICE parameters in the config script to change the batch size. But for training on two 3090 cards, what should I change the WARMUP_FACTOR and WARMUP_ITERS parameters to?
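For the batch-size half of the question, a minimal sketch of how the two keys relate (the key names are from this thread, the numbers are placeholders; note the assertion that appears later in this thread, which expects the config to keep its 8-GPU ratio):

```python
# General relation between the two batch keys (placeholder values):
# IMS_PER_BATCH is the total batch size, IMS_PER_DEVICE the per-GPU share.
NUM_GPUS = 2
IMS_PER_DEVICE = 4                         # images per GPU; lower it if 24 GB is tight
IMS_PER_BATCH = IMS_PER_DEVICE * NUM_GPUS  # = 8 for this 2-GPU example
```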
When you use two GPUs, the error Default process group is not initialized should not show up.

For changing WARMUP_FACTOR and WARMUP_ITERS:

WARMUP_ITERS = 1500 * 8 / NUM_GPUS
WARMUP_FACTOR = 1. / WARMUP_ITERS
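Plugging the 2-GPU case into that rule, just to make the arithmetic concrete:

```python
# Worked example of the scaling rule above for NUM_GPUS = 2.
NUM_GPUS = 2
WARMUP_ITERS = int(1500 * 8 / NUM_GPUS)  # = 6000
WARMUP_FACTOR = 1.0 / WARMUP_ITERS       # = 1/6000, about 1.67e-4
```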
I have now modified the corresponding parameters in the config script, but running the train_net script still reports an error: Default process group is not initialized
Traceback (most recent call last):
File "train_net.py", line 106, in
Could you provide more details about your command for training?
I am using the train_net script under the tools folder for training. Some parameters in the config script are adjusted, including the IMS_PER_BATCH, IMS_PER_DEVICE, WARMUP_FACTOR, and WARMUP_ITERS parameters. I also added an extra statement in the train_net script: os.environ['CUDA_VISIBLE_DEVICES'] = '0, 1', and updated the dataset path in the base_dataset script. The other default parameters and hyper-parameters are unchanged.
You need to add --num-gpus to your command when you train yolof. BTW, we recommend using pods_train as given in the README.
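For context on the earlier error: the launcher is what initializes torch.distributed, so running train_net.py without it leaves no process group. A minimal sketch of the usual launcher pattern (simplified; not the exact cvpods source):

```python
# Simplified version of what a distributed launcher does for each worker;
# without init_process_group, any collective op raises
# "Default process group is not initialized".
import torch.distributed as dist

def worker(rank: int, world_size: int, dist_url: str) -> None:
    dist.init_process_group(
        backend="nccl",        # NCCL backend for multi-GPU training
        init_method=dist_url,  # e.g. "tcp://127.0.0.1:50147"
        world_size=world_size,
        rank=rank,
    )
    # ... training loop runs here ...
```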
Now there is a new error with the dist_url parameter: cvpods.engine.launch ERROR: Process group URL: tcp://127.0.0.1:50147 RuntimeError: Address already in use
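That error usually means another process, often a stale worker from a crashed run, still holds the rendezvous port. A minimal way to pick a free one (assumption: your entry point lets you pass the resulting URL through to the launcher):

```python
# Ask the OS for an unused TCP port, then build the rendezvous URL from it.
import socket

def find_free_port() -> int:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))  # port 0 means "any free port"
        return s.getsockname()[1]

dist_url = f"tcp://127.0.0.1:{find_free_port()}"
```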
Ah... your code is really too hard to run...
Why not just follow the steps in the README? It should work well.
Using the method in the README to train, I can only change the number of GPUs; it can't select which GPU ids to train on at all.
It can... I gave an example above.
OK, I see. Training with 2 GPUs, it still reports an error: assert base_world_size == 8, "IMS_PER_BATCH/DEVICE in config file is used for 8 GPUs" AssertionError: IMS_PER_BATCH/DEVICE in config file is used for 8 GPUs
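A plausible reading of that assertion, reconstructed from the error message alone and not from the cvpods source (so treat the details as assumptions): the launcher derives the world size the config was written for and rescales the schedule itself, so the config should keep its original 8-GPU batch values:

```python
# Reconstruction of the failing check (illustrative values, not YOLOF's
# actual config): the ratio of the two batch keys must come out to 8,
# i.e. the config keeps its 8-GPU baseline and cvpods rescales at launch.
IMS_PER_BATCH = 64   # total batch size the config was written for (8 GPUs)
IMS_PER_DEVICE = 8   # per-GPU batch size in the same config
base_world_size = IMS_PER_BATCH // IMS_PER_DEVICE  # = 8
assert base_world_size == 8, "IMS_PER_BATCH/DEVICE in config file is used for 8 GPUs"
```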
The number of GPUs required by your code is too large. My team only has 4 GPUs per machine; I don't think I can train... sigh...
I am using 4 GPUs for training in the way you provided, like this: CUDA_VISIBLE_DEVICES=0,1,2,3 pods_train --num-gpus 4
But it still reports an error: RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1607370156314/work/torch/lib/c10d/ProcessGroupNCCL.cpp:784, invalid usage, NCCL version 2.7.8
How can I solve it? Thanks!
Many reasons can produce this error. You can refer to this solution and have a try.
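Before chasing a specific fix, it can help to surface NCCL's own diagnostics first; these are standard NCCL environment variables, not cvpods-specific:

```python
# Enable NCCL's internal logging so the real cause of "invalid usage"
# shows up in the worker output; set these before launching training.
import os
os.environ["NCCL_DEBUG"] = "INFO"        # print NCCL diagnostics
os.environ["NCCL_DEBUG_SUBSYS"] = "ALL"  # optional: per-subsystem detail
```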
OK, I'll try and see if I can work it out. Thanks!
This code is really too hard to run.
Yes, it's hard to run. It is implemented on top of the cvpods library, so you have to install and build that library, and then compile again inside this repo's source. And it needs at least four GPUs to run, which is very demanding on hardware... I tried running it with four 2080 Ti cards and it still failed with the error above. Hard to pin down; I don't want to train this code anymore. Actually, the encoder part of this paper is worth studying, but I can't be bothered to spend time on the rest... I still have my own experiments to run, sigh...