shiqi1994 opened this issue 5 years ago
You can change the value of split_batch in the JSON options file to train on one GPU.
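For example, the relevant entry in the options JSON would look like this (a minimal sketch; only the split_batch key comes from this thread, the surrounding structure of the file is assumed):

    "split_batch": 1,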
Could you please give me more details about how to train on a single GPU? I have already tried changing split_batch to 2, 3, ..., but it still takes up all of my GPUs while training.
Did you try to set gpu_ids: [0] and split_batch to 1?
Yes, I did try this configuration. I also set gpu_ids: [0,1] and split_batch to 1, but it still uses all three of my GPUs.
@ShunLiu1992 In networks/__init__.py, change line 134 to:
if torch.cuda.is_available():
    # net = nn.DataParallel(net).cuda()  # original line: DataParallel replicates the model on every visible GPU
    net = net.cuda()  # move the model to a single GPU instead
return net
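For context, nn.DataParallel copies the model onto every GPU that CUDA can see, which is why changing gpu_ids alone may not help while the wrapper is still in place. Another option is to hide the other GPUs before CUDA initializes. A minimal sketch, assuming it is placed at the very top of train.py (the placement is my suggestion, not from the repo):

import os
# Restrict PyTorch to GPU 0; must run before the first CUDA call.
os.environ['CUDA_VISIBLE_DEVICES'] = '0'

import torch

The same restriction can be applied from the shell when launching training, e.g. CUDA_VISIBLE_DEVICES=0 python train.py -opt options/train/train_SRFBN_example.json (the option path is assumed from the file name mentioned in this thread).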
I modified the parameter in the file named 'train_SRFBN_example.json':
"gpu_ids": [0],
but when I start the training process, it still uses all of my GPUs. How can I deal with this? Your early reply will be appreciated. :)