Paper99 / SRFBN_CVPR19

PyTorch code for our paper "Feedback Network for Image Super-Resolution" (CVPR 2019)
MIT License

How to use single GPU to train SRFBN? #20

Open shiqi1994 opened 5 years ago

shiqi1994 commented 5 years ago

I modified the parameter in the file 'train_SRFBN_example.json' to "gpu_ids": [0], but when I start the training process it still uses all of my GPUs. How can I deal with this? Your early reply will be appreciated. :)

Paper99 commented 5 years ago

You can change the value of split_batch in the json file to train on one GPU.
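
If it helps, here is a minimal sketch that applies both settings programmatically. The path to the options file and the assumption that it is plain JSON (without comments) are mine; the two keys, gpu_ids and split_batch, are the ones discussed in this thread.

```python
import json

cfg_path = "options/train/train_SRFBN_example.json"  # path is an assumption

with open(cfg_path) as f:
    opt = json.load(f)          # assumes the file is plain JSON without comments

opt["gpu_ids"] = [0]            # the setting from the original question
opt["split_batch"] = 1          # the value suggested above for a single GPU

with open(cfg_path, "w") as f:
    json.dump(opt, f, indent=4)
```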

ShunLiu1992 commented 5 years ago

Could you please give me more details about how to train on a single GPU? I have already tried changing split_batch to 2, 3, ..., but it still takes up all of my GPUs while training.

shiqi1994 commented 5 years ago

> Could you please give me more details about how to train on a single GPU? I have already tried changing split_batch to 2, 3, ..., but it still takes up all of my GPUs while training.

Did you try setting gpu_ids: [0] and split_batch to 1?

ShunLiu1992 commented 5 years ago

> > Could you please give me more details about how to train on a single GPU? I have already tried changing split_batch to 2, 3, ..., but it still takes up all of my GPUs while training.
>
> Did you try setting gpu_ids: [0] and split_batch to 1?

Yes, I did try this configuration... I also set gpu_ids: [0,1] and split_batch to 1, but it still uses all three of my GPUs...
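
For anyone still stuck, a generic PyTorch/CUDA workaround (not specific to this repository) is to hide the other GPUs from the process before CUDA is initialized, so that even nn.DataParallel has only one device to work with. A minimal sketch, assuming you add it at the very top of whichever training script you launch:

```python
import os

# Expose only physical GPU 0 to this process. This must happen before
# PyTorch initializes CUDA, so place it at the very top of the training
# script, before any torch.cuda call.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch

print(torch.cuda.device_count())  # -> 1: nn.DataParallel now has only one device to use
```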

yichuan123 commented 4 years ago

@ShunLiu1992 Change line 134 in networks/__init__.py to:

```python
if torch.cuda.is_available():
    # net = nn.DataParallel(net).cuda()   # original: wraps the model for multi-GPU use
    net = net.cuda()                      # modified: move the model to a single GPU

return net
```
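
For context on why this change works: nn.DataParallel replicates the wrapped module across every GPU that is visible to the process, which is why all cards get used in this thread regardless of the options file, so dropping the wrapper (or hiding the extra devices as sketched above) is what actually keeps training on one GPU.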