Open ShenZheng2000 opened 1 year ago
Hi,
Hope it is not too late. I have adapted the code so it now supports multi-GPU training. You can run the method with
python train.py --dataroot $dataset_path --name $model_name --gpu 0,1,2,3 --batch_size 4
You can also pass --var_all
or omit it. With --var_all, the loss is computed across images; by default, it is computed within a single image.
Best
Hello, authors! Thanks for your excellent work.
I have trouble with multi-GPU training. My command line looks like this:
And the error is below:
I print the values below for debugging.
which gives me
Since
len(log_prob_a)
is 0, log_prob_a ends up as an empty list during multi-GPU training. Did you encounter this issue when training your models? How can I solve it?
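One possible cause (an assumption on my part, not confirmed from this repo's code): torch.nn.DataParallel replicates the module onto each GPU for every forward pass, so anything appended to a plain Python-list attribute inside forward() lands on the replicas, while the original module's list stays empty. A minimal pure-Python sketch of that replication effect, with TinyModule as a hypothetical stand-in for the real model:

```python
import copy

class TinyModule:
    """Stand-in for an nn.Module that accumulates log-probs in a Python list."""
    def __init__(self):
        self.log_prob_a = []          # per-sample log-probs collected in forward()

    def forward(self, x):
        self.log_prob_a.append(x)     # mutation happens on *this* object only

# DataParallel-style replication: each GPU gets its own copy of the module.
master = TinyModule()
replicas = [copy.deepcopy(master) for _ in range(4)]

# The forward pass runs on the replicas, not on the master module.
for r, x in zip(replicas, [0.1, 0.2, 0.3, 0.4]):
    r.forward(x)

print(len(master.log_prob_a))        # 0 -- the master's list was never touched
print(len(replicas[0].log_prob_a))   # 1 -- each replica kept its own appends
```

If this is what is happening, the usual fixes are to return the per-sample values from forward() as a tensor (so DataParallel gathers them) or to compute the loss inside forward() on each replica.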