Hi guys, thank you for sharing this fantastic project! I'm using an AWS g3 instance for training with 2 x Tesla M60 cards, but it looks like training is not utilizing both cards at the same time.
That makes sense, since the code is written for a single GPU. Setting more than one visible device via CUDA_VISIBLE_DEVICES will have no effect.
Here is the command I'm running:
CUDA_VISIBLE_DEVICES=0,1 python main.py --model_name=model_van-gogh --phase=inference --image_size=1280 --ii_dir=./samples/ --save_dir=./output/
One of the cards is at 100% utilization while the other remains idle.
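Since the code only ever uses one device, a common workaround (a general CUDA practice, not something this project documents) is to launch two independent runs, each pinned to a different card via CUDA_VISIBLE_DEVICES, and split the input images between them. One caveat worth noting: the variable must be set before the framework initializes CUDA, otherwise it is silently ignored. A minimal sketch of setting it from inside Python (the `"0"` here is just an example device index):

```python
import os

# Pin this process to GPU 0 only. This must happen BEFORE importing or
# initializing the deep-learning framework, because CUDA_VISIBLE_DEVICES
# is read once at CUDA initialization time and ignored afterwards.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# A second, independent process would set "1" instead, so both
# M60 cards stay busy on disjoint halves of the input data.
print(os.environ["CUDA_VISIBLE_DEVICES"])
```

Equivalently, you can set the variable on the shell command line (`CUDA_VISIBLE_DEVICES=0 python main.py ...` in one terminal, `CUDA_VISIBLE_DEVICES=1` in another), which avoids touching the code at all.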