Closed jia2lin3yuan1 closed 7 years ago
You can put the following inside your train.py, anywhere before TensorFlow detects the GPUs. The value lists the indices of the GPUs you want this program to use:

```python
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'
```
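A minimal sketch of the ordering this relies on: the variable must be set before TensorFlow initializes CUDA (in practice, before `import tensorflow` or the first session is created), and inside the process the visible GPUs are renumbered starting from 0:

```python
import os

# Restrict this process to physical GPU 1 only. This must happen
# before TensorFlow initializes CUDA (i.e., before `import tensorflow`
# or before the first session is created), or it has no effect.
os.environ['CUDA_VISIBLE_DEVICES'] = '1'

# Within the process, the single visible GPU is renumbered to 0,
# so code that refers to '/gpu:0' will now run on physical GPU 1.
print(os.environ['CUDA_VISIBLE_DEVICES'])
```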
Thank you very much for the suggestion. It works for me.
By using `with tf.device('/gpu:1')`, I can run inference.py on GPU 1, but training.py fails when run on GPU 1. Are there any solutions for this? Thanks!
The following is what I did in inference.py. (The first two lines are supposed to be formatted as code; I don't know why the formatting breaks when displayed here. Help with that formatting problem would also be appreciated.)
```python
...
with tf.device('/gpu:1'):
    net = DeepLabResNetModel({'data': tf.expand_dims(img, dim=0)}, is_training=False)
...
```
and for the initialization of `config`, I added:

```python
config.allow_soft_placement = True
```
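For reference, a sketch of how that session config might be assembled in TF 1.x (this is an assumption about the surrounding setup, not code from the repo). `allow_soft_placement` lets TensorFlow fall back to the CPU for ops that have no GPU kernel, which is a common reason a hard `/gpu:1` pin fails during training even though inference works:

```python
import tensorflow as tf

config = tf.ConfigProto()
# Fall back to a supported device (usually CPU) for any op
# that has no kernel for the pinned device.
config.allow_soft_placement = True
# Optional: allocate GPU memory on demand rather than all at once.
config.gpu_options.allow_growth = True

sess = tf.Session(config=config)
```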