DrSleep / tensorflow-deeplab-resnet

DeepLab-ResNet rebuilt in TensorFlow
MIT License

Running 'training.py' on GPU1 #64

Closed · jia2lin3yuan1 closed this issue 7 years ago

jia2lin3yuan1 commented 7 years ago

Using `with tf.device('/gpu:1')`, I was able to run inference.py on GPU1, but training.py still fails to run on GPU1. Are there any solutions for this? Thanks!

The following is what I did in inference.py:

```python
...
with tf.device('/gpu:1'):
    net = DeepLabResNetModel({'data': tf.expand_dims(img, dim=0)}, is_training=False)

    # Which variables to load.
    restore_var = tf.global_variables()

    # Predictions.
    raw_output = net.layers['fc1_voc12']
    raw_output_up = tf.image.resize_bilinear(raw_output, tf.shape(img)[0:2,])
    raw_output_up = tf.argmax(raw_output_up, dimension=3)
    pred = tf.expand_dims(raw_output_up, dim=3)

    # Predictions of direction.
    raw_direct = net.layers['fc1_dir']
    raw_direct_up = tf.image.resize_bilinear(raw_direct, tf.shape(img)[0:2,])
    raw_direct_up = tf.argmax(raw_direct_up, dimension=3)
    pred_dir = tf.expand_dims(raw_direct_up, dim=3)
...
```

And when initializing the session `config`, I added `config.allow_soft_placement = True`.
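For reference, a minimal sketch of that session setup, assuming the standard TF 1.x `tf.ConfigProto` / `tf.Session` API used elsewhere in this repo (the `allow_growth` line is an optional extra, not part of the original post):

```python
import tensorflow as tf

# Let TensorFlow fall back to another device when an op has no kernel
# for the one requested (e.g. CPU-only ops pinned under /gpu:1).
config = tf.ConfigProto(allow_soft_placement=True)
config.gpu_options.allow_growth = True  # optional: allocate GPU memory on demand

sess = tf.Session(config=config)
```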

zhixy commented 7 years ago

You can put the following anywhere in your train.py before the GPUs are detected by TensorFlow. The number specifies which GPU(s) you want this program to use: `os.environ['CUDA_VISIBLE_DEVICES'] = '1'`
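For example, a minimal sketch of the top of train.py (assuming nothing else imports TensorFlow earlier; the key point is that the variable is set before the first `import tensorflow`):

```python
import os

# Must run before TensorFlow initializes CUDA, i.e. before `import tensorflow`.
# '1' exposes only the second physical GPU; TensorFlow then sees it as /gpu:0.
os.environ['CUDA_VISIBLE_DEVICES'] = '1'

import tensorflow as tf
```

With the devices masked this way, any `tf.device` blocks in the script can simply refer to `/gpu:0`, since the selected card is the only one TensorFlow can see.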

jia2lin3yuan1 commented 7 years ago

Thank you very much for the suggestion. It works for me.