Open marcnol opened 3 weeks ago
Apparently, by doing

$ export CUDA_VISIBLE_DEVICES=0

or

$ export CUDA_VISIBLE_DEVICES=1

and then using

tf.device("/gpu:0")

inside the Python code, one can direct different jobs to different GPUs. Whichever physical GPU is exposed through CUDA_VISIBLE_DEVICES appears inside the process as /gpu:0, so both jobs can use the same device string.
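Concretely, launching two jobs pinned to different GPUs could look like this (a sketch; `train.py` is a hypothetical stand-in for the actual job script):

```shell
# Each job inherits its own CUDA_VISIBLE_DEVICES, so it only sees the
# named physical GPU and addresses it as "/gpu:0" inside Python.
CUDA_VISIBLE_DEVICES=0 python train.py &   # hypothetical script, pinned to GPU 0
CUDA_VISIBLE_DEVICES=1 python train.py &   # same script, pinned to GPU 1
wait
```

Setting the variable per-command like this avoids polluting the shell's environment for later jobs.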
Alternatively, to pick the GPU with the most free memory automatically:
export CUDA_VISIBLE_DEVICES=$(nvidia-smi --query-gpu=memory.free,index --format=csv,nounits,noheader | sort -nr | head -1 | awk '{ print $NF }')
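The selection logic of that one-liner can also be done in Python before importing TensorFlow (a sketch; the `sample` readings below are made up to stand in for real nvidia-smi output):

```python
import os

def pick_freest_gpu(smi_output: str) -> str:
    """Return the index of the GPU with the most free memory.

    `smi_output` is the text produced by:
      nvidia-smi --query-gpu=memory.free,index --format=csv,nounits,noheader
    i.e. one "free_mib, index" line per GPU.
    """
    best = max(
        (line.split(",") for line in smi_output.strip().splitlines()),
        key=lambda fields: int(fields[0]),  # compare by free memory (MiB)
    )
    return best[1].strip()

# Made-up readings for a two-GPU machine: GPU 1 has more free memory.
sample = "2048, 0\n10240, 1\n"
os.environ["CUDA_VISIBLE_DEVICES"] = pick_freest_gpu(sample)
print(os.environ["CUDA_VISIBLE_DEVICES"])  # → 1
```

Note that CUDA_VISIBLE_DEVICES must be set before TensorFlow initializes CUDA, so this belongs at the very top of the script.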