Closed Uzair1947 closed 3 years ago
Ok, thanks. How can I shift the load to the CPU?
import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""
# or equivalently:
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
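One caveat: `CUDA_VISIBLE_DEVICES` only takes effect if it is set before TensorFlow is imported for the first time, since device discovery happens at import/initialization time. A minimal sketch of the required ordering:

```python
import os

# Hide all GPUs from TensorFlow; this must run BEFORE `import tensorflow`
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

import tensorflow as tf

# With no visible CUDA devices, TensorFlow falls back to the CPU
print(tf.config.list_physical_devices('GPU'))  # expected: []
```

If TensorFlow was already imported earlier in the process, setting the variable has no effect and you need to restart the interpreter.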
I could be wrong but I think tensorflow by default allocates all available GPU memory.
This worked for me:
import tensorflow as tf
tf.config.set_visible_devices([], 'GPU')
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
Thanks for this. I've seen other projects where TensorFlow allocates all available memory as well; however, when running your code, deepface ended up not using the GPU at all. I made a small tweak to your code to get deepface to use the GPU without allocating all available memory:
import tensorflow as tf
gpus = tf.config.experimental.list_physical_devices('GPU')
tf.config.set_visible_devices(gpus, 'GPU')
for gpu in gpus:
    print(gpu)
    tf.config.experimental.set_memory_growth(gpu, True)
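If you would rather set a hard upper bound on VRAM than let the allocation grow on demand, TF 2.x also supports a fixed per-GPU memory cap via a logical device configuration. A sketch, assuming an arbitrary 1024 MB limit (tune it to your card):

```python
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    # Cap this GPU at 1024 MB instead of letting TF grab all available memory.
    # Like memory growth, this must be configured before the GPU is initialized.
    tf.config.set_logical_device_configuration(
        gpu, [tf.config.LogicalDeviceConfiguration(memory_limit=1024)]
    )
```

Note that memory growth and a fixed memory limit are alternatives; pick one per device.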
The analyze method in particular requires many complex models; unfortunately, there is nothing more to be done there. I recommend you use the CPU.