Found a temporary solution! I added this code to convert.py:
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
config = tf.ConfigProto()
config.gpu_options.allow_growth = True # dynamically grow the memory used on the GPU
config.log_device_placement = True # to log device placement (on which device the operation ran)
# (nothing gets printed in Jupyter, only if you run it standalone)
sess = tf.Session(config=config)
set_session(sess) # set this TensorFlow session as the default session for Keras
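For reference, TF 1.x can also reserve a fixed fraction of GPU memory up front instead of growing it on demand; here is a minimal sketch of that alternative (the 0.8 fraction is only an illustrative value, not something taken from this thread):
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.8 # cap how much GPU memory TF pre-allocates (illustrative value)
set_session(tf.Session(config=config)) # register the configured session as the default for Keras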
tf 1.13 rc1 is out, so if you don't mind doing some testing, you could try with TF 1.13.
FWIW I run tf 1.12 with CUDA 10 and cuDNN 7.4.2 on a GTX 1080 and I have no issues. Do you mean RTX 2070?
I have the TF 1.13 nightly build in another environment (installed with pip) and I see similar problems when running TF sample programs that use convolution algorithms. I will try with 1.13 rc1, but I don't have much hope.
Yes, I meant RTX 2070. I think it's a problem with the new architecture of all the RTX cards (I don't know if the problem is CUDA 10, cuDNN, or something else). I read about this problem somewhere else, and that's why I found a workaround using the allow_growth = True GPU option.
I am trying to run faceswap with TF v1.12 compiled with CUDA 10 & cuDNN 7.4.2. I am using a GeForce RTX 2070 GPU.
At first I was not able to train (cuDNN error), but after some investigation I discovered that training worked OK with the --ag flag (set_tf_allow_growth).
Now the problem is that I cannot use convert.py. I get a similar error, but this function does not have the --ag flag. This is the error I get:
Looks like a Keras problem, maybe... any ideas?
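For anyone hitting the same wall, here is a hypothetical sketch of how an allow-growth switch could be wired into a conversion script with argparse; the flag name and wiring are assumptions for illustration only, not faceswap's actual implementation:
import argparse
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session

parser = argparse.ArgumentParser()
parser.add_argument("-ag", "--allow-growth", action="store_true",
                    help="allocate GPU memory on demand instead of all at once") # hypothetical flag
args, _ = parser.parse_known_args()

if args.allow_growth:
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True # same workaround used for training above
    set_session(tf.Session(config=config)) # make this session the default for Keras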