Hello, I just built an LSTM model in TensorFlow. When I call .fit() on the model I get this error:

Could not find device for node: {{node CudnnRNN}} = CudnnRNN[T=DT_FLOAT, direction="unidirectional", dropout=0, input_mode="linear_input", is_training=true, rnn_mode="lstm", seed=0, seed2=0]
All kernels registered for op CudnnRNN:
[Op:CudnnRNN]
Call arguments received by layer "lstm_6" (type LSTM):
• inputs=tf.Tensor(shape=(32, 100, 600), dtype=float32)
• mask=None
• training=True
• initial_state=None
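For reproduction purposes, here is a minimal sketch of the kind of model and .fit() call that hits this code path. The layer stack and unit count are illustrative assumptions on my part; only the input shape (32, 100, 600) is taken from the error above:

    import numpy as np
    import tensorflow as tf

    # Minimal sketch (assumed layer stack): one LSTM layer over float32 sequences
    # shaped like the tensor reported in the error, (batch=32, timesteps=100, features=600).
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(128, input_shape=(100, 600)),  # 128 units is an arbitrary choice
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

    # Dummy data just to trigger the same CudnnRNN kernel lookup during training.
    x = np.random.rand(32, 100, 600).astype("float32")
    y = np.random.rand(32, 1).astype("float32")
    model.fit(x, y, batch_size=32, epochs=1)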
This code was working fine in another virtual environment with tensorflow-gpu rather than tensorflow-directml.
This environment works fine on another problem (image classification): the time per batch drops from 22 min to 5 min and I can see that the GPU is fully loaded, so the installation and the plugin work correctly.
But with this LSTM model, the same tensorflow-directml environment gives me the error above.
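For what it's worth, this is the quick check I use to confirm that the DirectML plugin actually exposes a GPU device in that environment (a minimal sketch; the exact device name reported will differ between tensorflow-gpu and tensorflow-directml):

    import tensorflow as tf

    # List every device TensorFlow can see; with the tensorflow-directml plugin
    # installed correctly, at least one device of type "GPU" should appear here.
    print(tf.config.list_physical_devices())
    print(tf.config.list_physical_devices("GPU"))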
I have:
CUDA driver version: 12.0 (reported by nvidia-smi)
CUDA toolkit version: 11.8 (reported by nvcc --version)