Closed: Glooring closed this issue 2 years ago.
Hello,
Yes, it seems so; you can simply use LSTM instead of CuDNNLSTM everywhere. The two layers compute the same thing; the CuDNN kernel just trains faster than the regular TensorFlow kernel.
Since TensorFlow 2.0, the keras.layers.CuDNNLSTM layer has been deprecated, and the LSTM layer automatically uses the CuDNN kernel when an NVIDIA GPU is available (and cuDNN is installed). I found this information here: https://keras.io/guides/working_with_rnns/#performance-optimization-and-cudnn-kernels
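The swap would look roughly like this (a minimal sketch; the layer size and input shape are placeholders, not taken from the notebook):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    # TF 1.x:  tf.keras.layers.CuDNNLSTM(64)
    # TF 2.x:  a plain LSTM dispatches to the cuDNN kernel automatically
    # when a GPU is visible, cuDNN is installed, and the layer keeps its
    # cuDNN-compatible defaults (activation="tanh",
    # recurrent_activation="sigmoid", recurrent_dropout=0, unroll=False,
    # use_bias=True).
    tf.keras.layers.LSTM(64, input_shape=(100, 32)),  # placeholder shape
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```

Note that if any of those defaults are changed (e.g., recurrent_dropout > 0), the layer silently falls back to the generic kernel, which is slower but produces the same results.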
Thank you very much!
I'm running your code in '2 Training, Validation, Testing.ipynb', but I cannot import CuDNNLSTM.
When I run the notebook's import cell, I get an import error for CuDNNLSTM.
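The failing line is presumably along these lines (a hypothetical reconstruction, not the notebook's exact cell):

```python
# Hypothetical reconstruction of the failing import: CuDNNLSTM was removed
# from the public tf.keras API in TensorFlow 2.x, so this raises an
# ImportError there.
from tensorflow.keras.layers import CuDNNLSTM
```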
Should I use LSTM instead of CuDNNLSTM? Thank you!