code-terminator / invariant_rationalization

TensorFlow implementation of Invariant Rationalization

tensorflow version is wrong, errors in the code #3

Open ghost opened 4 years ago

ghost commented 4 years ago

Hi, I am trying to run the code with the specified versions and got this error. Thanks for your help.

WARNING:tensorflow: The TensorFlow contrib module will not be included in TensorFlow 2.0. For more information, please see:

Traceback (most recent call last):
  File "imdb_demo.py", line 162, in <module>
    global_step = train_imdb(D_tr, inv_rnn, opts, global_step, args)
  File "/remote/svm/user.active/julia/dev/invariant_rationalization/train.py", line 30, in train_imdb
    inputs, masks, envs)
  File "/user/julia/libs/anaconda3/envs/irm/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 679, in __call__
    outputs = self.call(inputs, *args, **kwargs)
  File "/remote/svm/user.active/julia/dev/invariant_rationalization/model.py", line 130, in call
    gen_outputs = self.generator(gen_embeddings)
  File "/user/julia/libs/anaconda3/envs/irm/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 679, in __call__
    outputs = self.call(inputs, *args, **kwargs)
  File "/remote/.svm/user.active/julia/dev/invariant_rationalization/model.py", line 59, in call
    h = self.rnn(x)
  File "/user/julia/libs/anaconda3/envs/irm/lib/python3.6/site-packages/tensorflow/python/keras/layers/wrappers.py", line 533, in __call__
    return super(Bidirectional, self).__call__(inputs, **kwargs)
  File "/user/julia/libs/anaconda3/envs/irm/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 679, in __call__
    outputs = self.call(inputs, *args, **kwargs)
  File "//user/julia/libs/anaconda3/envs/irm/lib/python3.6/site-packages/tensorflow/python/keras/layers/wrappers.py", line 633, in call
    y = self.forward_layer.call(inputs, **kwargs)
  File "/user/julia/libs/anaconda3/envs/irm/lib/python3.6/site-packages/tensorflow/python/keras/layers/cudnn_recurrent.py", line 110, in call
    output, states = self._process_batch(inputs, initial_state)
  File "/user/julia/libs/anaconda3/envs/irm/lib/python3.6/site-packages/tensorflow/python/keras/layers/cudnn_recurrent.py", line 302, in _process_batch
    rnn_mode='gru')
  File "/user/julia/libs/anaconda3/envs/irm/lib/python3.6/site-packages/tensorflow/python/ops/gen_cudnn_rnn_ops.py", line 109, in cudnn_rnn
    ctx=_ctx)
  File "/user/julia/libs/anaconda3/envs/irm/lib/python3.6/site-packages/tensorflow/python/ops/gen_cudnn_rnn_ops.py", line 197, in cudnn_rnn_eager_fallback
    attrs=_attrs, ctx=_ctx, name=name)
  File "/user/julia/libs/anaconda3/envs/irm/lib/python3.6/site-packages/tensorflow/python/eager/execute.py", line 67, in quick_execute
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InternalError: Could not find valid device for node.
Node: {{node CudnnRNN}}
All kernels registered for op CudnnRNN:
  device='GPU'; T in [DT_DOUBLE]
  device='GPU'; T in [DT_FLOAT]
  device='GPU'; T in [DT_HALF]
[Op:CudnnRNN]
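The last lines of the trace show that every kernel registered for the CudnnRNN op requires a GPU, so this failure usually means TensorFlow cannot see a CUDA device. A minimal diagnostic (a sketch for the TF 1.x setup from the README, not code from this repo):

```python
import tensorflow as tf
from tensorflow.python.client import device_lib

# Print the devices TensorFlow has registered. If no "GPU" entry appears,
# the GPU-only CudnnRNN kernel used by the CuDNNGRU layer cannot run.
print(device_lib.list_local_devices())
```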

code-terminator commented 4 years ago

Hi Julia,

It seems the error comes from not having a GPU available: the CudnnRNN kernels in the trace are registered only for GPU devices.

Regarding the error you mentioned in the email, please check your TensorFlow version and the system requirements in the README.
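If no GPU is available at all, one possible workaround (not part of this repository; the layer size and names below are purely illustrative) is to fall back from the GPU-only CuDNNGRU to the generic GRU layer, which runs on CPU and is weight-compatible with CuDNNGRU when reset_after=True and the recurrent activation is sigmoid:

```python
import tensorflow as tf

# Hypothetical sketch for TF 1.x: choose a CPU-capable GRU when no GPU is visible.
# The hidden size (256) and Bidirectional wrapping mirror the traceback, not model.py.
if tf.test.is_gpu_available(cuda_only=True):
    gru = tf.keras.layers.CuDNNGRU(256, return_sequences=True)
else:
    gru = tf.keras.layers.GRU(256, return_sequences=True,
                              reset_after=True,
                              recurrent_activation='sigmoid')
rnn = tf.keras.layers.Bidirectional(gru)
```

Note that the plain GRU will be considerably slower than the cuDNN kernel on long IMDB reviews.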

srhthu commented 2 years ago

I have met the same issue. The reason is that the matching versions of CUDA and cuDNN are not installed. To solve it, just run:

conda install cudatoolkit=10.0
conda install cudnn
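After installing them, a quick sanity check (assuming the GPU build of TensorFlow 1.x from the README) before re-running imdb_demo.py:

```python
import tensorflow as tf

# Both should print True once cudatoolkit 10.0 and cudnn are visible to the conda environment.
print(tf.test.is_built_with_cuda())
print(tf.test.is_gpu_available(cuda_only=True))
```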