lakomi opened this issue 2 years ago (status: Open)
Hello,

I have two questions:
1. Which versions of TensorFlow, CUDA, and cuDNN did you use?
2. Do you have a PyTorch version of this code? If not, could you provide a TensorFlow 2.x version?
Hello,
- As I wrote in the README, the TensorFlow version is 1.14.0. I can't recall the exact cuDNN and CUDA versions I used when I ran the experiments, but this table lists everything you need to know about TensorFlow's version compatibility (a quick way to check your own environment is sketched below).
- Sorry, there is no PyTorch code available and, at the moment, I'm not planning to write a TF2 version of this code.
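As a quick sanity check that a local environment matches these versions, something along the following lines should work on any TF 1.x install. This is a minimal sketch, not code from this repository; the exact CUDA/cuDNN pairing still has to be read off TensorFlow's tested-configuration table.

```python
# Minimal environment check for a TF 1.x install (sketch, not repo code).
import tensorflow as tf

print("TensorFlow:", tf.__version__)              # README expects 1.14.0
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("GPU usable:", tf.test.is_gpu_available())  # exercises the CUDA/cuDNN stack
```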
Hello, have you ever run into this problem? It occurs when I execute `python train.py CONF/arch1.gin`. I'm on tensorflow 1.15.0; my tensorflow_hub is 0.11.0, which requires tensorflow >= 1.15.0.
```
Traceback (most recent call last):
  File "/home/qss/miniconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1365, in _do_call
    return fn(*args)
  File "/home/qss/miniconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1350, in _run_fn
    target_list, run_metadata)
  File "/home/qss/miniconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1443, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InternalError: 2 root error(s) found.
  (0) Internal: Blas GEMM launch failed : a.shape=(64, 128), b.shape=(128, 1280), m=64, n=1280, k=128
     [[{{node arch1.gin/G/dense/MatMul}}]]
  (1) Internal: Blas GEMM launch failed : a.shape=(64, 128), b.shape=(128, 1280), m=64, n=1280, k=128
     [[{{node arch1.gin/G/dense/MatMul}}]]
     [[arch1.gin/add_2/_25]]
0 successful operations. 0 derived errors ignored.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train.py", line 132, in

Original stack trace for 'arch1.gin/G/dense/MatMul':
  File "train.py", line 132, in
```