Closed. mahmoodul closed this issue 2 years ago.
Do you have a CUDA-capable GPU? This is required for running the code.
Hi jpsml, thanks so much for your reply. Yes, I have a CUDA-capable GPU. I verified it again with the command below:

lspci | grep -i nvidia

Here are the results:

01:00.0 VGA compatible controller: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] (rev a1)
01:00.1 Audio device: NVIDIA Corporation GP102 HDMI Audio Controller (rev a1)
Would you please help me find out why I am facing this issue? I would be thankful to you.
Here you can see that the driver is also installed.
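Note that lspci only confirms the GPU hardware is present on the PCI bus; TensorFlow additionally needs the NVIDIA kernel driver and the CUDA/cuDNN runtime libraries to be loadable. A quick check, assuming the same Python environment that runs test.py, is to ask TensorFlow directly what it can see:

```python
# Check whether the TensorFlow build used by test.py can actually use the GPU.
# If "Visible GPUs" prints an empty list, TensorFlow will fall back to CPU
# even though lspci shows a GTX 1080 Ti.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))
```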
Hi jpsml, thanks so much for your reply. I solved that problem.
Solved the problem
I am facing the following error when I run test.py as:

python3 test.py oxiod 6dofio_oxiod.hdf5 "/home/aisl/AI/6-DOF-Inertial-Odometry-master/Oxford Inertial Odometry Dataset/handheld/data3/syn/imu1.csv" "/home/aisl/AI/6-DOF-Inertial-Odometry-master/Oxford Inertial Odometry Dataset/handheld/data3/syn/vi1.csv"

2021-11-30 15:46:23.512258: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/aisl/catkin_ws/devel/lib:/opt/ros/noetic/lib
2021-11-30 15:46:23.512297: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
/home/aisl/.local/lib/python3.8/site-packages/quaternion/numba_wrapper.py:23: UserWarning:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Could not import from numba, which means that some parts of this code may run MUCH more slowly. You may wish to install numba. !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
warnings.warn(warning_text)
2021-11-30 15:46:24.556396: E tensorflow/stream_executor/cuda/cuda_driver.cc:271] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2021-11-30 15:46:24.556433: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (Z370M-S01): /proc/driver/nvidia/version does not exist
2021-11-30 15:46:24.556594: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
WARNING:tensorflow:No training configuration found in the save file, so the model was not compiled. Compile it manually.
Traceback (most recent call last):
File "test.py", line 69, in <module>
main()
File "test.py", line 34, in main
[yhat_delta_p, yhat_delta_q] = model.predict([x_gyro[0:200, :, :], x_acc[0:200, :, :]], batch_size=1, verbose=1)
File "/home/aisl/.local/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/aisl/.local/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 58, in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'CudnnRNNV2' used by {{node model_3/bidirectional_3/forward_cu_dnnlstm_3/CudnnRNNV2}} with these attrs: [dropout=0, seed=0, T=DT_FLOAT, input_mode="linear_input", direction="unidirectional", rnn_mode="lstm", is_training=true, seed2=0]
Registered devices: [CPU]
Registered kernels:
device='GPU'; T in [DT_DOUBLE]
device='GPU'; T in [DT_FLOAT]
device='GPU'; T in [DT_HALF]
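The last part of the log explains the failure: the saved model uses CuDNNLSTM layers (see the node name forward_cu_dnnlstm_3), and their CudnnRNNV2 op only has GPU kernels (the "Registered kernels" list above shows device='GPU' only), but TensorFlow registered only a CPU device because it could not load libcudart.so.11.0 and could not find the NVIDIA kernel driver (/proc/driver/nvidia/version does not exist). A minimal sketch to confirm the missing runtime, assuming CUDA 11.0 is what this TensorFlow build expects (as the log states):

```python
# Try to load the CUDA runtime that the log says is missing. If this raises
# OSError, TensorFlow falls back to CPU and the GPU-only CudnnRNNV2 kernel
# used by the CuDNNLSTM layers in the saved model is unavailable.
import ctypes

try:
    ctypes.CDLL("libcudart.so.11.0")
    print("libcudart.so.11.0 loaded successfully")
except OSError as err:
    print("libcudart.so.11.0 could not be loaded:", err)
    print("Check that the NVIDIA driver and the CUDA 11.x toolkit are installed "
          "and that the CUDA library directory is on LD_LIBRARY_PATH.")
```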