YeeHoran / Personality-disentangled-FER


cuda lib can't be used problem #1

Open YeeHoran opened 1 year ago

YeeHoran commented 1 year ago

I installed DeepFace on Ubuntu following 'https://github.com/serengil/deepface', and I carried out the commands:

However, when I run the example code:

```python
from deepface import DeepFace
result = DeepFace.verify(img1_path = "img1.jpg", img2_path = "img2.jpg")
```

I get the following output:

```
runfile('/home/yihuo/Downloads/1.DeepFaceTest/1. first test as webpage shows-20230128.py', wdir='/home/yihuo/Downloads/1.DeepFaceTest')
2023-02-12 18:54:55.786533: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F AVX512_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-02-12 18:54:56.008342: I tensorflow/core/util/port.cc:104] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2023-02-12 18:54:56.067972: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/yihuo/.local/lib/python3.9/site-packages/cv2/../../lib64:/usr/local/cuda-11.4/lib64
2023-02-12 18:54:56.067988: I tensorflow/compiler/xla/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2023-02-12 18:54:56.956543: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/yihuo/.local/lib/python3.9/site-packages/cv2/../../lib64:/usr/local/cuda-11.4/lib64
2023-02-12 18:54:56.956595: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/yihuo/.local/lib/python3.9/site-packages/cv2/../../lib64:/usr/local/cuda-11.4/lib64
2023-02-12 18:54:56.956600: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
2023-02-12 18:54:59.284805: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/yihuo/.local/lib/python3.9/site-packages/cv2/../../lib64:/usr/local/cuda-11.4/lib64
2023-02-12 18:54:59.285391: W tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:265] failed call to cuInit: UNKNOWN ERROR (303)
2023-02-12 18:54:59.285406: I tensorflow/compiler/xla/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (yi): /proc/driver/nvidia/version does not exist
2023-02-12 18:54:59.286351: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F AVX512_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
```

The warnings above indicate that the CUDA libraries can't be used.
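
The warnings point at two separate things: missing CUDA shared libraries and a kernel driver that is not running. A quick way to confirm both on this machine is a shell check like the following (the `/usr/local/cuda-11.4` path is taken from the `LD_LIBRARY_PATH` in the log above; adjust it for other installs):

```shell
# Is the CUDA runtime library TensorFlow is asking for actually on disk?
ls /usr/local/cuda-11.4/lib64/libcudart.so* 2>/dev/null || echo "libcudart not found"

# Is the NVIDIA kernel driver loaded? (The log says this file does not exist.)
cat /proc/driver/nvidia/version 2>/dev/null || echo "NVIDIA driver not running"
```

If the second check fails, installing CUDA libraries alone will not help; the NVIDIA driver has to be installed and loaded first.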

So, how can I handle this, please?

Thank you!
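
(Editorial note: since the log itself says "Ignore above cudart dlerror if you do not have a GPU set up", one common workaround on a machine without a working NVIDIA driver is to run TensorFlow on the CPU only and silence the CUDA warnings. A minimal sketch, using TensorFlow's standard environment variables:)

```python
import os

# Hide all CUDA devices so TensorFlow falls back to CPU and does not
# try to load libcudart/libcuda at import time.
# Must be set BEFORE `tensorflow` (or `deepface`) is imported.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# Optionally reduce TensorFlow's C++ log noise:
# 0 = all, 1 = hide INFO, 2 = hide INFO+WARNING, 3 = errors only.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"
```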

YeeHoran commented 1 year ago

Then I followed a guide from the Internet and tested whether the GPU is available with the following code:

```python
import tensorflow as tf
a = tf.config.list_physical_devices('GPU')
b = tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None)
```

Then the console panel outputs: `a=[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')], b=True`

So I think CUDA is now available.
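
(Editorial note: `tf.test.is_gpu_available` is deprecated in TensorFlow 2.x; the recommended check is `tf.config.list_physical_devices`, which the snippet above already uses. A minimal sketch of the device-based check:)

```python
import tensorflow as tf

# list_physical_devices returns a (possibly empty) list of PhysicalDevice
# objects; a non-empty list means TensorFlow can see a GPU.
gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)
print("GPU available:", len(gpus) > 0)
```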