yasenh / libtorch-yolov5

A LibTorch inference implementation of YOLOv5
MIT License
372 stars · 114 forks

Error in cmake building #33

Closed: naserpiltan closed this issue 3 years ago

naserpiltan commented 3 years ago

Hi @yasenh, I installed all the dependencies and did the setup as described in the repo. But when I tried to build with CMake (`cmake .. && make`) I got this error: (screenshot: Screenshot from 2020-12-29 05-30-36)

Can you please tell me what the problem is? Thank you.

yasenh commented 3 years ago

Hi @naserpiltan, there are two versions of LibTorch (pre-cxx11 ABI and cxx11 ABI), and I am using the cxx11 ABI one. Also make sure your version is >= 1.6.0.

naserpiltan commented 3 years ago

@yasenh Thank you for your answer. I use the libtorch-cxx11-abi-shared-with-deps-1.7.1.zip package. I dug into the error some more and think I found its source. You can see it in this image: (screenshot: Screenshot from 2020-12-29 17-24-00)

I changed my default gcc compiler to gcc-8, but it didn't help. Do you have any suggestions?

yasenh commented 3 years ago

@naserpiltan what's your cmake version?

Actually, you can try commenting out: https://github.com/yasenh/libtorch-yolov5/blob/master/include/detector.h#L8-L9

naserpiltan commented 3 years ago

@yasenh My cmake version is 3.10.2.

naserpiltan commented 3 years ago

@yasenh I added these two lines to my CMake file and the previous error is gone: `set(CMAKE_CXX_STANDARD 14)` `set(CMAKE_CXX_STANDARD_REQUIRED ON)`
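For reference, a minimal sketch of where those two lines sit in a CMakeLists.txt (everything except the two `set()` calls is illustrative, not from the thread):

```cmake
cmake_minimum_required(VERSION 3.12)
project(libtorch_yolov5_demo)   # illustrative project name

# Recent LibTorch releases require at least C++14; without this,
# older CMake defaults can produce the standard-related build error above.
set(CMAKE_CXX_STANDARD 14)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

# Locates TorchConfig.cmake via -DTorch_DIR or CMAKE_PREFIX_PATH.
find_package(Torch REQUIRED)
```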

But now I have another problem: (screenshot: Screenshot from 2020-12-29 17-59-33)

yasenh commented 3 years ago

@naserpiltan Did you set up the environment variables after installing CUDA? https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#post-installation-actions

naserpiltan commented 3 years ago

@yasenh Yes, I did all of that when installing CUDA on my OS. I added these two lines to my .bashrc:

export PATH=/usr/local/cuda-10.2/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

Then ran: source ~/.bashrc

I don't know why, but I don't have libcublas.so in /usr/local/cuda-10.2/lib64. I think this is what causes the error.

yasenh commented 3 years ago

Hi @naserpiltan, you can search for libcublas with: $ sudo find /usr -name "libcublas*" I noticed that it is installed under /usr/lib/x86_64-linux-gnu/ on my computer.

FYI: https://forums.developer.nvidia.com/t/cuda-blas-libraries-not-installed/107908/12

zhiqwang commented 3 years ago

> @yasenh My cmake version is 3.10.2

Update to cmake 3.14+?

yasenh commented 3 years ago

@zhiqwang I have 3.5.1 locally and it works well :)

zhiqwang commented 3 years ago

We can also use the libtorch from the installed pytorch, like this:

export TORCH_PATH=$(dirname $(python -c "import torch; print(torch.__file__)"))
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$TORCH_PATH/lib/

mkdir build && cd build
cmake .. -DTorch_DIR=$TORCH_PATH/share/cmake/Torch

naserpiltan commented 3 years ago

@yasenh @zhiqwang Thank you both for your solutions. I updated my cmake version to 3.12.4 and built it successfully. Now I get correct output from this code.

naserpiltan commented 3 years ago

@yasenh For a real application, I would like to use this code in an app I made in Qt. I linked all the include and lib files in my Qt app and there aren't any errors. But when I run my app, it crashes when it tries to load the model in the Detector class, in this part: (screenshot: Screenshot from 2020-12-30 06-01-56)

It crashes without throwing any exception, so I can't track down the actual source of the error. Do you have any idea about that?

yasenh commented 3 years ago

@naserpiltan Does it crash every time?

naserpiltan commented 3 years ago

@yasenh Thank you so much for your answers. I removed cxxopts.hpp from my Qt project and it works well for the CPU configuration. For the GPU configuration I exported yolov5s.pt and made yolov5s.torchscript.pt. It works very well in your project with cmake and CUDA, but when I run it with CUDA in my own Qt project, it gives: "Error loading the model!". I think it cannot connect to the CUDA toolkit, because when I run `cout << torch::cuda::is_available() << endl;` it returns 0.

naserpiltan commented 3 years ago

@yasenh I added this single line to your code: `cout << e.what() << endl;`. Now it looks like this:

(screenshot: Screenshot from 2020-12-30 18-58-08)

In this case, it gives me this output: (screenshot: Screenshot from 2020-12-30 18-59-01)

Do you know what's happening?

yasenh commented 3 years ago

@naserpiltan Still looks like a CUDA issue; make sure `torch::cuda::is_available()` returns true.

naserpiltan commented 3 years ago

@yasenh This command returns 0: `std::cout << torch::cuda::is_available() << std::endl;` (note: the call must not be in quotes, or it just prints the literal string).

yasenh commented 3 years ago

@naserpiltan You have to make sure CUDA itself works for the GPU version; maybe try reinstalling CUDA.