Open YJHMITWEB opened 1 year ago
You can try exporting the .so path into LD_LIBRARY_PATH.
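For example, assuming the missing libraries live under /opt/intel/mkl/lib/intel64 (a common MKL install location; adjust to wherever the .so files actually are on your system), a minimal sketch:

```shell
# Hypothetical library location; replace with the directory that actually
# contains the missing .so files on your machine.
export LD_LIBRARY_PATH=/opt/intel/mkl/lib/intel64:$LD_LIBRARY_PATH
```

Note that LD_LIBRARY_PATH affects runtime loading; for link-time "cannot find -l..." errors you may also need LIBRARY_PATH or an -L flag pointing at the same directory.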
For int8_gemm_test, it requires the PyTorch path here: https://github.com/NVIDIA/FasterTransformer/blob/main/tests/int8_gemm/CMakeLists.txt#L24
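If CMake cannot locate your PyTorch installation, one way to find the path that the CMakeLists expects (this assumes torch is installed in the currently active Python environment; it is a sketch, not something the repo prescribes) is:

```shell
# Print where the active interpreter's torch package lives, so the path can
# be supplied to the CMake configuration.
python -c "import torch, os; print(os.path.dirname(torch.__file__))"
```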
Hi, have you fixed this error?

Linking CXX executable ../../bin/int8_gemm_test
/usr/bin/ld: cannot find -lmkl_intel_ilp64
/usr/bin/ld: cannot find -lmkl_core
/usr/bin/ld: cannot find -lmkl_intel_thread
Branch/Tag/Commit
main
Docker Image Version
none
GPU name
A100
CUDA Driver
525.60.13
Reproduced Steps
cmake -DSM=80 -DCMAKE_BUILD_TYPE=Release -DBUILD_PYT=ON ..
Here the "Found Python" output (/usr/bin/python3.9) shows that CMake does not use the Python in my conda environment. Then: make -j
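To make CMake pick the conda interpreter instead of /usr/bin/python3.9, one option is to pass it explicitly when configuring. This is a sketch assuming the conda environment is already activated; -DPYTHON_EXECUTABLE is a common hint variable, though which variable CMake honors can vary by CMake version and project:

```shell
# Re-run cmake, pointing it at the currently active (conda) Python.
cmake -DSM=80 -DCMAKE_BUILD_TYPE=Release -DBUILD_PYT=ON \
      -DPYTHON_EXECUTABLE=$(which python) ..
```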
Errors
I am wondering why

/home/FasterTransformer/src/fastertransformer/models/swin_int8/SwinINT8Weight.h:22:10: fatal error: cudnn.h: No such file or directory

happens, as I have exported the cuDNN path. I am also wondering how to solve

[ 67%] Linking CXX executable ../../bin/int8_gemm_test
/usr/bin/ld: cannot find -lmkl_intel_ilp64
/usr/bin/ld: cannot find -lmkl_core
/usr/bin/ld: cannot find -lmkl_intel_thread
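For the cudnn.h and MKL failures, a sketch of the environment setup that typically resolves missing-header and missing-library errors (all paths below are assumptions; substitute your actual cuDNN and MKL install locations):

```shell
# cuDNN headers and libraries (hypothetical default CUDA layout).
export CPATH=/usr/local/cuda/include:$CPATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

# MKL libraries for the -lmkl_* link errors; LIBRARY_PATH is what ld
# consults at link time (hypothetical oneAPI install location).
export LIBRARY_PATH=/opt/intel/oneapi/mkl/latest/lib/intel64:$LIBRARY_PATH
export LD_LIBRARY_PATH=/opt/intel/oneapi/mkl/latest/lib/intel64:$LD_LIBRARY_PATH
```

If MKL was installed via Intel oneAPI, sourcing its setvars.sh script sets these variables for you.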
Thanks!