Karobben opened this issue 1 year ago
Hey @Karobben
Error message:

```
CUDA error 100 at /var/tmp/pip-install-2walnqmv/llama-cpp-python_0209ea07c6904f1285137fa1276bae3d/vendor/llama.cpp/ggml-cuda.cu:5066: no CUDA-capable device is detected
/arrow/cpp/src/arrow/filesystem/s3fs.cc:2829: arrow::fs::FinalizeS3 was not called even though S3 was initialized. This could lead to a segmentation fault at exit
```

Solution — try to resolve the CUDA error by installing a matching CUDA toolkit and PyTorch build:

```
conda install -c pytorch torchvision cudatoolkit=10.1 pytorch
```
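For what it's worth, CUDA error 100 (`cudaErrorNoDevice`) usually means the process cannot see any GPU at all, not that a kernel failed. One common, easy-to-miss cause is a `CUDA_VISIBLE_DEVICES` variable set to an empty string, which hides every device from the runtime. A minimal sketch of that check (the helper name `visible_cuda_devices` is hypothetical, just for illustration):

```python
import os

def visible_cuda_devices(env=os.environ):
    """Return the GPU indices the CUDA runtime would see.

    An unset CUDA_VISIBLE_DEVICES means all devices are visible (None here);
    an empty string hides every device, which is one way to end up with
    'no CUDA-capable device is detected' (cudaErrorNoDevice, error 100).
    """
    val = env.get("CUDA_VISIBLE_DEVICES")
    if val is None:
        return None  # variable unset: all devices visible
    return [d for d in val.split(",") if d.strip()]

# Empty string hides everything; '0,1' exposes the first two GPUs.
print(visible_cuda_devices({"CUDA_VISIBLE_DEVICES": ""}))    # []
print(visible_cuda_devices({"CUDA_VISIBLE_DEVICES": "0,1"}))  # ['0', '1']
```

Checking `echo $CUDA_VISIBLE_DEVICES` and `nvidia-smi` before reinstalling toolkits can save a lot of time.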
Thanks for your quick response. So, do you mean that CUDA 11.8 is not compatible with this model?
Thanks for your tool.
I have a problem when I run the TheBloke/Llama-2-70b-Chat-GGUF model. It loads fine, but after I ask a question it crashes. Is that normal? I have dual 4090s.
The error message is:
The GPU load is: