rookiemann opened 1 week ago
I reverted back to this: pip install --force-reinstall --no-cache-dir llama-cpp-python==0.2.90 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121
It's working again, so it must be something in the update that I'm missing.
I've looked through the issues and I do see complaints about this, with a lot of different fixes that work differently for different users.
Is there a unified, accepted, and tested way to readjust my setup to work with GPUs?
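For what it's worth, the approach documented in the llama-cpp-python README for newer releases (where prebuilt cu121 wheels may no longer be published) is to build the wheel from source with CUDA enabled via CMAKE_ARGS. This is a sketch of that install, assuming nvcc and a CUDA toolkit are on PATH; the exact CMake flag changed from -DLLAMA_CUBLAS=on to -DGGML_CUDA=on in later versions, so the right one depends on which release you install:

```shell
# Build llama-cpp-python from source with CUDA support enabled.
# Requires the CUDA toolkit (nvcc) and a C/C++ compiler to be installed.
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python \
    --force-reinstall --no-cache-dir --upgrade

# Older releases (roughly pre-0.2.80 era) used the cuBLAS flag instead:
# CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python --force-reinstall --no-cache-dir
```

After installing, loading a model with n_gpu_layers=-1 and verbose=True should print CUDA device and layer-offload lines if the GPU build actually took effect.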
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Wed_Feb__8_05:53:42_Coordinated_Universal_Time_2023
Cuda compilation tools, release 12.1, V12.1.66
Build cuda_12.1.r12.1/compiler.32415258_0