LankyPoet opened this issue 3 months ago
Yes, same issue. Using a Python 3.10.6 virtual environment.
Check this reply:
Thank you. Not a bad workaround to get going, but I agree with you: I am really hoping we keep seeing updated CUDA builds. New models come out constantly, so it's important to stay current with llama.cpp versions.
This is how I got it to work; it took me a day to figure out.
https://github.com/abetlen/llama-cpp-python/issues/1352#issuecomment-2189890596
For me, the fix was copying all the files from the CUDA Visual Studio integration folder

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2\extras\visual_studio_integration\MSBuildExtensions

into every BuildCustomizations folder on the drive where Visual Studio is installed. I had such folders under both Program Files and Program Files (x86).
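The copy step above can be sketched with a small Python helper. This is just an illustration, not part of the original workaround: the function name and the way Visual Studio roots are passed in are my own assumptions, and the paths from the thread are the defaults you would adapt to your install.

```python
# Sketch (hypothetical helper): copy every file from the CUDA
# MSBuildExtensions folder into each BuildCustomizations folder
# found under the given Visual Studio install roots.
import shutil
from pathlib import Path

def copy_cuda_msbuild_files(src_dir, vs_roots):
    """Copy all files in src_dir into every BuildCustomizations
    directory found (recursively) under each root in vs_roots.
    Returns the list of destination paths written."""
    src = Path(src_dir)
    copied = []
    for root in vs_roots:
        for dest in Path(root).rglob("BuildCustomizations"):
            for f in src.iterdir():
                if f.is_file():
                    shutil.copy2(f, dest / f.name)
                    copied.append(dest / f.name)
    return copied

# Example invocation with the paths from this thread (run from an
# elevated prompt, since Program Files is not writable otherwise):
# copy_cuda_msbuild_files(
#     r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2"
#     r"\extras\visual_studio_integration\MSBuildExtensions",
#     [r"C:\Program Files", r"C:\Program Files (x86)"],
# )
```

Note that writing into Program Files requires an administrator shell; copying the files manually in Explorer (as described above) achieves the same thing.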
Note that the build process can take around 20 minutes.
There's also a table in the docs showing where the files should go (https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html#sample-projects) and how to make them visible from within Visual Studio (https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html#build-customizations-for-existing-projects).
Hi, I am running Windows 11, Python 3.11.9, and ComfyUI in a venv. I tried installing the latest llama-cpp-python for CUDA 12.4 in the manner below and received a string of errors. Can anyone assist, please?