Closed samos123 closed 11 months ago
Hitting an issue when trying out a multi-stage build with venv:
CUDA error 304 at /tmp/pip-install-2buew0g7/llama-cpp-python_d94ee4c9feba4392a5a6259b67b5556f/vendor/llama.cpp/ggml-cuda.cu:5056: OS call failed or operation not supported on this OS
Seems I'm hitting a bug in llama-cpp-python itself; continuing to troubleshoot, and also filed an issue here: https://github.com/abetlen/llama-cpp-python/issues/645