Closed: jllllll closed 1 year ago
Your fork requires a minimum compute capability of 6.0. It will install from the main CUDA repo if the fork isn't supported.
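The fallback described above can be sketched roughly like this. This is a minimal illustration, not the actual installer logic from the PR; the index URLs, the helper name `pick_wheel_index`, and the exact threshold handling are all assumptions for the example.

```python
# Hypothetical wheel index URLs -- placeholders, not the real repos.
FORK_INDEX = "https://example.com/fork-wheels"
MAIN_CUDA_INDEX = "https://example.com/cuda-wheels"

# The fork's wheels assume GPUs with compute capability >= 6.0.
MIN_FORK_COMPUTE = 6.0

def pick_wheel_index(compute_capability: float) -> str:
    """Prefer the fork's wheels when the GPU is new enough,
    otherwise fall back to the main CUDA repo."""
    if compute_capability >= MIN_FORK_COMPUTE:
        return FORK_INDEX
    return MAIN_CUDA_INDEX

print(pick_wheel_index(8.6))  # e.g. RTX 30xx -> fork index
print(pick_wheel_index(5.2))  # e.g. Maxwell -> main CUDA repo
```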
I wasn't aware of that, well done.
I have merged the new requirements.txt into main and will now merge this update as well.
Also removed the cuBLAS llama-cpp-python installation in preparation for https://github.com/oobabooga/text-generation-webui/commit/4b19b74e6c8d9c99634e16774d3ebcb618ba7a18