Compare the performance of different LLMs that can be deployed locally on consumer hardware. Run them yourself with the Colab WebUI.
CUDA driver version is insufficient for CUDA runtime version #12
Closed by LOKSTED 9 months ago
This only happens with WizardLM-1.0-Uncensored-Llama2-13B on Google Colab. Thank you for taking a look.
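The "CUDA driver version is insufficient for CUDA runtime version" error generally means the CUDA runtime bundled with the installed packages is newer than what the driver on the assigned Colab VM supports. A minimal diagnostic sketch to compare the two versions inside the notebook (assuming a PyTorch-based runtime with `nvidia-smi` available, which is not confirmed by the report above):

```python
import subprocess

import torch

# CUDA runtime version the installed PyTorch wheel was built against.
print("torch CUDA runtime:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())

# Maximum CUDA version supported by the driver on this VM,
# taken from the nvidia-smi header line.
smi = subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout
driver_line = next(line for line in smi.splitlines() if "CUDA Version" in line)
print(driver_line.strip())
```

If the driver's reported CUDA version is lower than the runtime version, pinning the affected model's dependencies to an older CUDA build (or switching the Colab runtime to a GPU with a newer driver) is the usual workaround.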