Troyanovsky / Local-LLM-Comparison-Colab-UI
Compare the performance of different LLMs that can be deployed locally on consumer hardware. Run it yourself with the Colab WebUI.
979 stars · 142 forks
Issues (newest first)
#12 CUDA driver version is insufficient for CUDA runtime version · LOKSTED · closed 9 months ago · 1 comment
#11 Integrate with LiteLLM - Evaluate 100+ LLMs, 92% faster · ishaan-jaff · opened 12 months ago · 0 comments
#10 ImportError: libcudart.so.12 · niknoproblems · opened 1 year ago · 2 comments
#9 llava 13b · ghost · opened 1 year ago · 0 comments
#8 Error executing "wizard-vicuna-13B.ggml.q4_0 (using llama.cpp)" on Colab · elBlacksmith · closed 1 year ago · 1 comment
#7 What if the next LLM's know your questions? · Trundicho · opened 1 year ago · 0 comments
#6 Which runs on least powerful hardware... · epugh · opened 1 year ago · 1 comment
#5 Fix llama-cpp-version to 0.1.78 to support GGML3 · Ronbalt · closed 1 year ago · 0 comments
#4 Update safetensors links for Coder models · klipski · closed 1 year ago · 0 comments
#3 make coding scores based on unit tests · rmminusrslash · opened 1 year ago · 0 comments
#2 Legal · laurentopia · opened 1 year ago · 0 comments
#1 Feature: add new model - Nous-Hermes-13b · trompx · closed 1 year ago · 1 comment
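
Context for issue #5: llama-cpp-python releases after 0.1.78 switched to the GGUF format and no longer load GGML v3 model files, which is presumably why the version pin was needed. A minimal sketch of such a pin as a Colab cell (the exact cell used in this repo's notebooks is an assumption, not shown here):

!pip install llama-cpp-python==0.1.78  # assumed pin per issue #5: last release that still loads GGML v3 files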