tonyctalope / gpu_poor
Calculate token/s & GPU memory requirement for any LLM. Supports llama.cpp/ggml/bnb/QLoRA quantization
https://rahulschand.github.io/gpu_poor/
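The tool above estimates GPU memory requirements for LLMs under various quantization schemes. As a rough illustration of the idea (this is a simplified sketch, not gpu_poor's actual formula), the weights-only footprint can be approximated from parameter count and bits per parameter:

```python
def estimate_weight_memory_gb(n_params_billion: float, bits_per_param: float,
                              overhead_factor: float = 1.1) -> float:
    """Weights-only GPU memory estimate in GiB; KV cache and
    activations are extra. overhead_factor is an assumed fudge
    factor for runtime bookkeeping, not a value from gpu_poor."""
    bytes_total = n_params_billion * 1e9 * bits_per_param / 8
    return bytes_total * overhead_factor / 1024**3

# e.g. a 7B model at 4-bit quantization (as with QLoRA or ggml Q4)
print(round(estimate_weight_memory_gb(7, 4), 1))  # ~3.6 GiB for weights alone
```

Real estimators like gpu_poor also account for KV cache, activations, and framework overhead, which is why measured usage is higher than this lower bound.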
0 stars · 1 fork
Issues
Up to date GPT_POOR
#1 · opened by tonyctalope 1 month ago · 5 comments