janhq / models

Models support in Jan and Cortex

epic: Automated Testing for Built-in Models #56

Closed · Van-QA closed this issue 1 week ago

Van-QA commented 3 months ago

Resources

Original Post

Problem
Currently, the end-to-end functionality of the various models in the Hugging Face Cortex Hub has to be tested manually. This process is time-consuming and prone to human error, leading to inconsistent test results.

Success Criteria
I want an automated end-to-end testing framework set up for the most common models in the Hugging Face Cortex Hub (see the model list in the comment below).

The tests should be executed either on weekends (scheduled runs) or whenever there is a new release of LlamaCPP.

The results should be easily accessible and provide clear feedback on each model's performance and functionality.
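As a rough illustration of the release trigger, a scheduled job could compare the latest llama.cpp release tag (from the public GitHub releases API) against the last tag that was tested, and only launch the full model suite when they differ. This is only a sketch, not part of any existing pipeline; the state-file path is hypothetical.

```python
import json
import pathlib
import urllib.request

# Hypothetical state file recording the last llama.cpp tag we tested against.
STATE_FILE = pathlib.Path("last_tested_llamacpp_tag.txt")
RELEASES_URL = "https://api.github.com/repos/ggerganov/llama.cpp/releases/latest"

def latest_llamacpp_tag() -> str:
    """Return the tag name of the newest llama.cpp release."""
    with urllib.request.urlopen(RELEASES_URL) as resp:
        return json.load(resp)["tag_name"]

def should_run_tests() -> bool:
    """True when llama.cpp has published a release we have not tested yet."""
    latest = latest_llamacpp_tag()
    previous = STATE_FILE.read_text().strip() if STATE_FILE.exists() else ""
    if latest != previous:
        STATE_FILE.write_text(latest)
        return True
    return False

if __name__ == "__main__":
    # A weekend cron job could call this and skip the expensive model runs
    # when nothing has changed since the last test.
    print("run tests" if should_run_tests() else "skip")
```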

Additional Context
Automating the testing process will not only save time but also ensure that any changes or updates to the models do not break existing functionality. It would be beneficial to integrate this testing with CI/CD pipelines to ensure that any new model versions are automatically tested before deployment.
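For the CI/CD integration, one possible shape for the per-model check is a small smoke test against the local server's OpenAI-compatible /v1/chat/completions endpoint. The base URL, port, model names, and use of pytest below are assumptions for illustration, not the actual test suite.

```python
import pytest
import requests

# Assumed local Cortex endpoint; adjust host/port to the real server config.
BASE_URL = "http://127.0.0.1:39281/v1"

# Representative default variants to smoke-test (subset of the full catalog).
MODELS = ["llama3.1:8b-gguf", "tinyllama:1b-gguf", "mistral:7b-gguf"]

@pytest.mark.parametrize("model", MODELS)
def test_chat_completion_smoke(model):
    """Each model should return a non-empty assistant message."""
    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        json={
            "model": model,
            "messages": [{"role": "user", "content": "Say hello in one word."}],
            "max_tokens": 16,
        },
        timeout=300,
    )
    assert resp.status_code == 200
    content = resp.json()["choices"][0]["message"]["content"]
    assert content.strip()
```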

hiento09 commented 4 weeks ago

List of current models and quantizations

llama3.2:3b-gguf-q8-0
llama3.2:3b-gguf-q6-k
llama3.2:3b-gguf-q5-km
llama3.2:3b-gguf-q5-ks
llama3.2:3b-gguf-q4-km
llama3.2:3b-gguf-q4-ks
llama3.2:3b-gguf-q3-kl
llama3.2:3b-gguf-q3-km
llama3.2:3b-gguf-q3-ks
llama3.2:3b-gguf-q2-k
llama3.1:gguf
llama3.1:8b-gguf
llama3.1:8b-gguf-q8-0
llama3.1:8b-gguf-q6-k
llama3.1:8b-gguf-q5-km
llama3.1:8b-gguf-q5-ks
llama3.1:8b-gguf-q4-km
llama3.1:8b-gguf-q4-ks
llama3.1:8b-gguf-q3-kl
llama3.1:8b-gguf-q3-km
llama3.1:8b-gguf-q3-ks
llama3.1:8b-gguf-q2-k
llama3.1:8b-onnx
llama3.1:onnx
tinyllama:gguf
tinyllama:1b-gguf
tinyllama:1b-gguf-q8-0
tinyllama:1b-gguf-q6-k
tinyllama:1b-gguf-q5-km
tinyllama:1b-gguf-q5-ks
tinyllama:1b-gguf-q4-km
tinyllama:1b-gguf-q4-ks
tinyllama:1b-gguf-q3-kl
tinyllama:1b-gguf-q3-km
tinyllama:1b-gguf-q3-ks
tinyllama:1b-gguf-q2-k
llama3:8b-gguf-q8-0
llama3:8b-gguf-q6-k
llama3:8b-gguf-q5-km
llama3:8b-gguf-q5-ks
llama3:8b-gguf-q4-km
llama3:8b-gguf-q4-ks
llama3:8b-gguf-q3-kl
llama3:8b-gguf-q3-km
llama3:8b-gguf-q3-ks
llama3:8b-gguf-q2-k
llama3:gguf
llama3:8b-gguf
llama3:onnx
llama3:tensorrt-llm-linux-ampere
llama3:tensorrt-llm-linux-ada
llama3:8b-tensorrt-llm-linux-ampere
llama3:8b-tensorrt-llm-linux-ada
llama3:tensorrt-llm-windows-ampere
llama3:tensorrt-llm-windows-ada
llama3:8b-tensorrt-llm-windows-ampere
llama3:8b-tensorrt-llm-windows-ada
phi3:mini-gguf
phi3:medium
phi3:mini-gguf-q8-0
phi3:mini-gguf-q6-k
phi3:mini-gguf-q5-km
phi3:medium-gguf
phi3:mini-gguf-q5-ks
phi3:mini-gguf-q4-km
phi3:mini-gguf-q4-ks
phi3:mini-gguf-q3-kl
phi3:mini-gguf-q3-km
phi3:mini-gguf-q3-ks
phi3:mini-gguf-q2-k
phi3:gguf
phi3:medium-onnx
phi3:mini-onnx
phi3:onnx
gemma2:gguf
gemma2:2b-gguf
gemma2:2b-onnx
gemma2:onnx
gemma:gguf
gemma:7b-gguf
gemma:onnx
gemma:7b-onnx
mistral:small-gguf-q8-0
mistral:small-gguf-q6-k
mistral:small-gguf-q5-km
mistral:small-gguf-q5-ks
mistral:small-gguf-q4-km
mistral:small-gguf-q4-ks
mistral:small-gguf-q3-kl
mistral:small-gguf-q3-km
mistral:small-gguf-q3-ks
mistral:small-gguf-q2-k
mistral:7b-v0.3-gguf-q8-0
mistral:7b-v0.3-gguf-q6-k
mistral:7b-v0.3-gguf-q5-km
mistral:7b-v0.3-gguf-q5-ks
mistral:7b-v0.3-gguf-q4-km
mistral:7b-v0.3-gguf-q4-ks
mistral:7b-v0.3-gguf-q3-kl
mistral:7b-v0.3-gguf-q3-km
mistral:7b-v0.3-gguf-q3-ks
mistral:7b-v0.3-gguf-q2-k
mistral:gguf
mistral:7b-gguf
mistral:7b-tensorrt-llm-linux-ada
mistral:tensorrt-llm-linux-ada
mistral:7b-tensorrt-llm-linux-ampere
mistral:tensorrt-llm-linux-ampere
mistral:7b-tensorrt-llm-windows-ada
mistral:7b-tensorrt-llm-windows-ampere
mistral:tensorrt-llm-windows-ampere
mistral:tensorrt-llm-windows-ada
mistral:onnx
mistral:7b-onnx
mistral-nemo:12b-gguf-q8-0
mistral-nemo:12b-gguf-q6-k
mistral-nemo:12b-gguf-q5-km
mistral-nemo:12b-gguf-q5-ks
mistral-nemo:12b-gguf-q4-km
mistral-nemo:12b-gguf-q4-ks
mistral-nemo:12b-gguf-q3-kl
mistral-nemo:12b-gguf-q3-km
mistral-nemo:12b-gguf-q3-ks
mistral-nemo:12b-gguf-q2-k
qwen2:gguf
qwen2:7b-gguf
codestral:gguf
codestral:22b-gguf
openhermes-2.5:gguf
openhermes-2.5:7b-gguf
openhermes-2.5:7b-tensorrt-llm-linux-ada
openhermes-2.5:tensorrt-llm-linux-ada
openhermes-2.5:tensorrt-llm-linux-ampere
openhermes-2.5:7b-tensorrt-llm-linux-ampere
openhermes-2.5:tensorrt-llm-windows-ampere
openhermes-2.5:tensorrt-llm-windows-ada
openhermes-2.5:7b-tensorrt-llm-windows-ampere
openhermes-2.5:7b-tensorrt-llm-windows-ada
openhermes-2.5:onnx
openhermes-2.5:7b-onnx
aya:gguf
aya:12.9b-gguf
yi-1.5:gguf
yi-1.5:34B-gguf
mixtral:gguf
mixtral:7x8b-gguf
command-r:gguf
command-r:35b-gguf
hiento09 commented 4 weeks ago

I will write tests for the default :gguf quantization only, since we currently don't have enough resources to test all 148 model quantizations. cc @dan-homebrew @gabrielle-ong @0xSage @vansangpfiev @nguyenhoangthuan99 @namchuai
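For reference, a hypothetical sketch of narrowing the catalog above to just the default :gguf tags (entries with no explicit quantization suffix), assuming the list is stored one model:tag per line in a plain text file; the file name and helper are illustrative.

```python
from pathlib import Path

def default_gguf_models(catalog: list[str]) -> list[str]:
    """Keep only entries whose tag is exactly 'gguf' (the default quantization)."""
    return [m for m in catalog if m.split(":", 1)[-1] == "gguf"]

# Hypothetical catalog file holding the model list from the comment above.
catalog = Path("models.txt").read_text().splitlines()
for model in default_gguf_models(catalog):
    print(model)  # e.g. llama3.1:gguf, tinyllama:gguf, mistral:gguf, ...
```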