instructlab / instructlab

InstructLab Command-Line Interface. Use this to chat with a model and execute the InstructLab workflow to train a model using custom taxonomy data.
https://instructlab.ai
Apache License 2.0

Evaluate expects safetensors but is passed GGUF by default #1792

Open nathan-weinberg opened 1 month ago

nathan-weinberg commented 1 month ago

Describe the bug
Evaluation currently only supports safetensors models for the MMLU and MMLUBranch benchmarks, but the default configuration passes a GGUF file, and there is no error handling for the mismatch.
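To illustrate the missing error handling, here is a minimal pre-flight check one could imagine adding before evaluation starts. This is a hypothetical sketch, not instructlab's actual code; the function names `looks_like_gguf` and `check_mmlu_model` are mine. It relies only on the documented fact that GGUF files begin with the 4-byte magic `GGUF`.

```python
# Hypothetical pre-flight check sketch; not the actual instructlab code.
from pathlib import Path

GGUF_MAGIC = b"GGUF"  # GGUF files start with this 4-byte magic number


def looks_like_gguf(model_path: str) -> bool:
    """Return True if model_path is a file beginning with the GGUF magic."""
    path = Path(model_path)
    if not path.is_file():
        # safetensors models are typically directories of shards
        return False
    with path.open("rb") as f:
        return f.read(4) == GGUF_MAGIC


def check_mmlu_model(model_path: str) -> None:
    """Fail fast with a clear message instead of an opaque crash later."""
    if looks_like_gguf(model_path):
        raise ValueError(
            f"{model_path} is a GGUF file, but the mmlu benchmark "
            "currently requires a safetensors model"
        )
```

A check like this would turn the silent failure described above into an actionable error at command start.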

To Reproduce
Steps to reproduce the behavior:

  1. Run `ilab config init` with no changes to the defaults
  2. Run `ilab model evaluate --benchmark mmlu`

Expected behavior
The command should run successfully.

Device Info (please complete the following information):

sys.version: 3.11.7 (main, May 16 2024, 00:00:00) [GCC 11.4.1 20231218 (Red Hat 11.4.1-3)]
sys.platform: linux
os.name: posix
platform.release: 5.14.0-427.26.1.el9_4.x86_64
platform.machine: x86_64
os-release.ID: rhel
os-release.VERSION_ID: 9.4
os-release.PRETTY_NAME: Red Hat Enterprise Linux 9.4 (Plow)
instructlab.version: 0.18.0a3.dev10
instructlab-dolomite.version: 0.1.1
instructlab-eval.version: 0.1.0
instructlab-quantize.version: 0.1.0
instructlab-schema.version: 0.2.0
instructlab-sdg.version: 0.1.2
instructlab-training.version: 0.2.0
torch.version: 2.3.1+cu121
torch.backends.cpu.capability: AVX2
torch.version.cuda: 12.1
torch.version.hip: None
torch.cuda.available: True
torch.backends.cuda.is_built: True
torch.backends.mps.is_built: False
torch.backends.mps.is_available: False
torch.cuda.bf16: True
torch.cuda.current: 0
torch.cuda.0.name: NVIDIA A10G
torch.cuda.0.free: 21.7
torch.cuda.0.total: 22.0
torch.cuda.0.capability: 8.6
llama_cpp_python.version: 0.2.79
llama_cpp_python.supports_gpu_offload: False
nathan-weinberg commented 1 month ago

According to @alimaredia, this will be resolved by https://github.com/instructlab/eval/issues/50, which should add GGUF support for all benchmarks.

nathan-weinberg commented 1 month ago

Additionally, I've noticed that the default model setting is a path to a GGUF file, while evaluation seems to expect either a local model directory or a Hugging Face repo.
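The mismatch above could be caught up front by classifying the model argument before evaluation runs. The following is a hypothetical sketch (the function name `resolve_model_source` and the loose repo-id regex are my own, not instructlab's API): accept a local model directory or a Hugging Face repo id, and reject a single model file such as a `.gguf`.

```python
# Hypothetical validation sketch; names and pattern are assumptions.
import re
from pathlib import Path

# Loose pattern for a Hugging Face repo id of the form "org/model-name"
HF_REPO_RE = re.compile(r"^[\w.-]+/[\w.-]+$")


def resolve_model_source(model: str) -> str:
    """Classify the model argument the way MMLU evaluation expects it."""
    path = Path(model)
    if path.is_dir():
        return "local directory"
    if path.is_file():
        # A lone file (e.g. a .gguf) is not a valid source here
        raise ValueError(
            f"{model} is a single file; MMLU evaluation expects a "
            "model directory or a Hugging Face repo id"
        )
    if HF_REPO_RE.match(model):
        return "huggingface repo"
    raise ValueError(f"{model!r} is neither a local directory nor a repo id")
```

With a check like this, the default GGUF path would be rejected with a clear message rather than failing deep inside evaluation.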

danmcp commented 2 weeks ago

This should be fixed by https://github.com/instructlab/instructlab/pull/2065, which enables external model serving for MMLU.