instructlab / instructlab

InstructLab Command-Line Interface. Use this to chat with a model and execute the InstructLab workflow to train a model using custom taxonomy data.
https://instructlab.ai
Apache License 2.0

ilab cli 0.17.0 does not work with granite model family #1372

Closed: gshipley closed this issue 3 months ago

gshipley commented 3 months ago

Describe the bug

With ilab v0.17.0, specifying --model-family granite throws an error. I was under the impression that the model family grouping was added to auto-map granite to merlinite.

To Reproduce

ilab model serve --model-family granite --model-path models/granite-7b-lab-Q4_K_M.gguf
INFO 2024-06-15 14:25:31,252 serve.py:51: serve Using model 'models/granite-7b-lab-Q4_K_M.gguf' with -1 gpu-layers and 4096 max context size.
Traceback (most recent call last):
  File "/Users/gshipley/demo/venv/bin/ilab", line 8, in <module>
    sys.exit(ilab())
  File "/Users/gshipley/demo/venv/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/Users/gshipley/demo/venv/lib/python3.11/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/Users/gshipley/demo/venv/lib/python3.11/site-packages/click/core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/Users/gshipley/demo/venv/lib/python3.11/site-packages/click/core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/Users/gshipley/demo/venv/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Users/gshipley/demo/venv/lib/python3.11/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/Users/gshipley/demo/venv/lib/python3.11/site-packages/click/decorators.py", line 33, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/Users/gshipley/demo/venv/lib/python3.11/site-packages/instructlab/model/serve.py", line 58, in serve
    server(
  File "/Users/gshipley/demo/venv/lib/python3.11/site-packages/instructlab/server.py", line 196, in server
    if template_dict["model"] == get_model_family(model_family, model_path):
  File "/Users/gshipley/demo/venv/lib/python3.11/site-packages/instructlab/configuration.py", line 219, in get_model_family
    raise ConfigException("Unknown model family: %s" % forced)
instructlab.configuration.ConfigException: Unknown model family: granite

Expected behavior

--model-family granite should map to merlinite.
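For context, the traceback shows the failure originating in get_model_family() in instructlab/configuration.py, which rejects any family name it does not recognize. Below is a minimal sketch of the kind of alias resolution the reporter expects. The function name, signature, and exception come from the traceback above; the family set, the MODEL_FAMILY_MAPPINGS table, and the filename-guessing fallback are assumptions for illustration, not the actual instructlab code.

import os
import re

# Families with a registered chat template (contents assumed for this sketch).
MODEL_FAMILIES = {"merlinite", "mixtral"}

# Aliases resolved before validation; mapping granite to merlinite is the
# behavior the reporter expects (table name is an assumption).
MODEL_FAMILY_MAPPINGS = {"granite": "merlinite"}

class ConfigException(Exception):
    """Stand-in for instructlab's configuration exception."""

def get_model_family(forced, model_path):
    """Resolve the chat-template family, honoring aliases like granite."""
    # Use the forced family if given, otherwise guess from the file name,
    # e.g. "granite-7b-lab-Q4_K_M.gguf" -> "granite".
    family = forced or re.split(r"[-_.]", os.path.basename(model_path))[0]
    family = MODEL_FAMILY_MAPPINGS.get(family.lower(), family.lower())
    if family not in MODEL_FAMILIES:
        raise ConfigException("Unknown model family: %s" % family)
    return family

# With alias resolution in place, the reported command would resolve cleanly:
print(get_model_family("granite", "models/granite-7b-lab-Q4_K_M.gguf"))  # merlinite

The point of the sketch is simply that aliases need to be resolved before the membership check; the 0.17.0 code path evidently validated the forced name directly, so granite never reached the template lookup.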


Device Info:
instructlab.version: 0.17.0
sys.version: 3.11.9 (main, Apr 2 2024, 08:25:04) [Clang 15.0.0 (clang-1500.3.9.4)]
sys.platform: darwin
os.name: posix
platform.release: 23.5.0
platform.machine: arm64
torch.version: 2.3.1
torch.backends.cpu.capability: NO AVX
torch.version.cuda: None
torch.version.hip: None
torch.cuda.available: False
torch.backends.cuda.is_built: False
torch.backends.mps.is_built: True
torch.backends.mps.is_available: True
llama_cpp_python.version: 0.2.75
llama_cpp_python.supports_gpu_offload: True

lhawthorn commented 3 months ago

Assigning @russellb per his note on Slack that he'd look into it, so this does not get lost in the weeds. :D

russellb commented 3 months ago

ilab model serve --model-family granite --model-path models/granite-7b-lab-Q4_K_M.gguf

Thanks for the ping! Fix on the way in a minute.