ParisNeo / lollms-webui

Lord of Large Language Models Web User Interface
https://parisneo.github.io/lollms-webui/
Apache License 2.0

[FEAT] Enable specifying models path via UI, possibly model map #453

Open · kfsone opened this issue 7 months ago

kfsone commented 7 months ago

I'm exploring various LM hosting options (ollama, ooba webui, LM Studio, lollms, koboldcpp, etc.), and for the most part I'm able to have them all share models from a single folder (/opt/ai/models and e:/models), but I end up having to fiddle with symlinks for LM Studio and lollms. For ollama I just run it in a docker container and bind-mount the models folder into it.

It would be great if the huggingface binding, at least, could take a path to the models directory, if not Elf. Even so, broader support would probably call for a reasonably simple "modelmap.yaml"-type file which lets you specify (at least) the user-facing model name and the model file or directory, e.g.:

mistral-7b-instruct:
  presents-as: Mistral-7B-Instruct-v0.1
  folder: /opt/ai/models/TheBloke/Mistral-7B-Instruct-v01-GGUF  # optional
  files:
    - tag: Q5_K_M
      file: mistral-7b-instruct-v0.1.Q5_K_M.gguf

claude2:
  file: /opt/ai/models/TheBloke/claude2-alpaca-13B-GGUF/claude2-alpaca-13b.Q5_K_M.gguf
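
A binding could then resolve a user-facing name through such a map along these lines (just a sketch; modelmap.yaml, resolve_model and the key names are my own invention, not an existing lollms API):

# Rough sketch: resolve a user-facing model name (and optional quant tag)
# through a modelmap.yaml like the one above. Hypothetical, not a lollms API.
from pathlib import Path

import yaml  # PyYAML


def resolve_model(map_path: str, name: str, tag: str | None = None) -> Path:
    """Return the on-disk path for a model name, optionally picking a quant tag."""
    entries = yaml.safe_load(Path(map_path).read_text())
    entry = entries[name]
    if "file" in entry:                    # single-file entry, e.g. claude2 above
        return Path(entry["file"])
    folder = Path(entry.get("folder", "."))
    for item in entry.get("files", []):    # multi-quant entry, e.g. mistral above
        if tag is None or item.get("tag") == tag:
            return folder / item["file"]
    raise KeyError(f"no file for {name!r} with tag {tag!r}")


# e.g. resolve_model("modelmap.yaml", "mistral-7b-instruct", tag="Q5_K_M")
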
kfsone commented 7 months ago

If I try to enable the huggingface binding and then give it a full local path rather than a structured models path, e.g. /opt/ai/models/claude2-alpaca-13b.Q5_K_M.gguf, I get:

ERROR:Lollms-WebUI:Exception on /add_reference_to_local_model [POST]
Traceback (most recent call last):
  File "/home/oliver/miniconda3/envs/lollms/lib/python3.10/site-packages/flask/app.py", line 1455, in wsgi_app
    response = self.full_dispatch_request()
  File "/home/oliver/miniconda3/envs/lollms/lib/python3.10/site-packages/flask/app.py", line 869, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/home/oliver/miniconda3/envs/lollms/lib/python3.10/site-packages/flask_cors/extension.py", line 176, in wrapped_function
    return cors_after_request(app.make_response(f(*args, **kwargs)))
  File "/home/oliver/miniconda3/envs/lollms/lib/python3.10/site-packages/flask/app.py", line 867, in full_dispatch_request
    rv = self.dispatch_request()
  File "/home/oliver/miniconda3/envs/lollms/lib/python3.10/site-packages/flask/app.py", line 852, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/opt/ai/lollms-webui/app.py", line 1452, in add_reference_to_local_model
    self.config.reference_model(path)
  File "/opt/ai/lollms-webui/lollms_core/lollms/main_config.py", line 196, in reference_model
    folder_path = self.searchModelPath(model_name)
  File "/opt/ai/lollms-webui/lollms_core/lollms/main_config.py", line 145, in searchModelPath
    for mn in self.models_folders:
  File "/opt/ai/lollms-webui/lollms_core/lollms/config.py", line 298, in __getattr__
    return self.config[key]
KeyError: 'models_folders'
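
The KeyError comes from config.__getattr__ when the binding's configuration has no models_folders entry at all; a guard along these lines (a sketch only, assuming the config object is dict-like as the traceback suggests, not a tested patch) would at least turn it into a readable error:

# Sketch: fail with an actionable message when models_folders is missing,
# instead of letting __getattr__ raise a bare KeyError.
def search_model_path(config: dict, model_name: str):
    folders = config.get("models_folders")
    if not folders:
        raise ValueError(
            "models_folders is not set for the current binding; "
            f"select a binding or set a models path before referencing {model_name!r}"
        )
    for folder in folders:
        ...  # existing per-folder lookup logic would continue here
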
ParisNeo commented 7 months ago

Hi. In lollms, models are classified by type. You can use a reference file to link to your models without copying them. I can add a function where you give it the path to a folder and it creates references to all the files in it. Would that help?
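
Roughly along these lines (a sketch only; the .reference extension and the plain-text "one path per file" format shown here are just for illustration, not necessarily the final format):

# Sketch: create one reference file per model found under a user-supplied
# folder, so the model files themselves never have to be copied or moved.
from pathlib import Path

MODEL_SUFFIXES = {".gguf", ".bin", ".safetensors"}


def reference_folder(models_dir: str, references_dir: str) -> int:
    """Scan models_dir and write one reference file per model into references_dir."""
    ref_root = Path(references_dir)
    ref_root.mkdir(parents=True, exist_ok=True)
    count = 0
    for model in Path(models_dir).rglob("*"):
        if model.is_file() and model.suffix.lower() in MODEL_SUFFIXES:
            (ref_root / (model.stem + ".reference")).write_text(str(model.resolve()))
            count += 1
    return count


# e.g. reference_folder("/opt/ai/models", "/path/to/lollms/models/gguf")  # target path illustrative

That way the model files can stay on a shared drive and lollms only keeps small reference files in its own data folder.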

kfsone commented 7 months ago

That would be great. By reference file, do you mean a symlink? Will that work on Windows? (I work across Linux, macOS, Windows and WSL.)

GamingDaveUk commented 7 months ago

I'm also testing the app. I have all my models in a folder on a specific drive, and I'd like to point lollms at that same model folder while keeping the rest of its data in its own folder.