Pathos14489 / Pantella


Project expects llama_cpp even if the user has a local LLM solution already (like Ooba) #1

Closed: jonathonbarton closed this issue 5 months ago

jonathonbarton commented 8 months ago
Traceback (most recent call last):
  File "C:\Projects\MantellaPathos\main.py", line 2, in <module>
    import src.conversation_manager as cm
  File "C:\Projects\MantellaPathos\src\conversation_manager.py", line 11, in <module>
    import src.language_model as language_models
  File "C:\Projects\MantellaPathos\src\language_model.py", line 15, in <module>
    module = importlib.import_module(f"src.llms.{module_name}")
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Projects\MantellaPathos\src\llms\llama_cpp_python.py", line 5, in <module>
    from llama_cpp import Llama
ModuleNotFoundError: No module named 'llama_cpp'

Other settings from the config.ini that I see...

; inference_engine - The backend used to run the LLM / connect to the LLM API
; Options: default, openai(default), llama-cpp-python
; openai: Uses the OpenAI API, either using the default openai API or an alternative API base (set in the config below)
inference_engine = default

and

[llama_cpp_python]
; model_path - Only used if inference_engine is set to llama-cpp-python
; The path to the model you want to use with llama-cpp-python when running the model locally and don't want to use the openai API
; If you are using the openai API, leave this value as none
model_path = none

It appears (at first glance) that even when inference_engine is default or openai, llama_cpp is still expected. I did the (somewhat) obvious step of trying to just pip install llama_cpp on my machine (package not found), and I tried the two batch files in the main directory. They just open and close immediately.
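
For context, the traceback suggests src/language_model.py imports every backend module under src/llms at startup, regardless of which inference_engine is configured. A minimal sketch of that pattern; everything beyond the names that appear in the traceback (importlib, src.llms, LLM_Types) is an assumption:

import importlib
import os

LLM_Types = {}

# Eagerly import every backend under src/llms (e.g. openai.py and
# llama_cpp_python.py), skipping dunder files like __init__.py.
# If any backend's dependency is missing (here, llama_cpp), the
# whole program dies at import time, even when inference_engine
# is set to default or openai.
for file_name in os.listdir("src/llms"):
    if file_name.endswith(".py") and not file_name.startswith("__"):
        module_name = file_name[:-3]
        module = importlib.import_module(f"src.llms.{module_name}")
        LLM_Types[module_name] = module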

Pathos14489 commented 8 months ago

There's a bat file included for installing llama-cpp-python correctly for your Windows install. If you're on Linux and want to use llama-cpp-python, you can find the install instructions here: https://github.com/abetlen/llama-cpp-python

However, I'll add a fix so it's not required unless you're actually trying to use llama-cpp-python. Thanks for the heads up!

Edit: https://github.com/art-from-the-machine/Mantella/commit/4f1d1de797a27d67fcf84b2cfdf91aca49393983 Let me know if this works. <3
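
A hedged sketch of the kind of change that would address this, assuming the eager loader sketched above; the linked commit may do it differently. Wrapping each backend import in a try/except lets the loader skip engines whose optional dependencies aren't installed, instead of crashing:

import importlib
import os

LLM_Types = {}

for file_name in os.listdir("src/llms"):
    if file_name.endswith(".py") and not file_name.startswith("__"):
        module_name = file_name[:-3]
        try:
            module = importlib.import_module(f"src.llms.{module_name}")
        except ImportError as e:
            # e.g. llama_cpp isn't installed: skip this backend rather
            # than taking the whole program down with it.
            print(f"Skipping inference engine '{module_name}': {e}")
            continue
        LLM_Types[module_name] = module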

jonathonbarton commented 8 months ago

Both batch files open and close instantaneously. I made the code skip the llama-cpp-python module by adding __ to the beginning of its filename (so it masquerades as an init file or whatever), expecting the openai one to get picked up instead... and that didn't work either; it errored out in the same way. So I __'ed the openai version to JUST leave default... and that gave me a slightly different error message:

  File "C:\Projects\MantellaPathos\main.py", line 2, in <module>
    import src.conversation_manager as cm
  File "C:\Projects\MantellaPathos\src\conversation_manager.py", line 11, in <module>
    import src.language_model as language_models
  File "C:\Projects\MantellaPathos\src\language_model.py", line 17, in <module>
    LLM_Types["default"] = LLM_Types[default]

And finally... w/r/t llama_cpp and the batch files to 'correctly install' llama_cpp: I already have a local LLM that suits me just fine, and I don't feel like I need another one. :-)

Pathos14489 commented 8 months ago

The openai inference engine is the default; if you changed that reference, there's nothing left for it to default to.
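
That matches the second traceback: "default" appears to be an alias resolved against the registry after all the backends load, so renaming openai.py away leaves nothing for the alias to point at. A minimal sketch, assuming default is read from config.ini as "openai":

default = "openai"

# Filled by the loader; empty here because openai.py was renamed to
# __openai.py and therefore never imported.
LLM_Types = {}

# This is src/language_model.py line 17 from the traceback. With no
# "openai" entry in the registry, the lookup raises a KeyError.
LLM_Types["default"] = LLM_Types[default]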