OpenInterpreter / 01

The #1 open-source voice interface for desktop, mobile, and ESP32 chips.
https://01.openinterpreter.com/
GNU Affero General Public License v3.0

Selecting a model when choosing LM-studio #283

Open audiojak opened 3 months ago

audiojak commented 3 months ago

Describe the bug: I can't select a different local model (e.g. "LM Studio Community/Meta-Llama-3-8B-Instruct-GGUF") when I choose LM-Studio on the CLI at start time; it defaults to "gpt-4".

I tried using the --model CLI flag, but it didn't seem to be respected.

I think an option needs to be added here: https://github.com/OpenInterpreter/01/blob/main/software/source/server/utils/local_mode.py
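To illustrate the suggestion, here is a minimal sketch of the kind of change that could go into local_mode.py. The function name `select_model` and its signature are hypothetical, invented for this example; the only detail taken from the issue is that the code currently falls back to a hardcoded "gpt-4" instead of honoring a user-supplied model:

```python
from typing import Optional

# Current behavior described in the issue: the model is effectively
# hardcoded to "gpt-4" regardless of what the user passes on the CLI.
DEFAULT_MODEL = "gpt-4"

def select_model(cli_model: Optional[str]) -> str:
    """Return the model name to send to the local backend.

    If the user supplied --model on the command line, respect it;
    otherwise fall back to the existing default. (Hypothetical helper,
    not the actual 01 implementation.)
    """
    if cli_model:
        return cli_model
    return DEFAULT_MODEL

# With a change like this, the CLI flag would win:
model = select_model("LM Studio Community/Meta-Llama-3-8B-Instruct-GGUF")
```

The key point is simply that the user-supplied value must be threaded through to wherever the request to LM Studio is constructed, rather than being overwritten by the default.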

To Reproduce Steps to reproduce the behavior:

  1. Start with this command: poetry run 01 --local --model "LM Studio Community/Meta-Llama-3-8B-Instruct-GGUF"
  2. Choose LM-Studio
  3. Hit space to record.
  4. An error in LM Studio says that model "gpt-4" is not available

Expected behavior: I would like to be able to choose a model interactively or specify one on the command line.

Desktop (please complete the following information): Mac (Apple M2)