OpenInterpreter / open-interpreter

A natural language interface for computers
http://openinterpreter.com/
GNU Affero General Public License v3.0

Jan.ai local model selection looks broken #1177

Open jayma777 opened 8 months ago

jayma777 commented 8 months ago

Describe the bug

When attempting to run "interpreter --local" and choosing Jan.ai as the LLM provider, the model selection step crashes the interpreter.

LM Studio runs as expected. (I assume that's because it doesn't ask for a model.)

Reproduce


The model is running and is accessible via curl:

(interpreter) [j@host interpreter]$ curl -s http://localhost:1337/v1/models | jq -r '.data[] | .id' | grep openhermes
openhermes-neural-7b

Run interpreter --local:

(interpreter) [j@host interpreter]$ interpreter --local
/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/pydantic/_internal/_fields.py:151: UserWarning: Field "model_id" has conflict with protected namespace "model_".

You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
  warnings.warn(

▌ Open Interpreter is compatible with several local model providers.

[?] What one would you like to use?:
   Llamafile
   Ollama
   LM Studio
 > Jan

To use Open Interpreter with Jan, you will need to run Jan in the background.

1 Download Jan from https://jan.ai/, then start it.

2 Select a language model from the "Hub" tab, then click Download.

3 Copy the ID of the model and enter it below.

4 Click the Local API Server button in the bottom left, then click Start Server.

Once the server is running, enter the id of the model below, and then you can begin your conversation.

[?] Enter the id of the model you have running on Jan: openhermes-neural-7b

Using Jan model: openhermes-neural-7b

hi
Traceback (most recent call last): (Full Traceback in "Additional Context")
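
For what it's worth, the same failure can be reproduced at the litellm level without Open Interpreter. Here is a minimal sketch, assuming Jan's OpenAI-compatible server is at http://localhost:1337/v1, that "dummy" is just a placeholder key, and that the "openai/" prefix is litellm's documented way of targeting a generic OpenAI-compatible endpoint:

import litellm

messages = [{"role": "user", "content": "hi"}]

try:
    # A bare model id, as the Jan flow passes it, gives litellm no way to infer a provider.
    litellm.completion(
        model="openhermes-neural-7b",
        messages=messages,
        api_base="http://localhost:1337/v1",
        api_key="dummy",
    )
except Exception as e:
    print(e)  # "LLM Provider NOT provided. ..."

# With a provider prefix, the same request should reach Jan's local server.
response = litellm.completion(
    model="openai/openhermes-neural-7b",
    messages=messages,
    api_base="http://localhost:1337/v1",
    api_key="dummy",
)
print(response.choices[0].message.content)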

Expected behavior

Jan.ai connection to work? :)

Screenshots

No response

Open Interpreter version

0.2.4

Python version

Python 3.11.8

Operating System name and version

Arch Linux: 6.8.2-arch2-1

Additional context

Full Traceback:

Traceback (most recent call last):                                                                                                            
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/litellm/main.py", line 646, in completion                               
    model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
                                                            ^^^^^^^^^^^^^^^^^                                                                 
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/litellm/utils.py", line 5759, in get_llm_provider                       
    raise e                                                                                                                                   
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/litellm/utils.py", line 5746, in get_llm_provider                       
    raise litellm.exceptions.BadRequestError(  # type: ignore                                                                                 
litellm.exceptions.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=openhermes-neural-7b
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers

During handling of the above exception, another exception occurred:                                                                                                                                                                                                                         

Traceback (most recent call last):             
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/interpreter/core/llm/llm.py", line 234, in fixed_litellm_completions    
    yield from litellm.completion(**params)                                                                                                   
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                                                   
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/litellm/utils.py", line 2942, in wrapper                                
    raise e                                                                                                                                                                                                                                                                                 
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/litellm/utils.py", line 2840, in wrapper                                                                                                                                                                              
    result = original_function(*args, **kwargs)                                                                                                                                                                                                                                             
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                                               
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/litellm/main.py", line 2109, in completion                                                                                                                                                                            
    raise exception_type(                                                                                                                                                                                                                                                                   
          ^^^^^^^^^^^^^^^                                                                                                                     
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/litellm/utils.py", line 8435, in exception_type                                                                                                                                                                       
    raise e                                                                                                                                   
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/litellm/utils.py", line 8403, in exception_type
    raise APIConnectionError(                                                                                                                                                                                                                                                               
litellm.exceptions.APIConnectionError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=openhermes-neural-7b
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers

During handling of the above exception, another exception occurred:                                
Traceback (most recent call last):                                                                                                                                                                                                                                                          
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/interpreter/core/respond.py", line 69, in respond                                                                                                                                                                     
    for chunk in interpreter.llm.run(messages_for_llm):                                                                                                                                                                                                                                     
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/interpreter/core/llm/llm.py", line 206, in run                                                                                                                                                                        
    yield from run_text_llm(self, params)                                                                                                                                                                                                                                                   
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/interpreter/core/llm/run_text_llm.py", line 19, in run_text_llm                                                                                                                                                       
    for chunk in llm.completions(**params):                                                                                                                                                                                                                                                 
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/interpreter/core/llm/llm.py", line 237, in fixed_litellm_completions                                                                                                                                                  
    raise first_error                                                                                                                                                                                                                                                                       
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/interpreter/core/llm/llm.py", line 218, in fixed_litellm_completions
    yield from litellm.completion(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/litellm/utils.py", line 2942, in wrapper
    raise e
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/litellm/utils.py", line 2840, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/litellm/main.py", line 2109, in completion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/litellm/utils.py", line 8435, in exception_type
    raise e
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/litellm/utils.py", line 8403, in exception_type
    raise APIConnectionError(
litellm.exceptions.APIConnectionError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=openhermes-neural-7b
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers

/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/pydantic/_internal/_fields.py:151: UserWarning: Field "model_id" has conflict with protected namespace "model_".

You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
  warnings.warn(
[IPKernelApp] WARNING | Parent appears to have exited, shutting down.
[IPKernelApp] WARNING | Parent appears to have exited, shutting down.

        Python Version: 3.11.8
        Pip Version: 24.0
        Open-interpreter Version: cmd: Open Interpreter 0.2.4 New Computer Update
, pkg: 0.2.4
        OS Version and Architecture: Linux-6.8.2-arch2-1-x86_64-with-glibc2.39
        CPU Info: 
        RAM Info: 62.45 GB, used: 4.00, free: 27.06

        # Interpreter Info

        Vision: False
        Model: openhermes-neural-7b
        Function calling: None
        Context window: 3000
        Max tokens: 1000

        Auto run: False
        API base: http://localhost:1337/v1
        Offline: True

        Curl output: [Errno 2] No such file or directory: 'curl http://localhost:1337/v1'

        # Messages

        System Message: You are Open Interpreter, a world-class programmer that can execute code on the user's machine.

        {'role': 'user', 'type': 'message', 'content': 'hi'}

Traceback (most recent call last):
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/litellm/main.py", line 646, in completion
    model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
                                                            ^^^^^^^^^^^^^^^^^
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/litellm/utils.py", line 5759, in get_llm_provider
    raise e
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/litellm/utils.py", line 5746, in get_llm_provider
    raise litellm.exceptions.BadRequestError(  # type: ignore
litellm.exceptions.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=openhermes-neural-7b
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/interpreter/core/llm/llm.py", line 234, in fixed_litellm_completions
    yield from litellm.completion(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/litellm/utils.py", line 2942, in wrapper
    raise e
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/litellm/utils.py", line 2840, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/litellm/main.py", line 2109, in completion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/litellm/utils.py", line 8435, in exception_type
    raise e
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/litellm/utils.py", line 8403, in exception_type
    raise APIConnectionError(
litellm.exceptions.APIConnectionError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=openhermes-neural-7b
Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/interpreter/core/respond.py", line 69, in respond
    for chunk in interpreter.llm.run(messages_for_llm):
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/interpreter/core/llm/llm.py", line 206, in run
    yield from run_text_llm(self, params)
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/interpreter/core/llm/run_text_llm.py", line 19, in run_text_llm
    for chunk in llm.completions(**params):
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/interpreter/core/llm/llm.py", line 237, in fixed_litellm_completions
    raise first_error
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/interpreter/core/llm/llm.py", line 218, in fixed_litellm_completions
    yield from litellm.completion(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/litellm/utils.py", line 2942, in wrapper
    raise e
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/litellm/utils.py", line 2840, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/litellm/main.py", line 2109, in completion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/litellm/utils.py", line 8435, in exception_type
    raise e
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/litellm/utils.py", line 8403, in exception_type
    raise APIConnectionError(
litellm.exceptions.APIConnectionError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=openhermes-neural-7b
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/j/.virtualenvs/interpreter/bin/interpreter", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 437, in main
    start_terminal_interface(interpreter)
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 415, in start_terminal_interface
    interpreter.chat()
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/interpreter/core/core.py", line 167, in chat
    for _ in self._streaming_chat(message=message, display=display):
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/interpreter/core/core.py", line 196, in _streaming_chat
    yield from terminal_interface(self, message)
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/interpreter/terminal_interface/terminal_interface.py", line 136, in terminal_interface
    for chunk in interpreter.chat(message, display=False, stream=True): 
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/interpreter/core/core.py", line 235, in _streaming_chat
    yield from self._respond_and_store()
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/interpreter/core/core.py", line 281, in _respond_and_store
    for chunk in respond(self):
  File "/home/j/.virtualenvs/interpreter/lib/python3.11/site-packages/interpreter/core/respond.py", line 115, in respond
    raise Exception(
Exception: Error occurred. LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=openhermes-neural-7b
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers
(interpreter) [j@host interpreter]$ [IPKernelApp] WARNING | Parent appears to have exited, shutting down.
[IPKernelApp] WARNING | Parent appears to have exited, shutting down.
MikeBirdTech commented 7 months ago

Can you please try running pip install --upgrade litellm

jayma777 commented 7 months ago

Short Reply:
Did command. No Joy.

Slightly longer reply: Everything is already the latest version according to pip. Got the same error.

Much Longer Reply:

(venv) user@host [] $: pip install --upgrade litellm
Requirement already satisfied: litellm in ./venv/lib/python3.11/site-packages (1.35.1)
Requirement already satisfied: aiohttp in ./venv/lib/python3.11/site-packages (from litellm) (3.9.4)
Requirement already satisfied: click in ./venv/lib/python3.11/site-packages (from litellm) (8.1.7)
Requirement already satisfied: importlib-metadata>=6.8.0 in ./venv/lib/python3.11/site-packages (from litellm) (7.0.0)
Requirement already satisfied: jinja2<4.0.0,>=3.1.2 in ./venv/lib/python3.11/site-packages (from litellm) (3.1.3)
Requirement already satisfied: openai>=1.0.0 in ./venv/lib/python3.11/site-packages (from litellm) (1.17.0)
Requirement already satisfied: python-dotenv>=0.2.0 in ./venv/lib/python3.11/site-packages (from litellm) (1.0.1)
Requirement already satisfied: requests<3.0.0,>=2.31.0 in ./venv/lib/python3.11/site-packages (from litellm) (2.31.0)
Requirement already satisfied: tiktoken>=0.4.0 in ./venv/lib/python3.11/site-packages (from litellm) (0.5.2)
Requirement already satisfied: tokenizers in ./venv/lib/python3.11/site-packages (from litellm) (0.15.2)
Requirement already satisfied: zipp>=0.5 in ./venv/lib/python3.11/site-packages (from importlib-metadata>=6.8.0->litellm) (3.18.1)
Requirement already satisfied: MarkupSafe>=2.0 in ./venv/lib/python3.11/site-packages (from jinja2<4.0.0,>=3.1.2->litellm) (2.1.5)
Requirement already satisfied: anyio<5,>=3.5.0 in ./venv/lib/python3.11/site-packages (from openai>=1.0.0->litellm) (4.3.0)
Requirement already satisfied: distro<2,>=1.7.0 in ./venv/lib/python3.11/site-packages (from openai>=1.0.0->litellm) (1.9.0)
Requirement already satisfied: httpx<1,>=0.23.0 in ./venv/lib/python3.11/site-packages (from openai>=1.0.0->litellm) (0.27.0)
Requirement already satisfied: pydantic<3,>=1.9.0 in ./venv/lib/python3.11/site-packages (from openai>=1.0.0->litellm) (2.7.0)
Requirement already satisfied: sniffio in ./venv/lib/python3.11/site-packages (from openai>=1.0.0->litellm) (1.3.1)
Requirement already satisfied: tqdm>4 in ./venv/lib/python3.11/site-packages (from openai>=1.0.0->litellm) (4.66.2)
Requirement already satisfied: typing-extensions<5,>=4.7 in ./venv/lib/python3.11/site-packages (from openai>=1.0.0->litellm) (4.9.0)
Requirement already satisfied: charset-normalizer<4,>=2 in ./venv/lib/python3.11/site-packages (from requests<3.0.0,>=2.31.0->litellm) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in ./venv/lib/python3.11/site-packages (from requests<3.0.0,>=2.31.0->litellm) (3.6)
Requirement already satisfied: urllib3<3,>=1.21.1 in ./venv/lib/python3.11/site-packages (from requests<3.0.0,>=2.31.0->litellm) (1.26.18)
Requirement already satisfied: certifi>=2017.4.17 in ./venv/lib/python3.11/site-packages (from requests<3.0.0,>=2.31.0->litellm) (2024.2.2)
Requirement already satisfied: regex>=2022.1.18 in ./venv/lib/python3.11/site-packages (from tiktoken>=0.4.0->litellm) (2023.12.25)
Requirement already satisfied: aiosignal>=1.1.2 in ./venv/lib/python3.11/site-packages (from aiohttp->litellm) (1.3.1)
Requirement already satisfied: attrs>=17.3.0 in ./venv/lib/python3.11/site-packages (from aiohttp->litellm) (23.2.0)
Requirement already satisfied: frozenlist>=1.1.1 in ./venv/lib/python3.11/site-packages (from aiohttp->litellm) (1.4.1)
Requirement already satisfied: multidict<7.0,>=4.5 in ./venv/lib/python3.11/site-packages (from aiohttp->litellm) (6.0.5)
Requirement already satisfied: yarl<2.0,>=1.0 in ./venv/lib/python3.11/site-packages (from aiohttp->litellm) (1.9.4)
Requirement already satisfied: huggingface_hub<1.0,>=0.16.4 in ./venv/lib/python3.11/site-packages (from tokenizers->litellm) (0.22.2)
Requirement already satisfied: httpcore==1.* in ./venv/lib/python3.11/site-packages (from httpx<1,>=0.23.0->openai>=1.0.0->litellm) (1.0.5)
Requirement already satisfied: h11<0.15,>=0.13 in ./venv/lib/python3.11/site-packages (from httpcore==1.*->httpx<1,>=0.23.0->openai>=1.0.0->litellm) (0.14.0)
Requirement already satisfied: filelock in ./venv/lib/python3.11/site-packages (from huggingface_hub<1.0,>=0.16.4->tokenizers->litellm) (3.13.4)
Requirement already satisfied: fsspec>=2023.5.0 in ./venv/lib/python3.11/site-packages (from huggingface_hub<1.0,>=0.16.4->tokenizers->litellm) (2024.3.1)
Requirement already satisfied: packaging>=20.9 in ./venv/lib/python3.11/site-packages (from huggingface_hub<1.0,>=0.16.4->tokenizers->litellm) (23.2)
Requirement already satisfied: pyyaml>=5.1 in ./venv/lib/python3.11/site-packages (from huggingface_hub<1.0,>=0.16.4->tokenizers->litellm) (6.0.1)
Requirement already satisfied: annotated-types>=0.4.0 in ./venv/lib/python3.11/site-packages (from pydantic<3,>=1.9.0->openai>=1.0.0->litellm) (0.6.0)
Requirement already satisfied: pydantic-core==2.18.1 in ./venv/lib/python3.11/site-packages (from pydantic<3,>=1.9.0->openai>=1.0.0->litellm) (2.18.1)
(venv) user@host [] $:
(venv) user@host [] $: interpreter --local

▌ Open Interpreter is compatible with several local model providers.

[?] What one would you like to use?:
   Llamafile
   Ollama
   LM Studio
 > Jan

To use Open Interpreter with Jan, you will need to run Jan in the background.

1 Download Jan from https://jan.ai/, then start it.

2 Select a language model from the "Hub" tab, then click Download.

3 Copy the ID of the model and enter it below.

4 Click the Local API Server button in the bottom left, then click Start Server.

Once the server is running, enter the id of the model below, and then you can begin your conversation.

[?] Enter the id of the model you have running on Jan: mixtral-8x7b-32768

Using Jan model: mixtral-8x7b-32768

hi
Traceback (most recent call last):
  File "/home/j/venv/lib/python3.11/site-packages/litellm/main.py", line 659, in completion
    model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
                                                            ^^^^^^^^^^^^^^^^^
  File "/home/j/venv/lib/python3.11/site-packages/litellm/utils.py", line 5853, in get_llm_provider
    raise e
  File "/home/j/venv/lib/python3.11/site-packages/litellm/utils.py", line 5840, in get_llm_provider
    raise litellm.exceptions.BadRequestError(  # type: ignore
litellm.exceptions.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=mixtral-8x7b-32768
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/j/venv/lib/python3.11/site-packages/interpreter/core/llm/llm.py", line 234, in fixed_litellm_completions
    yield from litellm.completion(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/j/venv/lib/python3.11/site-packages/litellm/utils.py", line 2944, in wrapper
    raise e
  File "/home/j/venv/lib/python3.11/site-packages/litellm/utils.py", line 2842, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/j/venv/lib/python3.11/site-packages/litellm/main.py", line 2127, in completion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "/home/j/venv/lib/python3.11/site-packages/litellm/utils.py", line 8539, in exception_type
    raise e
  File "/home/j/venv/lib/python3.11/site-packages/litellm/utils.py", line 8507, in exception_type
    raise APIConnectionError(
litellm.exceptions.APIConnectionError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=mixtral-8x7b-32768
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/j/venv/lib/python3.11/site-packages/interpreter/core/respond.py", line 69, in respond
    for chunk in interpreter.llm.run(messages_for_llm):
  File "/home/j/venv/lib/python3.11/site-packages/interpreter/core/llm/llm.py", line 206, in run
    yield from run_text_llm(self, params)
  File "/home/j/venv/lib/python3.11/site-packages/interpreter/core/llm/run_text_llm.py", line 19, in run_text_llm
    for chunk in llm.completions(**params):
  File "/home/j/venv/lib/python3.11/site-packages/interpreter/core/llm/llm.py", line 237, in fixed_litellm_completions
    raise first_error
  File "/home/j/venv/lib/python3.11/site-packages/interpreter/core/llm/llm.py", line 218, in fixed_litellm_completions
    yield from litellm.completion(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/j/venv/lib/python3.11/site-packages/litellm/utils.py", line 2944, in wrapper
    raise e
  File "/home/j/venv/lib/python3.11/site-packages/litellm/utils.py", line 2842, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/j/venv/lib/python3.11/site-packages/litellm/main.py", line 2127, in completion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "/home/j/venv/lib/python3.11/site-packages/litellm/utils.py", line 8539, in exception_type
    raise e
  File "/home/j/venv/lib/python3.11/site-packages/litellm/utils.py", line 8507, in exception_type
    raise APIConnectionError(
litellm.exceptions.APIConnectionError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=mixtral-8x7b-32768
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers

Running the Jan 0.4.10 AppImage on Arch.

jayma777 commented 7 months ago

Ok... I got it to mostly work... (By following the documentation, and not just fumbling around like a chimp)

The process is a bit sub-optimal.

You need to add --api_key 'something' to the command line (in addition to the documented flags).
No big deal; it just needs a slight change in the docs.

I run: interpreter --local --api_key "nunya" --api_base http://localhost:1337/v1 --model mixtral-8x7b-32768

It then asks which local provider I'd like to use.
[?] What one would you like to use?:
   Llamafile
   Ollama
   LM Studio
 > Jan

It then asks which model I'd like to use (which is already given on the command line):

[?] Enter the id of the model you have running on Jan: mixtral-8x7b-32768

Using Jan model: mixtral-8x7b-32768

Everything works after that.
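
For anyone using the Python API instead of the CLI, the same workaround presumably maps onto the documented settings roughly like this. A sketch I have not verified: "nunya" is only a placeholder key, and I'm assuming the "openai/" prefix is what makes litellm treat Jan as a generic OpenAI-compatible endpoint.

from interpreter import interpreter

interpreter.offline = True                             # stay fully local
interpreter.llm.model = "openai/mixtral-8x7b-32768"    # provider prefix + Jan model id
interpreter.llm.api_key = "nunya"                      # placeholder value, as in the CLI workaround
interpreter.llm.api_base = "http://localhost:1337/v1"  # Jan's Local API Server

interpreter.chat("hi")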

IMHO, since it already has the info it needs from the command line, it should probably just use that info? :)

In any event, feel free to close, or set as enhancement, whatever. I'm good to go here.