OpenInterpreter / open-interpreter

A natural language interface for computers
http://openinterpreter.com/
GNU Affero General Public License v3.0

Problem using Ollama #1412

MikeBirdTech opened this issue 3 weeks ago

MikeBirdTech commented 3 weeks ago

Describe the bug

When trying to use Ollama to power OI (via the CLI or the Python library), it errors out before I am able to send a prompt.
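Via the Python library, the setup that hits the same error is roughly the following (a minimal sketch; the model name and api_base reflect my local setup, and the attribute names are the documented Open Interpreter settings):

from interpreter import interpreter

# Point OI at the local Ollama server (default port 11434)
interpreter.llm.model = "ollama/llama3.1"
interpreter.llm.api_base = "http://localhost:11434"
interpreter.offline = True  # local model, no hosted defaults

interpreter.chat("ping")  # errors out before the prompt is answered

The CLI run produces: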

Loading model...
Loading llama3.1...

Traceback (most recent call last):
  File "/Users/mike/Library/Python/3.11/bin/interpreter", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/Users/mike/Library/Python/3.11/lib/python/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 586, in main
    start_terminal_interface(interpreter)
  File "/Users/mike/Library/Python/3.11/lib/python/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 463, in start_terminal_interface
    interpreter = profile(
                  ^^^^^^^^
  File "/Users/mike/Library/Python/3.11/lib/python/site-packages/interpreter/terminal_interface/profiles/profiles.py", line 65, in profile
    return apply_profile(interpreter, profile, profile_path)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/mike/Library/Python/3.11/lib/python/site-packages/interpreter/terminal_interface/profiles/profiles.py", line 149, in apply_profile
    exec(profile["start_script"], scope, scope)
  File "<string>", line 1, in <module>
  File "/Users/mike/Library/Python/3.11/lib/python/site-packages/interpreter/core/core.py", line 145, in local_setup
    self = local_setup(self)
           ^^^^^^^^^^^^^^^^^
  File "/Users/mike/Library/Python/3.11/lib/python/site-packages/interpreter/terminal_interface/local_setup.py", line 305, in local_setup
    interpreter.computer.ai.chat("ping")
  File "/Users/mike/Library/Python/3.11/lib/python/site-packages/interpreter/core/computer/ai/ai.py", line 130, in chat
    for chunk in self.computer.interpreter.llm.run(messages):
  File "/Users/mike/Library/Python/3.11/lib/python/site-packages/interpreter/core/llm/llm.py", line 70, in run
    self.load()
  File "/Users/mike/Library/Python/3.11/lib/python/site-packages/interpreter/core/llm/llm.py", line 358, in load
    self.interpreter.computer.ai.chat("ping")
  File "/Users/mike/Library/Python/3.11/lib/python/site-packages/interpreter/core/computer/ai/ai.py", line 130, in chat
    for chunk in self.computer.interpreter.llm.run(messages):
  File "/Users/mike/Library/Python/3.11/lib/python/site-packages/interpreter/core/llm/llm.py", line 291, in run
    yield from run_tool_calling_llm(self, params)
  File "/Users/mike/Library/Python/3.11/lib/python/site-packages/interpreter/core/llm/run_tool_calling_llm.py", line 177, in run_tool_calling_llm
    for chunk in llm.completions(**request_params):
  File "/Users/mike/Library/Python/3.11/lib/python/site-packages/interpreter/core/llm/llm.py", line 420, in fixed_litellm_completions
    raise first_error  # If all attempts fail, raise the first error
    ^^^^^^^^^^^^^^^^^
  File "/Users/mike/Library/Python/3.11/lib/python/site-packages/interpreter/core/llm/llm.py", line 400, in fixed_litellm_completions
    yield from litellm.completion(**params)
  File "/Users/mike/Library/Python/3.11/lib/python/site-packages/litellm/llms/ollama.py", line 370, in ollama_completion_stream
    raise e
  File "/Users/mike/Library/Python/3.11/lib/python/site-packages/litellm/llms/ollama.py", line 348, in ollama_completion_stream
    function_call = json.loads(response_content)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
               ^^^^^^^^^^^^^^^^^^^^^^
json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)
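For what it's worth, the bottom frame is litellm's Ollama streaming handler calling json.loads on the streamed response content to extract a function call. A chunk that ends mid-string reproduces the exact message (a minimal sketch, independent of Ollama):

import json

# A function-call payload cut off mid-string, e.g. the start of '{"name": ...'
json.loads('{"name')
# json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)

So the failure appears to be in parsing llama3.1's partial (or non-JSON) function-call output, not in Ollama itself.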

Reproduce

Run interpreter and pick Ollama with llama3.1 in the local setup flow (or start it as above); it crashes while loading the model, before any prompt can be sent.

Expected behavior

No error; the model loads and I can send a prompt.

Screenshots

No response

Open Interpreter version

Open Interpreter 0.3.7 The Beginning (Ty and Victor)

Python version

Python 3.11.7

Operating System name and version

macOS

Additional context

ollama version is 0.3.6

Elunir commented 3 weeks ago

Try gemma2; it will work.

norzog commented 3 weeks ago

I am having the same error. Switching to gemma2 solved it for me, but I'd still like to know if anyone has a way to get llama3.1 working.
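For anyone else hitting this, the switch is just the model name (assuming gemma2 has already been pulled locally):

ollama pull gemma2
interpreter --model ollama/gemma2 --api_base "http://localhost:11434"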

bgrablin commented 3 weeks ago

Inform Open Interpreter that the language model you're using does not support function calling by passing --no-llm_supports_functions:

interpreter --api_base "http://localhost:11434" --model ollama/llama3.1 --no-llm_supports_functions

Also, make sure the version of Ollama that you're using supports llama3.1. You might have to update.
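If you are driving OI from Python instead of the CLI, the equivalent setting (per the Open Interpreter docs, if I remember them correctly) is:

# Python-API equivalent of --no-llm_supports_functions
from interpreter import interpreter
interpreter.llm.supports_functions = False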