OpenInterpreter / open-interpreter

A natural language interface for computers
http://openinterpreter.com/

[llama-3.1 70B] Open Interpreter's prep steps did not complete after setting the model #1371

Open mickitty0511 opened 4 months ago

mickitty0511 commented 4 months ago

Describe the bug

While following your official doc about using Ollama models, I tried using llama 3.1 with Open Interpreter. However, errors occurred during the prep steps that run after the model is set. I would like a detailed resolution or an explanation of what happened in my case. I hope some developers can reproduce this error and then tell me what is going on.

Reproduce

Follow your official doc

Used the command given in the doc (see the note after these steps)

Then, Open Interpreter asked me if I wanted a new profile file. I answered n.
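(Note: the exact command isn't quoted above; judging from the official doc and the commands shown later in this thread, it was presumably something like interpreter --model ollama/llama3.1.)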

Then the error is as follows.

[2024-07-30T03:56:01Z ERROR cached_path::cache] ETAG fetch for https://huggingface.co/llama3.1/resolve/main/tokenizer.json failed with fatal error Traceback (most recent call last):

json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)

Expected behavior

I suppose it should complete the prep steps, based on what I read in your official docs.

Screenshots

No response

Open Interpreter version

0.3.4

Python version

3.11.5

Operating System name and version

Windows 11

Additional context

No response

uthpala1000 commented 4 months ago

got the same

GuHugo95 commented 3 months ago

same too

ViperGash commented 3 months ago

same.

GuHugo95 commented 3 months ago

It seems llama3.1 isn't supported.

GuHugo95 commented 3 months ago

Maybe you can run ollama run llama3 and then use interpreter --model ollama/llama3 instead.

CyanideByte commented 3 months ago

This PR should fix this issue. https://github.com/OpenInterpreter/open-interpreter/pull/1400

leafarilongamor commented 3 months ago

I'm still facing this issue on Windows 11, even running the latest OI, Ollama and Llama 3.1 versions.

PS C:\Users\User> interpreter --version
Open Interpreter 0.3.7 The Beginning (Ty and Victor)
PS C:\Users\User> ollama --version
ollama version is 0.3.6
PS C:\Users\User> interpreter --model ollama/llama3.1

▌ Model set to ollama/llama3.1

Loading llama3.1...

Traceback (most recent call last):
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\Scripts\interpreter.exe\__main__.py", line 7, in <module>
    sys.exit(main())
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 586, in main
    start_terminal_interface(interpreter)
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 540, in start_terminal_interface
    validate_llm_settings(
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\interpreter\terminal_interface\validate_llm_settings.py", line 110, in validate_llm_settings
    interpreter.llm.load()
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\interpreter\core\llm\llm.py", line 358, in load
    self.interpreter.computer.ai.chat("ping")
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\interpreter\core\computer\ai\ai.py", line 130, in chat
    for chunk in self.computer.interpreter.llm.run(messages):
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\interpreter\core\llm\llm.py", line 291, in run
    yield from run_tool_calling_llm(self, params)
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\interpreter\core\llm\run_tool_calling_llm.py", line 177, in run_tool_calling_llm
    for chunk in llm.completions(**request_params):
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\interpreter\core\llm\llm.py", line 420, in fixed_litellm_completions
    raise first_error  # If all attempts fail, raise the first error
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\interpreter\core\llm\llm.py", line 400, in fixed_litellm_completions
    yield from litellm.completion(**params)
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\litellm\llms\ollama.py", line 370, in ollama_completion_stream
    raise e
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\litellm\llms\ollama.py", line 348, in ollama_completion_stream
    function_call = json.loads(response_content)
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\json\decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)
PS C:\Users\User>

I'm not sure what I'm doing wrong.
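For anyone digging into this: the traceback shows litellm's Ollama streaming handler (ollama_completion_stream) calling json.loads on the streamed response content and failing because that content is not complete, valid JSON on its own. Below is a minimal sketch of the same failure mode, using a hypothetical truncated chunk in place of whatever payload was actually streamed:

import json

# Hypothetical truncated chunk, standing in for the streamed tool-call payload;
# it is simply the smallest input that raises the exact error reported above.
partial_chunk = '{"name'

try:
    json.loads(partial_chunk)
except json.JSONDecodeError as err:
    print(err)  # Unterminated string starting at: line 1 column 2 (char 1)

(This only illustrates the exception, not a claim about the exact payload llama 3.1 streams.)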

MikeBirdTech commented 3 months ago

@mickitty0511 @leafarilongamor

I brought this up internally and it's being worked on! Thanks for raising the issue

UltraInstinct0x commented 3 months ago

Hi @MikeBirdTech, the same issue occurs on macOS as well.

goku@192 ~ % interpreter --version
Open Interpreter 0.3.7 The Beginning (Ty and Victor)
goku@192 ~ % ollama -v
ollama version is 0.3.6

I am on macOS Version 15.0 Beta (24A5309e) if that makes any difference for you. Best!

wa008 commented 4 weeks ago

same issue

interpreter --version
Open Interpreter 0.4.3 Developer Preview

ollama -v
ollama version is 0.3.13

omarnahdi commented 3 weeks ago

@MikeBirdTech @leafarilongamor Did y'all get the fix? I'm trying to load llama 3.2 and I'm getting the same error.

CyanideByte commented 3 weeks ago

This will be fixed with the merge of this PR: https://github.com/OpenInterpreter/open-interpreter/pull/1524

If you want to try it early, you can install it like this:

pip install --upgrade --force-reinstall git+https://github.com/CyanideByte/open-interpreter.git@local-fixes