OpenInterpreter / open-interpreter

A natural language interface for computers
http://openinterpreter.com/
GNU Affero General Public License v3.0

Crashes when using -lsv with LLaVA #1153


crslim commented 8 months ago

Describe the bug

Crashes when using `-m ollama/LLaVA -lsv`; works fine without the `-lsv` parameter.

Reproduce

1. Run `interpreter -m ollama/llava -lsv`
2. Ask for a visual description

Expected behavior

No crash; the model should return a description of the image.

Screenshots

PS C:\Windows\system32> interpreter -lsv -y -m ollama/LLaVA

▌ A new version of Open Interpreter is available.

▌ Please run: pip install --upgrade open-interpreter

────────────────────────────────────────────────────────────

Describe image C:\image_for_describe.jpg

    Python Version: 3.12.2
    Pip Version: 24.0
    Open-interpreter Version: cmd:Interpreter, pkg: 0.2.0
    OS Version and Architecture: Windows-11-10.0.22631-SP0
    CPU Info: Intel64 Family 6 Model 158 Stepping 10, GenuineIntel
    RAM Info: 31.80 GB, used: 11.54, free: 20.26

    # Interpreter Info

    Vision: True
    Model: ollama/LLaVA
    Function calling: None
    Context window: None
    Max tokens: None

    Auto run: True
    API base: None
    Offline: False

    Curl output: Not local

    # Messages

    System Message: You are Open Interpreter, a world-class programmer that can complete any goal by executing code.

First, write a plan. Always recap the plan between each code block (you have extreme short-term memory loss, so you need to recap the plan between each message block to retain it). When you execute code, it will be executed on the user's machine. The user has given you full and complete permission to execute any code necessary to complete the task. Execute the code. If you want to send data between programming languages, save the data to a txt or json. You can access the internet. Run any code to achieve the goal, and if at first you don't succeed, try again and again. You can install new packages. When a user refers to a filename, they're likely referring to an existing file in the directory you're currently executing code in. Write messages to the user in Markdown. In general, try to make plans with as few steps as possible. As for actually executing code to carry out that plan, for stateful languages (like python, javascript, shell, but NOT for html which starts from 0 every time) it's critical not to try to do everything in one code block. You should try something, print information about it, then continue from there in tiny, informed steps. You will never get it on the first try, and attempting it in one go will often lead to errors you cant see. You are capable of any task.

    {'role': 'user', 'type': 'message', 'content': 'Describe image C:\\image_for_describe.jpg'}

{'role': 'user', 'type': 'image', 'format': 'path', 'content': 'C:\image_for_describe.jpg'}

```
Traceback (most recent call last):
  File "C:\Program Files\Python312\Lib\site-packages\interpreter\core\llm\llm.py", line 221, in fixed_litellm_completions
    yield from litellm.completion(**params)
  File "C:\Program Files\Python312\Lib\site-packages\litellm\llms\ollama.py", line 260, in ollama_completion_stream
    raise e
  File "C:\Program Files\Python312\Lib\site-packages\litellm\llms\ollama.py", line 248, in ollama_completion_stream
    status_code=response.status_code, message=response.text
                                              ^^^^^^^^^^^^^
  File "C:\Program Files\Python312\Lib\site-packages\httpx\_models.py", line 576, in text
    content = self.content
              ^^^^^^^^^^^^
  File "C:\Program Files\Python312\Lib\site-packages\httpx\_models.py", line 570, in content
    raise ResponseNotRead()
httpx.ResponseNotRead: Attempted to access streaming response content, without having called read().
```

During handling of the above exception, another exception occurred:

```
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Program Files\Python312\Scripts\interpreter.exe\__main__.py", line 7, in <module>
  File "C:\Program Files\Python312\Lib\site-packages\interpreter\core\core.py", line 25, in start_terminal_interface
    start_terminal_interface(self)
  File "C:\Program Files\Python312\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 684, in start_terminal_interface
    interpreter.chat()
  File "C:\Program Files\Python312\Lib\site-packages\interpreter\core\core.py", line 86, in chat
    for _ in self._streaming_chat(message=message, display=display):
  File "C:\Program Files\Python312\Lib\site-packages\interpreter\core\core.py", line 113, in _streaming_chat
    yield from terminal_interface(self, message)
  File "C:\Program Files\Python312\Lib\site-packages\interpreter\terminal_interface\terminal_interface.py", line 135, in terminal_interface
    for chunk in interpreter.chat(message, display=False, stream=True):
  File "C:\Program Files\Python312\Lib\site-packages\interpreter\core\core.py", line 148, in _streaming_chat
    yield from self._respond_and_store()
  File "C:\Program Files\Python312\Lib\site-packages\interpreter\core\core.py", line 194, in _respond_and_store
    for chunk in respond(self):
  File "C:\Program Files\Python312\Lib\site-packages\interpreter\core\respond.py", line 49, in respond
    for chunk in interpreter.llm.run(messages_for_llm):
  File "C:\Program Files\Python312\Lib\site-packages\interpreter\core\llm\llm.py", line 193, in run
    yield from run_text_llm(self, params)
  File "C:\Program Files\Python312\Lib\site-packages\interpreter\core\llm\run_text_llm.py", line 19, in run_text_llm
    for chunk in llm.completions(**params):
  File "C:\Program Files\Python312\Lib\site-packages\interpreter\core\llm\llm.py", line 224, in fixed_litellm_completions
    raise first_error
  File "C:\Program Files\Python312\Lib\site-packages\interpreter\core\llm\llm.py", line 205, in fixed_litellm_completions
    yield from litellm.completion(**params)
  File "C:\Program Files\Python312\Lib\site-packages\litellm\llms\ollama.py", line 260, in ollama_completion_stream
    raise e
  File "C:\Program Files\Python312\Lib\site-packages\litellm\llms\ollama.py", line 248, in ollama_completion_stream
    status_code=response.status_code, message=response.text
                                              ^^^^^^^^^^^^^
  File "C:\Program Files\Python312\Lib\site-packages\httpx\_models.py", line 576, in text
    content = self.content
              ^^^^^^^^^^^^
  File "C:\Program Files\Python312\Lib\site-packages\httpx\_models.py", line 570, in content
    raise ResponseNotRead()
httpx.ResponseNotRead: Attempted to access streaming response content, without having called read().
```
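The last three frames show where this comes from: litellm's `ollama_completion_stream` error handler touches `response.text` on a streaming httpx response before the body has been read, which httpx forbids. A minimal sketch of that httpx behavior (the URL and payload here are illustrative, assuming a local Ollama on its default port; any streaming request behaves the same):

```python
import httpx

# Streaming responses in httpx do not load their body automatically;
# accessing .text before .read() raises ResponseNotRead, exactly as in
# the traceback above.
with httpx.Client() as client:
    with client.stream(
        "POST",
        "http://localhost:11434/api/generate",
        json={"model": "llava", "prompt": "hi"},
    ) as response:
        try:
            print(response.text)  # raises httpx.ResponseNotRead
        except httpx.ResponseNotRead:
            response.read()       # explicitly load the streamed body first
            print(response.text)  # now safe to access
```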

Open Interpreter version

0.2.0

Python version

3.12.2

Operating System name and version

Windows 11 Home, 64-bit

Additional context

No response

Notnaton commented 8 months ago

I don't think Python 3.12 is supported yet. Please downgrade to 3.11.
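(If it helps: on Windows with the Python launcher, something like `py -3.11 -m venv oi-311`, then `oi-311\Scripts\activate` followed by `pip install open-interpreter`, gives a clean 3.11 environment to test in; the environment name is just illustrative.)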

Notnaton commented 8 months ago

~~What is the -lsv flag? I can't find it in the docs.~~

Never mind, try `-lsv true`.

crslim commented 8 months ago

I got this error: `interpreter: error: unrecognized arguments: true`

Same error on Python 3.11.

elhakimz commented 6 months ago

Same error for me: Python 3.11.5, Windows.

z82134359 commented 6 months ago

Try a smaller model. I have also encountered this issue; it seems to be related to the model's response speed, and the error is reported when generation is slow.

(Answer translated by software)

crslim commented 6 months ago

> Try a smaller model. I have also encountered this issue; it seems to be related to the model's response speed, and the error is reported when generation is slow.
>
> (Answer translated by software)

What model (for Ollama or LM Studio) do you recommend?

crslim commented 6 months ago

It seems that you are having a problem with an httpx.ResponseNotRead exception when trying to access the contents of a streaming response without having called the read() method. This error occurs when you try to get the content of an HTTP response before the response has been completely read. Here is a step-by-step plan to solve this problem:

1. Verify the response: make sure the HTTP response you are trying to read is a streaming response and is ready to be read.
2. Call the read() method: before accessing the response's content, call read() on the response object so the body is fully loaded.
3. Handle exceptions: implement proper exception handling to catch and deal with any errors that occur while reading the response.

Here is an example of how you could modify the code to handle this error:

```python
import httpx

try:
    with httpx.Client() as client:
        # Use a streaming request; a streamed body is not loaded automatically
        with client.stream("GET", "your_url_here") as response:
            # Make sure to call read() before accessing the content
            response.read()
            content = response.text
            # Process content here
except httpx.ResponseNotRead as e:
    print(f"Error reading response: {e}")
```

AI-generated code. Review and use carefully. This is a general example; you will need to adapt it to your specific situation.
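That said, the traceback points at litellm's error path rather than user code: when Ollama returns a non-200 status on a streaming request, `ollama_completion_stream` builds its error message from `response.text` without reading the stream first. A hedged sketch of what a defensive version of that path could look like (illustrative only, not the actual litellm code; `OllamaError` here is a stand-in for whatever exception the library raises):

```python
import httpx

class OllamaError(Exception):
    """Stand-in for the exception litellm raises on a failed Ollama call."""
    def __init__(self, status_code: int, message: str):
        self.status_code = status_code
        super().__init__(message)

def raise_for_stream_error(response: httpx.Response) -> None:
    """Turn a failed streaming response into an error without tripping
    httpx.ResponseNotRead: read the body before touching .text."""
    if response.status_code != 200:
        response.read()  # load the streamed body so .text is safe to access
        raise OllamaError(status_code=response.status_code, message=response.text)
```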