Open shouryan01 opened 9 months ago
I think a fix for this has been merged and should be available in the next update. If you want to test it now, you can install the latest directly from the GitHub repo (keep in mind it can break):
pip install git+https://github.com/KillianLucas/open-interpreter.git
Hi @shouryan01, can you please try with the latest version of OI?
Describe the bug
To confirm the model works, I asked it to write a haiku.
Next, to test the project, I asked it for a Python script for hello world.
It asked me if I wanted to execute the script, to which I replied yes. This is when it failed. Here is the complete output:
First, write a plan. Always recap the plan between each code block (you have extreme short-term memory loss, so you need to recap the plan between each message block to retain it). When you execute code, it will be executed on the user's machine. The user has given you full and complete permission to execute any code necessary to complete the task. Execute the code. If you want to send data between programming languages, save the data to a txt or json. You can access the internet. Run any code to achieve the goal, and if at first you don't succeed, try again and again. You can install new packages. When a user refers to a filename, they're likely referring to an existing file in the directory you're currently executing code in. Write messages to the user in Markdown. In general, try to make plans with as few steps as possible. As for actually executing code to carry out that plan, for stateful languages (like python, javascript, shell, but NOT for html which starts from 0 every time) it's critical not to try to do everything in one code block. You should try something, print information about it, then continue from there in tiny, informed steps. You will never get it on the first try, and attempting it in one go will often lead to errors you cant see. You are capable of any task.
{'role': 'assistant', 'type': 'message', 'content': 'Sure! Here\'s a Python script that prints "Hello, World!" to the console:\n\n```python\nprint("Hello, World!")\n```\n\nWould you like me to execute this script for you?'}
{'role': 'user', 'type': 'message', 'content': 'Yes'}
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\[redacted]\Personal Projects\openinterpreter\Scripts\interpreter.exe\__main__.py", line 7, in <module>
  File "C:\Users\[redacted]\Personal Projects\openinterpreter\Lib\site-packages\interpreter\core\core.py", line 25, in start_terminal_interface
    start_terminal_interface(self)
  File "C:\Users\[redacted]\Personal Projects\openinterpreter\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 684, in start_terminal_interface
    interpreter.chat()
  File "C:\Users\[redacted]\Personal Projects\openinterpreter\Lib\site-packages\interpreter\core\core.py", line 86, in chat
    for _ in self._streaming_chat(message=message, display=display):
  File "C:\Users\[redacted]\Personal Projects\openinterpreter\Lib\site-packages\interpreter\core\core.py", line 113, in _streaming_chat
    yield from terminal_interface(self, message)
  File "C:\Users\[redacted]\Personal Projects\openinterpreter\Lib\site-packages\interpreter\terminal_interface\terminal_interface.py", line 135, in terminal_interface
    for chunk in interpreter.chat(message, display=False, stream=True):
  File "C:\Users\[redacted]\Personal Projects\openinterpreter\Lib\site-packages\interpreter\core\core.py", line 148, in _streaming_chat
    yield from self._respond_and_store()
  File "C:\Users\[redacted]\Personal Projects\openinterpreter\Lib\site-packages\interpreter\core\core.py", line 194, in _respond_and_store
    for chunk in respond(self):
  File "C:\Users\[redacted]\Personal Projects\openinterpreter\Lib\site-packages\interpreter\core\respond.py", line 49, in respond
    for chunk in interpreter.llm.run(messages_for_llm):
  File "C:\Users\[redacted]\Personal Projects\openinterpreter\Lib\site-packages\interpreter\core\llm\llm.py", line 191, in run
    yield from run_function_calling_llm(self, params)
  File "C:\Users\[redacted]\Personal Projects\openinterpreter\Lib\site-packages\interpreter\core\llm\run_function_calling_llm.py", line 66, in run_function_calling_llm
    arguments = parse_partial_json(arguments)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\[redacted]\Personal Projects\openinterpreter\Lib\site-packages\interpreter\core\llm\utils\parse_partial_json.py", line 8, in parse_partial_json
    return json.loads(s)
           ^^^^^^^^^^^^^
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.12_3.12.752.0_x64__qbz5n2kfra8p0\Lib\json\__init__.py", line 339, in loads
    raise TypeError(f'the JSON object must be str, bytes or bytearray, '
TypeError: the JSON object must be str, bytes or bytearray, not NoneType
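For context, the final frames show json.loads being handed None: run_function_calling_llm passes the streamed tool-call arguments to parse_partial_json, and with this Azure deployment the arguments payload is apparently still None when that happens. Below is a minimal sketch of a None guard, assuming the rest of parse_partial_json is unchanged (the real function also repairs truncated JSON, elided here; this illustrates the failure mode, not the merged fix):

```python
import json

def parse_partial_json(s):
    # The crash above: json.loads(None) raises
    # "TypeError: the JSON object must be str, bytes or bytearray, not NoneType".
    # Guard against a function call whose streamed "arguments" is still None.
    if s is None:
        return None
    try:
        return json.loads(s)
    except json.JSONDecodeError:
        # Partial/truncated-JSON repair would go here (elided in this sketch).
        return None
```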
Reproduce
Command run:
interpreter --model azure/gpt-35-turbo-16k --temperature 0.11 --context_window 16000 --llm_supports_functions
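For reference, roughly the same configuration through the Python API (a sketch; the interpreter.llm attribute names are assumed from the 0.2.0 docs and may differ):

```python
from interpreter import interpreter

# Assumed 0.2.0-era attribute names, mirroring the CLI flags above.
interpreter.llm.model = "azure/gpt-35-turbo-16k"
interpreter.llm.temperature = 0.11
interpreter.llm.context_window = 16000
interpreter.llm.supports_functions = True  # --llm_supports_functions

interpreter.chat("Write a python script for hello world")
```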
Model version: 0613 (which, according to OpenAI, supports function calling).
Expected behavior
It should run the print() statement and output "Hello, World!".
Screenshots
No response
Open Interpreter version
0.2.0
Python version
Tested on: Python 3.11.8 and Python 3.12.2
Operating System name and version
Windows 10 Enterprise
Additional context
No response