Is your feature request related to a problem? Please describe.
When running my model with vLLM via `interpreter --api_base https://xxxxxxxxxxxx-8000.proxy.runpod.net/v1 --model casperhansen/mixtral-instruct-awq`, I get the following error when I execute a command:
```
System Message: You are Open Interpreter, a world-class programmer that can complete any goal by executing code.
First, write a plan. Always recap the plan between each code block (you have extreme short-term memory loss, so you need to recap the plan between each message block to retain it).
When you execute code, it will be executed on the user's machine. The user has given you full and complete permission to execute any code necessary to complete the task. Execute the code.
If you want to send data between programming languages, save the data to a txt or json.
You can access the internet. Run any code to achieve the goal, and if at first you don't succeed, try again and again.
You can install new packages.
When a user refers to a filename, they're likely referring to an existing file in the directory you're currently executing code in.
Write messages to the user in Markdown.
In general, try to make plans with as few steps as possible. As for actually executing code to carry out that plan, for stateful languages (like python, javascript, shell, but NOT for html which starts from 0 every time) it's critical not to try to do everything in one code block. You should try something, print information about it, then continue from there in tiny, informed steps. You will never get it on the first try, and attempting it in one go will often lead to errors you cant see.
You are capable of **any** task.

{'role': 'user', 'type': 'message', 'content': 'list all files in root folder'}
```
```
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/interpreter/core/llm/llm.py", line 221, in fixed_litellm_completions
    yield from litellm.completion(**params)
TypeError: 'NoneType' object is not iterable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/bin/interpreter", line 8, in <module>
    sys.exit(interpreter.start_terminal_interface())
  File "/usr/local/lib/python3.10/dist-packages/interpreter/core/core.py", line 25, in start_terminal_interface
    start_terminal_interface(self)
  File "/usr/local/lib/python3.10/dist-packages/interpreter/terminal_interface/start_terminal_interface.py", line 684, in start_terminal_interface
    interpreter.chat()
  File "/usr/local/lib/python3.10/dist-packages/interpreter/core/core.py", line 86, in chat
    for _ in self._streaming_chat(message=message, display=display):
  File "/usr/local/lib/python3.10/dist-packages/interpreter/core/core.py", line 113, in _streaming_chat
    yield from terminal_interface(self, message)
  File "/usr/local/lib/python3.10/dist-packages/interpreter/terminal_interface/terminal_interface.py", line 135, in terminal_interface
    for chunk in interpreter.chat(message, display=False, stream=True):
  File "/usr/local/lib/python3.10/dist-packages/interpreter/core/core.py", line 148, in _streaming_chat
    yield from self._respond_and_store()
  File "/usr/local/lib/python3.10/dist-packages/interpreter/core/core.py", line 194, in _respond_and_store
    for chunk in respond(self):
  File "/usr/local/lib/python3.10/dist-packages/interpreter/core/respond.py", line 49, in respond
    for chunk in interpreter.llm.run(messages_for_llm):
  File "/usr/local/lib/python3.10/dist-packages/interpreter/core/llm/llm.py", line 193, in run
    yield from run_text_llm(self, params)
  File "/usr/local/lib/python3.10/dist-packages/interpreter/core/llm/run_text_llm.py", line 19, in run_text_llm
    for chunk in llm.completions(params):
  File "/usr/local/lib/python3.10/dist-packages/interpreter/core/llm/llm.py", line 224, in fixed_litellm_completions
    raise first_error
  File "/usr/local/lib/python3.10/dist-packages/interpreter/core/llm/llm.py", line 205, in fixed_litellm_completions
    yield from litellm.completion(**params)
TypeError: 'NoneType' object is not iterable
```
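For context, the error itself is easy to reproduce in isolation: `yield from` raises exactly this `TypeError` when the object it delegates to is `None`, which suggests the completion call returned `None` instead of a stream. This is a hypothetical minimal sketch of the failure mode, not Open Interpreter's actual code:

```python
def fake_completion():
    # Stand-in for a provider call that returns None instead of a
    # generator (e.g. when the backend/model isn't recognized).
    return None

def stream_chunks():
    # Delegating to None raises the TypeError seen in the traceback.
    yield from fake_completion()

try:
    list(stream_chunks())
except TypeError as e:
    print(e)  # 'NoneType' object is not iterable
```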
Describe the solution you'd like
I thought that because you support LM Studio you would also support vLLM, since it exposes an OpenAI-compatible API, but it seems you don't support it.
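For what it's worth, vLLM's server really is OpenAI-compatible, so in principle only the provider routing needs to know about it. Below is a minimal sketch of what a direct litellm call against such an endpoint could look like. The URL is the redacted placeholder from above, the API key value is arbitrary (vLLM's server doesn't require a real one by default), and the `openai/` model prefix is my assumption about how litellm is told to treat a server as a generic OpenAI-compatible backend:

```python
# Request parameters for a vLLM OpenAI-compatible endpoint (sketch only;
# URL and model are the placeholders from this report, not verified values).
params = {
    "model": "openai/casperhansen/mixtral-instruct-awq",
    "api_base": "https://xxxxxxxxxxxx-8000.proxy.runpod.net/v1",
    "api_key": "EMPTY",  # vLLM accepts any key unless one is configured
    "messages": [{"role": "user", "content": "list all files in root folder"}],
    "stream": True,
}

# With a live endpoint one would then call:
# import litellm
# for chunk in litellm.completion(**params):
#     print(chunk)
```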
Describe alternatives you've considered
It would be nice to add support for vLLM.
Additional context
No response