OpenInterpreter / open-interpreter

A natural language interface for computers
http://openinterpreter.com/
GNU Affero General Public License v3.0

Exception when given the id of a fine-tuned model from OpenAI #388

Closed jovasque156 closed 11 months ago

jovasque156 commented 1 year ago

Describe the bug

I attempted to use a model fine-tuned from gpt-3.5-turbo that is associated with my OpenAI API key. After setting interpreter.model = 'id_of_my_fine-tuned_model', a ValueError is raised when the interpreter tries to make a regular OpenAI call.

Below is the complete error message. I have removed the id of my fine-tuned model and replaced it with the string id_fine-tuned_model.

I'm not certain whether this project supports fine-tuned OpenAI models. If it doesn't, please consider re-labelling this issue as an enhancement and potentially adding this feature in the future.

Thank you so much for this project! It's amazing!

---------------------------------------------------------------------------
File ~/EvoAcademy/OpenInterpreter/interpreter-openai-env/lib/python3.10/site-packages/interpreter/interpreter.py:385, in Interpreter.chat(self, message, return_messages)
    382 if message:
    383   # If it was, we respond non-interactivley
    384   self.messages.append({"role": "user", "content": message})
--> 385   self.respond()
    387 else:
    388   # If it wasn't, we start an interactive chat
    389   while True:

File ~/EvoAcademy/OpenInterpreter/interpreter-openai-env/lib/python3.10/site-packages/interpreter/interpreter.py:633, in Interpreter.respond(self)
    631         time.sleep(3)
    632   else:
--> 633     raise Exception(error)
    635 elif self.local:
    636   # Code-Llama
    637 
   (...)
    640   # Convert messages to prompt
    641   # (This only works if the first message is the only system message)
    643   def messages_to_prompt(messages):

Exception: Traceback (most recent call last):
  File "/Users/jonathanvasquezverdugo/EvoAcademy/OpenInterpreter/interpreter-openai-env/lib/python3.10/site-packages/interpreter/interpreter.py", line 618, in respond
    response = litellm.completion(
  File "/Users/jonathanvasquezverdugo/EvoAcademy/OpenInterpreter/interpreter-openai-env/lib/python3.10/site-packages/litellm/utils.py", line 599, in wrapper
    raise e
  File "/Users/jonathanvasquezverdugo/EvoAcademy/OpenInterpreter/interpreter-openai-env/lib/python3.10/site-packages/litellm/utils.py", line 559, in wrapper
    result = original_function(*args, **kwargs)
  File "/Users/jonathanvasquezverdugo/EvoAcademy/OpenInterpreter/interpreter-openai-env/lib/python3.10/site-packages/litellm/timeout.py", line 44, in wrapper
    result = future.result(timeout=local_timeout_duration)
  File "/usr/local/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/concurrent/futures/_base.py", line 458, in result
    return self.__get_result()
  File "/usr/local/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/Users/jonathanvasquezverdugo/EvoAcademy/OpenInterpreter/interpreter-openai-env/lib/python3.10/site-packages/litellm/timeout.py", line 33, in async_func
    return func(*args, **kwargs)
  File "/Users/jonathanvasquezverdugo/EvoAcademy/OpenInterpreter/interpreter-openai-env/lib/python3.10/site-packages/litellm/main.py", line 865, in completion
    raise exception_type(
  File "/Users/jonathanvasquezverdugo/EvoAcademy/OpenInterpreter/interpreter-openai-env/lib/python3.10/site-packages/litellm/utils.py", line 2091, in exception_type
    raise original_exception
  File "/Users/jonathanvasquezverdugo/EvoAcademy/OpenInterpreter/interpreter-openai-env/lib/python3.10/site-packages/litellm/utils.py", line 2071, in exception_type
    raise original_exception
  File "/Users/jonathanvasquezverdugo/EvoAcademy/OpenInterpreter/interpreter-openai-env/lib/python3.10/site-packages/litellm/main.py", line 172, in completion
    get_llm_provider(model=model, custom_llm_provider=custom_llm_provider)
  File "/Users/jonathanvasquezverdugo/EvoAcademy/OpenInterpreter/interpreter-openai-env/lib/python3.10/site-packages/litellm/utils.py", line 1000, in get_llm_provider
    raise e
  File "/Users/jonathanvasquezverdugo/EvoAcademy/OpenInterpreter/interpreter-openai-env/lib/python3.10/site-packages/litellm/utils.py", line 997, in get_llm_provider
    raise ValueError(f"LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/{model}',..)` Learn more: https://docs.litellm.ai/docs/providers")
ValueError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/id_fine-tuned_model',..)` Learn more: https://docs.litellm.ai/docs/providers

Reproduce

  1. Set model:

    interpreter.model = 'id_of_your_finetuned-model'

    If you are fine-tuning gpt-3.5-turbo, the id should look something like this: 'ft:gpt-3.5-turbo-0613:org:sufx:1aa1a1AAA'

  2. Call the chat method with an instruction (a combined sketch follows this list):

    interpreter.chat('Please print hello world.')
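
For completeness, here are the two steps above combined into one minimal script. The model id is a placeholder for your own fine-tuned model, and the OpenAI API key is assumed to already be set via the OPENAI_API_KEY environment variable:

    import interpreter

    # Placeholder: replace with the id of your fine-tuned model,
    # e.g. 'ft:gpt-3.5-turbo-0613:org:sufx:1aa1a1AAA'
    interpreter.model = 'id_of_your_finetuned-model'

    # This call ends in the ValueError shown in the traceback above
    interpreter.chat('Please print hello world.')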

Expected behavior

Similar to the example in your Colab, but with the tone of my fine-tuned model.

Screenshots

No response

Open Interpreter version

0.1.4

Python version

3.10.13

Operating System name and version

macOS Ventura 13.5.1

Additional context

No response

ishaan-jaff commented 1 year ago

@jovasque156 can you try setting model="ft:gpt-3.5-turbo:my-org:custom_suffix:id", like this?

litellm expects fine-tuned gpt-3.5 models to be passed in that format: https://docs.litellm.ai/docs/tutorials/finetuned_chat_gpt
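
For reference, a direct litellm call in that format would look roughly like this (the org, suffix, and id are placeholders):

    import litellm

    # Placeholder fine-tuned model id in the documented ft:gpt-3.5-turbo:... format
    response = litellm.completion(
        model="ft:gpt-3.5-turbo:my-org:custom_suffix:id",
        messages=[{"role": "user", "content": "Please print hello world."}],
    )
    print(response["choices"][0]["message"]["content"])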

jovasque156 commented 1 year ago

I tried, but still got the error.

My fine-tuned model is based on gpt-3.5-turbo-0613, so its id has an extra -0613, like this: ft:gpt-3.5-turbo-0613:my-org:custom_suffix:id. That's the only difference from the documented format.

Could that be the issue?
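
To narrow it down, here is a quick check of whether litellm itself accepts the -0613 variant outside of Open Interpreter (the id below is a placeholder for my real model id):

    import litellm

    # Placeholder: fine-tuned model based on gpt-3.5-turbo-0613
    model_id = "ft:gpt-3.5-turbo-0613:my-org:custom_suffix:id"

    try:
        litellm.completion(
            model=model_id,
            messages=[{"role": "user", "content": "Please print hello world."}],
        )
    except ValueError as e:
        # If this raises the same "LLM Provider NOT provided" error,
        # the problem is in litellm's provider detection, not in Open Interpreter
        print(e)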

ericrallen commented 11 months ago

I’m going to close this one as stale for now, but feel encouraged to reopen it if there is more to discuss or the issue is still present in the latest version.