OpenInterpreter / open-interpreter

A natural language interface for computers
http://openinterpreter.com/
GNU Affero General Public License v3.0

Can't use any gemini model #1039

Status: Open · FelipeLujan opened this issue 7 months ago

FelipeLujan commented 7 months ago

Describe the bug

Hi, trying to use any Gemini model returns the error below.


Python Version: 3.10.12
        Pip Version: 23.1.2
        Open-interpreter Version: cmd:Interpreter, pkg: 0.2.0
        OS Version and Architecture: Linux-6.1.58+-x86_64-with-glibc2.35
        CPU Info: x86_64
        RAM Info: 12.67 GB, used: 0.73, free: 8.87

        # Interpreter Info

        Vision: False
        Model: gemini/gemini-pro
        Function calling: None
        Context window: None
        Max tokens: None

        Auto run: True
        API base: None
        Offline: False

        Curl output: Not local

        # Messages

        System Message: You are Open Interpreter, a world-class programmer that can complete any goal by executing code.
First, write a plan. **Always recap the plan between each code block** (you have extreme short-term memory loss, so you need to recap the plan between each message block to retain it).
When you execute code, it will be executed **on the user's machine**. The user has given you **full and complete permission** to execute any code necessary to complete the task. Execute the code.
If you want to send data between programming languages, save the data to a txt or json.
You can access the internet. Run **any code** to achieve the goal, and if at first you don't succeed, try again and again.
You can install new packages.
When a user refers to a filename, they're likely referring to an existing file in the directory you're currently executing code in.
Write messages to the user in Markdown.
In general, try to **make plans** with as few steps as possible. As for actually executing code to carry out that plan, for *stateful* languages (like python, javascript, shell, but NOT for html which starts from 0 every time) **it's critical not to try to do everything in one code block.** You should try something, print information about it, then continue from there in tiny, informed steps. You will never get it on the first try, and attempting it in one go will often lead to errors you cant see.
You are capable of **any** task.

{'role': 'user', 'type': 'message', 'content': 'Please print hello world.'}

---------------------------------------------------------------------------
ParseError                                Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/google/protobuf/json_format.py in _ConvertFieldValuePair(self, js, message, path)
    543             continue
--> 544           raise ParseError(
    545               ('Message type "{0}" has no field named "{1}" at "{2}".\n'

ParseError: Message type "google.ai.generativelanguage.v1beta.GenerateContentResponse" has no field named "error" at "GenerateContentResponse".
 Available Fields(except extensions): "['candidates', 'promptFeedback']"

During handling of the above exception, another exception occurred:

ParseError                                Traceback (most recent call last)
37 frames
ParseError: Message type "google.ai.generativelanguage.v1beta.GenerateContentResponse" has no field named "error" at "GenerateContentResponse".
 Available Fields(except extensions): "['candidates', 'promptFeedback']"

During handling of the above exception, another exception occurred:

BadRequest                                Traceback (most recent call last)
BadRequest: 400 Unknown error trying to retrieve streaming response. Please retry with `stream=False` for more details.

During handling of the above exception, another exception occurred:

GeminiError                               Traceback (most recent call last)
GeminiError: 400 Unknown error trying to retrieve streaming response. Please retry with `stream=False` for more details.

During handling of the above exception, another exception occurred:

APIConnectionError                        Traceback (most recent call last)
APIConnectionError: 400 Unknown error trying to retrieve streaming response. Please retry with `stream=False` for more details.

During handling of the above exception, another exception occurred:

APIConnectionError                        Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/litellm/utils.py in exception_type(model, original_exception, custom_llm_provider, completion_kwargs)
   7601             exception_mapping_worked = True
   7602             if hasattr(original_exception, "request"):
-> 7603                 raise APIConnectionError(
   7604                     message=f"{str(original_exception)}",
   7605                     llm_provider=custom_llm_provider,

APIConnectionError: 400 Unknown error trying to retrieve streaming response. Please retry with `stream=False` for more details.
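The chain bottoms out in a protobuf ParseError: the Gemini client tries to parse an error payload into `GenerateContentResponse`, which has no `error` field, so the real 400 body is lost and litellm re-raises it as an opaque `APIConnectionError`. As the final message suggests, retrying with `stream=False` should surface the underlying error. A minimal sketch of that check, calling litellm (the library Open Interpreter uses under the hood) directly; the `gemini_kwargs` helper and the `GEMINI_API_KEY` variable are illustrative assumptions, not part of Open Interpreter:

```python
import os

def gemini_kwargs(prompt: str, stream: bool = False) -> dict:
    """Build litellm.completion kwargs for a Gemini call.

    stream=False is the workaround the error message suggests:
    the non-streaming path returns the real 400 error body
    instead of the opaque "Unknown error" above.
    """
    return {
        "model": "gemini/gemini-pro",
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
        "api_key": os.environ.get("GEMINI_API_KEY"),  # AI Studio key
    }

# Usage (requires litellm and a valid key):
#   import litellm
#   resp = litellm.completion(**gemini_kwargs("Please print hello world."))
#   print(resp.choices[0].message.content)
```

If the non-streaming call also fails, the exception should now contain the actual error message from the Gemini API instead of "Unknown error trying to retrieve streaming response."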

Reproduce

  1. Use the Google Colab notebook linked in the README file.

  2. Set interpreter.llm.model = "gemini/gemini-pro"

  3. Set interpreter.llm.api_key = "abc123" (an AI Studio API key)

  4. Authenticate with

    from google.colab import auth
    auth.authenticate_user(project_id="my_gcp_project")
  5. Run interpreter.chat("Please print hello world.")
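For reference, the steps above can be run as a single Colab cell (a sketch assuming open-interpreter 0.2.0; "abc123" and the project ID are the placeholders from the steps, not working values — this is Colab-only configuration and needs a real key to reproduce):

```python
# Colab-only repro of steps 1-5 (open-interpreter 0.2.0).
from interpreter import interpreter
from google.colab import auth

interpreter.llm.model = "gemini/gemini-pro"
interpreter.llm.api_key = "abc123"  # AI Studio API key (placeholder)

# GCP authentication for the Colab runtime.
auth.authenticate_user(project_id="my_gcp_project")

# Raises the APIConnectionError chain shown above.
interpreter.chat("Please print hello world.")
```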

Expected behavior

"hello world" is printed to the console.

Screenshots

No response

Open Interpreter version

0.2.0

Python version

3.10.12

Operating System name and version

Linux-6.1.58+-x86_64-with-glibc2.35

Additional context

No response

kripper commented 7 months ago

Inference is now working, but see: https://github.com/KillianLucas/open-interpreter/issues/1050