Closed — xxshubhamxx closed this issue 11 months ago
@BeibinLi
Yes, there is a draft PR integrating Gemini into AutoGen. See: https://github.com/microsoft/autogen/pull/979
Thanks, but I tried using the notebook given at https://github.com/microsoft/autogen/blob/7265ef1a3fb194e1d3d7345e47e4dc9a30ecd6fd/notebook/agentchat_gemini.ipynb but the UserProxyAgent still seems to require the OpenAI key. Without the user proxy agent, I am unable to initiate the chat. Is there any way we can use autogen only with gemini's api key?
@xxshubhamxx Got it! Sorry for misunderstanding your question. Because this PR is still under draft, you would need to install AutoGen with the draft version. I just tested the notebook again, and it works okay.
In terminal, type the following to fetch the AutoGen repository, checkout to the Gemini branch, and install AutoGen from the source code.
git fetch
git checkout gemini
pip install -e .
The above installation should resolve your issue. If you still encounter the "OpenAI key" error, can you try setting the environment variable OPENAI_API_KEY to some placeholder value?
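A minimal sketch of that workaround. The placeholder value is an assumption for illustration; it is never actually sent to OpenAI, since the Gemini entries in the config carry their own keys — it only exists to satisfy the client's initialization check:

```python
import os

# Set a placeholder OpenAI key so client initialization does not fail;
# the value is never used when only Gemini configs are selected.
os.environ.setdefault("OPENAI_API_KEY", "sk-placeholder-not-a-real-key")

print(os.environ["OPENAI_API_KEY"])
```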
No issues, I had tried installing that particular draft PR earlier. Now that I have tried the gemini branch, I am getting a different error; it's not working:
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_7324/1050056662.py in <module>
----> 1 assistant = AssistantAgent("assistant",
      2     llm_config={"config_list": config_list_gemini, "seed": 42},
      3     max_consecutive_auto_reply=3)
      4 # print(assistant.system_message)
      5

D:\Sem 7\TRYING_AUTOGEN\autogen\autogen\agentchat\assistant_agent.py in __init__(self, name, system_message, llm_config, is_termination_msg, max_consecutive_auto_reply, human_input_mode, code_execution_config, description, **kwargs)
     59     ConversableAgent.
     60     """
---> 61     super().__init__(
     62         name,
     63         system_message,

D:\Sem 7\TRYING_AUTOGEN\autogen\autogen\agentchat\conversable_agent.py in __init__(self, name, system_message, is_termination_msg, max_consecutive_auto_reply, human_input_mode, function_map, code_execution_config, llm_config, default_auto_reply, description)
    125     if isinstance(llm_config, dict):
    126         self.llm_config.update(llm_config)
--> 127     self.client = OpenAIWrapper(**self.llm_config)
    128
    129     self._code_execution_config: Union[Dict, Literal[False]] = (

D:\Sem 7\TRYING_AUTOGEN\autogen\autogen\oai\client.py in __init__(self, config_list, **base_config)
     91     if config_list:
...
--> 155     client = GeminiClient(**openai_config)
    156 else:
    157     client = OpenAI(**openai_config)

TypeError: object() takes no arguments
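For what it's worth, `TypeError: object() takes no arguments` is the symptom you get when a class name has silently fallen back to bare `object`, which can happen behind an optional-import guard. The following is only a hypothetical sketch of that pattern (not the actual code in the branch), but it is consistent with the fix of installing the missing packages; `HAVE_GENAI` here simulates the missing dependency:

```python
# Hypothetical sketch: when an optional dependency fails to import, a
# guard like this can leave the client name bound to bare `object`, and
# calling it with keyword arguments then raises a TypeError.
HAVE_GENAI = False  # pretend google-generativeai is not installed

if HAVE_GENAI:
    class GeminiClient:
        def __init__(self, **config):
            self.config = config
else:
    GeminiClient = object  # fallback: calling it with kwargs fails

try:
    GeminiClient(api_key="AIz-xxxxx")
except TypeError as e:
    print(e)  # the "takes no arguments" error seen above
```

Installing the real dependency (see the pip command below) makes the import succeed, so the genuine client class is used instead of the `object` fallback.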
I copy-pasted the given sample OAI_CONFIG_LIST and only put in Gemini's API keys; here is what it looks like:
[
{
"model": "gpt-35-turbo",
"api_key": "your OpenAI Key goes here",
"base_url": "https://tnrllmproxy.azurewebsites.net/v1",
"api_version": "2023-06-01-preview"
},
{
"model": "gpt-4-vision-preview",
"api_key": "your OpenAI Key goes here",
"api_version": "2023-06-01-preview"
},
{
"model": "dalle",
"api_key": "your OpenAI Key goes here",
"api_version": "2023-06-01-preview"
},
{
"model": "gemini-pro",
"api_key": "AIz-xxxxx",
"api_type": "google"
},
{
"model": "gemini-pro-vision",
"api_key": "AIz-xxxxx",
"api_type": "google"
}
]
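That config file shape should be fine: the non-Gemini entries can stay as placeholders as long as the notebook filters the list down to the Gemini models before building agents. A minimal sketch of that filtering in plain Python (the inlined JSON is an abbreviated copy of the config above, for illustration only):

```python
import json

# Abbreviated copy of the sample OAI_CONFIG_LIST, inlined for illustration.
raw_config = """
[
  {"model": "gpt-35-turbo", "api_key": "your OpenAI Key goes here"},
  {"model": "gemini-pro", "api_key": "AIz-xxxxx", "api_type": "google"},
  {"model": "gemini-pro-vision", "api_key": "AIz-xxxxx", "api_type": "google"}
]
"""

config_list = json.loads(raw_config)

# Keep only entries whose api_type is "google", i.e. the Gemini models.
config_list_gemini = [c for c in config_list if c.get("api_type") == "google"]

print([c["model"] for c in config_list_gemini])
# → ['gemini-pro', 'gemini-pro-vision']
```

The resulting `config_list_gemini` is what gets passed to the agent's `llm_config`, so the placeholder OpenAI entries are never touched.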
Can you also install some potentially missing packages?
pip install "google-generativeai" "pydash" "pillow" "pydantic==1.10.13"
Thank you, this worked. Now I am able to use Autogen entirely with only Gemini's API.
@BeibinLi The gemini branch works fine for AssistantAgent but fails for GroupChat, with:
/gemini.py", line 210, in oai_messages_to_gemini_messages assert rst[-1].role == "user", "The last message must be from the user role."
Would it be possible to support GroupChat for Gemini? That would be great
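For background on that assertion: Gemini's chat API expects alternating "user"/"model" roles with the final message coming from the user, which is easy to violate in a GroupChat where several agents speak in a row. Below is a hedged sketch of the kind of conversion involved; `oai_to_gemini` is a hypothetical helper for illustration, not the branch's actual `oai_messages_to_gemini_messages`:

```python
def oai_to_gemini(messages):
    """Convert OpenAI-style messages to Gemini-style alternating turns.

    Hypothetical illustration: merges consecutive same-role messages and
    appends a final user turn if the history ends with a model message.
    """
    converted = []
    for m in messages:
        role = "user" if m["role"] in ("user", "system") else "model"
        if converted and converted[-1]["role"] == role:
            # Gemini requires alternation, so merge same-role runs.
            converted[-1]["parts"][0] += "\n" + m["content"]
        else:
            converted.append({"role": role, "parts": [m["content"]]})
    if converted and converted[-1]["role"] != "user":
        # Satisfy "The last message must be from the user role."
        converted.append({"role": "user", "parts": ["Please continue."]})
    return converted


history = [
    {"role": "user", "content": "Plan a trip."},
    {"role": "assistant", "content": "Day 1: ..."},
    {"role": "assistant", "content": "Day 2: ..."},
]
print(oai_to_gemini(history)[-1]["role"])  # → user
```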
@io-q See "near" the end of the notebook: https://github.com/microsoft/autogen/blob/gemini/notebook/agentchat_gemini.ipynb
Let me know if this resolves your issue.
I followed all the steps to install the autogen with gemini branch but still encountering an issue "openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable". I already had autogen installed locally, would I need to delete that and reinstall it again?
@selimhanerhan Possibly, yes. Can you try uninstalling all pyautogen packages and then reinstalling with the commands above? If the error still persists, can you try setting the OPENAI_API_KEY environment variable?
Autogen doesn't seem to be working on Google Colab. Am I missing something?
Code:
!git clone --single-branch --branch gemini https://github.com/microsoft/autogen
!pip install -e autogen
!pip install "google-generativeai" "pydash" "pillow" "pydantic"
import requests
import json
import pdb
import os
import re
from typing import Any, Callable, Dict, List, Optional, Tuple, Type, Union
import autogen
from autogen import AssistantAgent, Agent, UserProxyAgent, ConversableAgent
from autogen.agentchat.contrib.img_utils import get_image_data, _to_pil
from autogen.agentchat.contrib.multimodal_conversable_agent import MultimodalConversableAgent
from termcolor import colored
import random
from autogen.code_utils import DEFAULT_MODEL, UNKNOWN, content_str, execute_code, extract_code, infer_lang
Error:
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
[<ipython-input-12-f73761df9089>](https://localhost:8080/#) in <cell line: 10>()
8
9 import autogen
---> 10 from autogen import AssistantAgent, Agent, UserProxyAgent, ConversableAgent
11
12 from autogen.agentchat.contrib.img_utils import get_image_data, _to_pil
ImportError: cannot import name 'AssistantAgent' from 'autogen' (unknown location)
---------------------------------------------------------------------------
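One likely cause (an assumption, since Colab specifics vary): an editable `pip install -e` only becomes visible to the already-running interpreter after a runtime restart, so the freshly cloned package is not importable yet. Restarting the Colab runtime after the install usually fixes this; alternatively, putting the clone's root directory on `sys.path` works without a restart. Here is a self-contained demonstration of the `sys.path` mechanism, using a throwaway module named `mylocalpkg` instead of autogen:

```python
import os
import sys
import tempfile

# Create a throwaway module in a directory that is not on sys.path.
pkg_dir = tempfile.mkdtemp()
with open(os.path.join(pkg_dir, "mylocalpkg.py"), "w") as f:
    f.write("VALUE = 42\n")

try:
    import mylocalpkg  # fails: its directory is not on sys.path yet
except ImportError:
    pass

# After adding the directory, the import succeeds. The analogous trick
# for the clone above would be sys.path.insert(0, "/content/autogen").
sys.path.insert(0, pkg_dir)
import mylocalpkg

print(mylocalpkg.VALUE)  # → 42
```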
Is there a proper tutorial now? It is quite confusing because so many people have answered this.
Hi @xxshubhamxx ,
I was getting the exact same problem earlier, where it expected OpenAI's API too and I only wanted to use gemini-pro. I followed the exact same steps and also installed the libraries below: "google-generativeai" "pydash" "pillow" "pydantic==1.10.13".
Right now, I am getting the error below, even though I am not using OpenAI anywhere in my code:
AttributeError: 'OpenAI' object has no attribute 'call'.
It would be great if you could help a bit here.
Sure @hardik-goel and @FeelsDaumenMan
First of all, ensure that you have installed Python 3.9.0.
I recently got a new laptop, installed Git and Python 3.12.1 on it, and it didn't allow me to install the autogen library using pip install -e ., so I switched back to Python 3.9.0.
You can try other versions as well; it just didn't work at that particular time for an unknown reason, but it should be fine now.
Here is the list of commands I used:
git clone https://github.com/microsoft/autogen.git
cd autogen
git fetch
git checkout gemini
git pull origin gemini
python -m pip install -e .
python -m pip install "google-generativeai" "pydash" "pillow" "pydantic==1.10.13"
python -m pip install matplotlib
code .
These commands will set up the environment and open it in VSCode. After this, navigate to the notebook folder and create a file named OAI_CONFIG_LIST inside that directory, in the correct format like this:
[
{
"model": "gpt-35-turbo",
"api_key": "your OpenAI Key goes here",
"base_url": "https://tnrllmproxy.azurewebsites.net/v1",
"api_version": "2023-06-01-preview"
},
{
"model": "gpt-4-vision-preview",
"api_key": "your OpenAI Key goes here",
"api_version": "2023-06-01-preview"
},
{
"model": "dalle",
"api_key": "your OpenAI Key goes here",
"api_version": "2023-06-01-preview"
},
{
"model": "gemini-pro",
"api_key": "AIz-xxxxx",
"api_type": "google"
},
{
"model": "gemini-pro-vision",
"api_key": "AIz-xxxxx",
"api_type": "google"
}
]
After this, navigate to agentchat_gemini and run it. The code blocks using only Gemini should work properly. Some code blocks use OpenAI along with Gemini; you can replace config_list_gpt4 with config_list_gemini in those if you face any issues.
Feel free to comment if you still face any issues, and I'd be happy to help as soon as I can.
@hardik-goel can you please share the Colab notebook 😁
Hey, I am looking for a way to use AutoGen without an OpenAI key or enough VRAM to run decent open-source models locally. Gemini seems to be a very good option, as it has a free developer API, so I wanted to check whether it is possible to use autogen with gemini. I tried the normal pyautogen package on my local system by replacing the api_key, but it didn't work. However, I do see a branch named gemini in this repository with no commits in the past 2 weeks: https://github.com/microsoft/autogen/tree/gemini
So I wanted to know: how do I use autogen with gemini?
Steps to reproduce
config_list = [
    {
        'model': 'gemini-pro',
        'api_key': '',
    },
]