yepher opened this issue 11 months ago
This fork contains the changes that worked for me, along with some extra documentation and a requirements.txt.
I am not sure if this is the correct fix, but I changed the imports in app.py to:

```python
import os
import chainlit as cl
# import autogen
from flaml import autogen
```
and installed the packages with:

```shell
pip install "flaml[openai]"
pip install "flaml[autogen]"
```

Now it works when I run:

```shell
./.venv/bin/chainlit run app.py
```
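Rather than commenting out the `import autogen` line, a try/except fallback keeps the app working whichever package is installed. This is just a sketch of that idea, not something from the original article; it assumes the module layout described above (standalone `autogen` vs. the copy bundled in `flaml`):

```python
# Prefer the standalone autogen package if present; otherwise fall
# back to the version bundled inside flaml, as used above.
try:
    import autogen
except ImportError:
    try:
        from flaml import autogen
    except ImportError:
        autogen = None  # neither package is installed

print("autogen available:", autogen is not None)
```

With this in place, the rest of app.py can refer to `autogen` without caring which distribution provided it.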
That was based on a search for the error message, which turned up a Colab notebook with similar code for the part that was failing.
Not sure if it is needed, but I created a file called OAI_CONFIG_LIST and put the following in it:

```json
[
  {
    "model": "gpt-4",
    "api_key": "sk-..."
  },
  {
    "model": "gpt-3.5-turbo",
    "api_key": "sk-..."
  }
]
```
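For reference, autogen reads a file like this via its `config_list_from_json` helper, which can also filter the entries (e.g. with `filter_dict={"model": ["gpt-4"]}`). The filtering step can be sketched with the standard `json` module; the inline `raw` string stands in for reading the file and is purely illustrative:

```python
import json

# Example OAI_CONFIG_LIST contents, inlined for illustration;
# normally this JSON would be read from the file on disk.
raw = """
[
  {"model": "gpt-4", "api_key": "sk-..."},
  {"model": "gpt-3.5-turbo", "api_key": "sk-..."}
]
"""

config_list = json.loads(raw)

# Keep only the gpt-4 entry, mimicking filter_dict={"model": ["gpt-4"]}.
gpt4_only = [c for c in config_list if c["model"] == "gpt-4"]
print(gpt4_only)  # → [{'model': 'gpt-4', 'api_key': 'sk-...'}]
```

Filtering this way lets the same config file serve multiple agents that each want a specific model.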
I was trying to follow the code from "Integrate AutoGen into your Chatbot: Code Interpreter Clone". The article and repo do not seem to have much detail on how to run it, and I am not familiar with Chainlit.
The steps above are what I tried. I did see in the comments of the Medium article that others had an issue too, but I believe the author has since updated the code to fix the problem.