bonadio / autogenwebdemo

Simple implementation of Autogen with a FastAPI backend and React frontend
MIT License
104 stars · 33 forks

[BUG] Last response is only being displayed #3

Closed rosx27 closed 8 months ago

rosx27 commented 8 months ago

I changed the code where it requires an OpenAI key. This is the code I'm running now:

autogen_group_chat.py and autogen_chat.py

import json
def load_config():
    try:
        with open("OAI_CONFIG_LIST", "r") as data:
            print("Config found.")
            return json.load(data)
    except FileNotFoundError:
        return []

data = load_config()
print("Config loaded.")

# Build LLM config dicts from the first entry in OAI_CONFIG_LIST.
# Note: data[0] will raise IndexError if the config file was missing.
llm_config_assistant = {
    "name": data[0]["name"],
    "model": data[0]["model"],
    "api_key": data[0]["api_key"],
    "api_base": data[0]["api_base"],
    "temperature": data[0]["temperature"],
}
llm_config_proxy = {
    "model": data[0]["model"],
    "api_key": data[0]["api_key"],
    "api_base": data[0]["api_base"],
    "temperature": data[0]["temperature"],
}
print("Config saved.")
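For reference, the loader above expects OAI_CONFIG_LIST to be a JSON array whose first entry carries exactly those keys. A hypothetical example with placeholder values (not from the repo):

```json
[
  {
    "name": "assistant",
    "model": "gpt-3.5-turbo",
    "api_key": "YOUR_API_KEY",
    "api_base": "https://api.openai.com/v1",
    "temperature": 0.7
  }
]
```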

The app runs smoothly while main.py imports from autogen_chat.py (from autogen_chat import AutogenChat). However, if I change it to from autogen_group_chat import AutogenChat so it uses autogen_group_chat.py instead, the responses during the agent conversation are not displayed on the webpage; only the last response is displayed.

this is the console log:

Config found.
Config loaded.
Config saved.
INFO:     Started server process [10704]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://localhost:8080 (Press CTRL+C to quit)
INFO:     ('::1', 63990) - "WebSocket /ws/de512857-6c8d-4a16-9237-2a38aede895c" [accepted]
Manager connected.
INFO:     connection open
Received text.
user_proxy (to chat_manager):

give me some creative ideas

--------------------------------------------------------------------------------
creator (to chat_manager):

Sure, here's one idea for a new project: why not create a digital scrapbook? With all of our memories stored in digital format these days, it would be great to compile them into a virtual album where you can share your favorite moments with friends and family.

--------------------------------------------------------------------------------
critic (to chat_manager):

Here's an idea for a new project: why not create a digital scrapbook? With all of our memories stored in digital format these days, it would be great to compile them into a virtual album where you can share your favorite moments with friends and family. TERMINATE

--------------------------------------------------------------------------------

screenshot: (image attached)

bonadio commented 8 months ago

Only messages directed to user_proxy will show. If you want to see messages from the other agents, you can implement it following the same pattern as userproxyweb: create a queue and a websocket for each agent.
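The queue-per-agent pattern bonadio describes could be sketched roughly as below. This is a minimal sketch, not code from the repo: `forward_to_queue` and `agent_message_queue` are hypothetical names, and the commented-out registration call assumes pyautogen's `ConversableAgent.register_reply` API.

```python
import queue

# Hypothetical per-agent queue; a websocket sender task would drain this
# and push each item to the browser.
agent_message_queue = queue.Queue()

def forward_to_queue(recipient, messages=None, sender=None, config=None):
    """Custom reply hook that copies the latest message onto a queue.

    Follows the (final, reply) contract of autogen reply functions:
    returning (False, None) lets the agent's normal reply chain continue,
    so this hook only observes the conversation, it does not change it.
    """
    if messages:
        agent_message_queue.put({
            "sender": sender.name if sender else "unknown",
            "content": messages[-1].get("content", ""),
        })
    return False, None  # do not terminate; produce no reply of our own

# With pyautogen installed, you would register the hook on each agent, e.g.:
# creator.register_reply([autogen.Agent, None], forward_to_queue, position=0)
# critic.register_reply([autogen.Agent, None], forward_to_queue, position=0)
```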

rosx27 commented 8 months ago

@bonadio thanks for the reply, i've tried your suggestion the whole day yesterday but was unable to implement it. I'm just starting to learn python so I used chatgpt and claude to help me but the suggested code was not working.

bonadio commented 8 months ago

> @bonadio thanks for the reply, i've tried your suggestion the whole day yesterday but was unable to implement it. I'm just starting to learn python so I used chatgpt and claude to help me but the suggested code was not working.

Hi @rosx27, making the UI work in a multi-agent setup will not be easy. The way I would do it is to register a custom reply function on each agent; in that reply function you would send the data to a queue and then on to the websocket. Check the register_reply function of ConversableAgent. Good luck
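The reply hooks only put messages on a queue; something still has to pump that queue out to the browser. A minimal sketch of that consumer side, assuming the websocket object exposes an async send_text method as FastAPI's WebSocket does (`pump_queue_to_websocket` is a hypothetical name, not from the repo):

```python
import asyncio

async def pump_queue_to_websocket(q: asyncio.Queue, websocket) -> None:
    """Drain agent messages from a queue and forward them to the client.

    `websocket` is assumed to expose an async send_text(str) method, as a
    FastAPI/Starlette WebSocket does. A sentinel value of None signals that
    the conversation is over and stops the loop.
    """
    while True:
        message = await q.get()
        if message is None:  # sentinel: conversation finished
            break
        await websocket.send_text(message)
```

In the repo's layout, a task running this coroutine would be started alongside the chat for each connected client, with the reply hooks feeding the queue from the agent side.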

rosx27 commented 8 months ago

> @bonadio thanks for the reply, i've tried your suggestion the whole day yesterday but was unable to implement it. I'm just starting to learn python so I used chatgpt and claude to help me but the suggested code was not working.

> Hi @rosx27, make the UI work on a multi agent setup will not be easy. The way I would do is to register a custom reply function to each agent and in this reply function you would send the data to a queue and then to the websocket. Check the register_reply function of the ConversableAgent. Good luck

Ohh, I see what you mean. Checking and comparing your implementation of check_termination_and_human_reply made me think twice about the workflow I'm trying. I'll use prompting and just talk to one agent instead of multiple agents; if there is code execution, it will just run silently, followed by feedback from the agent. Thank you!