Open sridhar21111976 opened 11 months ago
There is no such fine-grained printing mechanism yet. However, one workaround is to log all the history and retrieve the conversations that satisfy certain conditions by post-processing the logged history. See the following code examples for logging:
Enable logging: https://github.com/microsoft/autogen/blob/main/test/agentchat/test_assistant_agent.py#L122
Check the logged info: https://github.com/microsoft/autogen/blob/main/test/agentchat/test_assistant_agent.py#L150
Please let me know if this does not address your needs.
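To illustrate the post-processing idea, here is a minimal sketch that filters a logged conversation for final answers. The message shape (dicts with `role` and `content` keys, OpenAI-style) is an assumption; the actual logging calls are in the linked test file and are not reproduced here.

```python
# Hypothetical post-processing of a logged conversation history.
# Assumes OpenAI-style chat messages: dicts with "role" and "content" keys.
history = [
    {"role": "user", "content": "Plot the data."},
    {"role": "assistant", "content": "Here is the code..."},
    {"role": "assistant", "content": "Final answer: 42. TERMINATE"},
]

def final_answers(messages):
    """Return assistant messages that contain the TERMINATE marker."""
    return [
        m["content"]
        for m in messages
        if m["role"] == "assistant" and "TERMINATE" in m["content"]
    ]

print(final_answers(history))  # -> ['Final answer: 42. TERMINATE']
```

The same pattern works for any condition you care about (cost, a keyword, a specific agent), at the price of scanning the whole log.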
Hi Qingyun-wu
The idea was to avoid all the intermediate outputs, since logging and then parsing through the entire content becomes complex. What would be useful is a verbose toggle that can be turned off, so you only get the final output after the TERMINATE exchange.
Also, I am seeing that the agent often does not recognise an existing function and says the function does not exist, even though it identifies the right function needed. This is intermittent; probably a cache issue, but I'm not sure.
Also, is there an option to flush the cache? It would be nice to understand what level of information is cached.
https://microsoft.github.io/autogen/docs/reference/agentchat/conversable_agent#initiate_chat
Set `silent=True` to skip printing, then get the chat messages or the last message.
Clear the cache: https://microsoft.github.io/autogen/docs/reference/oai/completion#clear_cache
The cache is made per `ChatCompletion.create` request.
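A sketch of reading the conversation back after a silent run. On a `ConversableAgent`, `chat_messages` maps each peer agent to its message list (per the linked `conversable_agent` docs); here a plain dict stands in for it so the example runs without autogen.

```python
# Stand-in for agent.chat_messages after e.g.
#   user_proxy.initiate_chat(assistant, message="What is 6 * 7?", silent=True)
# Keys would be agent objects in the real library; strings are used here.
chat_messages = {
    "user_proxy": [
        {"role": "user", "content": "What is 6 * 7?"},
        {"role": "assistant", "content": "42\nTERMINATE"},
    ]
}

def last_message(messages_by_peer, peer):
    """Return the final message exchanged with the given peer."""
    return messages_by_peer[peer][-1]

print(last_message(chat_messages, "user_proxy")["content"])
```

Note that with the default termination setup, the last message may be the TERMINATE marker itself, which is what the discussion below is about.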
Thank you Sonichi, I will give that a try. Any sample code is appreciated. This is great stuff, team. I have been doing LLM-to-LLM talk to achieve this so far, and this makes it simple. A bit more stability on function recognition and respecting the description text is needed; it sometimes ignores the text in the definition.
Some of the answers are worth adding to the documentation website.
Hi Sonichi,
I tried the last-message option. Since the last interaction is a TERMINATE command that ends the conversation, the last message printed is TERMINATE. I have worked around this by prompting the agent to format the final answer as something like `{answer} TERMINATE`, then trimming the TERMINATE word from the final answer. Is there a more elegant way to do this?
Also, a question around memory: what is the default chat history / memory length, and is there any option to control or reset it? I can see the `chat_history` parameter (a Boolean); is this the only option?
Also, if I build an application using AutoGen, is it expected to remain and be supported long-term?
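The trimming workaround described above can be sketched as a small helper. This is pure string handling, independent of autogen:

```python
# Strip a trailing TERMINATE marker from an agent's final answer, as in the
# "{answer} TERMINATE" prompt workaround described above.
def strip_terminate(text: str, marker: str = "TERMINATE") -> str:
    text = text.strip()
    if text.endswith(marker):
        text = text[: -len(marker)].rstrip()
    return text

print(strip_terminate("The answer is 42. TERMINATE"))  # -> The answer is 42.
```

It only strips the marker at the end, so a TERMINATE appearing mid-answer (unlikely but possible) is left alone.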
The chat history keeps growing in memory until you call https://microsoft.github.io/autogen/docs/reference/agentchat/conversable_agent#clear_history
No one can promise forever, but AutoGen has been public for only a month and already has a big, vibrant community. It's not going to die anytime soon.
I still haven't found a way to not get any output from initiate_chat(), but it seems individual messages from the assistants can be returned with
```python
print(list(assistant._oai_messages.values())[0][-3]['content'])
```
For me it was -3 because -1 was 'TERMINATE' and -2 was blank, but it could vary depending on the output you get.
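Since the index varies as noted above, a more robust approach is to scan backwards for the last message whose content is non-empty and not just the TERMINATE marker. The message shape (dicts with a `content` key) matches what `_oai_messages` stores; the scanning logic itself is a suggestion, not an autogen API.

```python
# Find the last "meaningful" message instead of hardcoding index -3.
def last_meaningful(messages):
    """Return the content of the last non-empty, non-TERMINATE message."""
    for msg in reversed(messages):
        content = (msg.get("content") or "").strip()
        if content and content != "TERMINATE":
            return content
    return None

msgs = [
    {"content": "Here is the result: 42"},
    {"content": ""},
    {"content": "TERMINATE"},
]
print(last_meaningful(msgs))  # -> Here is the result: 42
```

With the real objects you would pass e.g. `list(assistant._oai_messages.values())[0]` as `messages`.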
I am trying to get the last message, and for context have called something like:

```python
user_proxy.initiate_chat(group_chat_manager, message="This is my message")
```

This works and I get the full conversation. I am trying to use the `last_message()` function, but when I do something like

```python
group_chat_manager.last_message()
```

it says `group_chat_manager` was not part of any recent conversations. I tried calling `last_message()` with other agents that were part of the group chat and got the same error. Can someone please provide an actual usage example of the function? Linking to the documentation is not clear enough.
`silent=True` only works for the first message; after that it prints the messages again.
That sounds like a bug. Help is appreciated!
cc @cheng-tan @Hk669 @giorgossideris @krishnashed @WaelKarkoub
Could you try the new "ChatResult" returned from the chat? https://microsoft.github.io/autogen/docs/tutorial/conversation-patterns#group-chat
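To show the access pattern on the `ChatResult` mentioned above: per the linked tutorial, `initiate_chat` returns an object with the chat history and a summary. The stand-in dataclass below only mimics those two fields so the example runs without autogen; the field names are taken from the tutorial and should be checked against the real docs.

```python
from dataclasses import dataclass, field

# Stand-in for autogen's ChatResult (the real object is returned by
# initiate_chat). Field names assumed from the linked tutorial.
@dataclass
class ChatResult:
    chat_history: list = field(default_factory=list)
    summary: str = ""

# With the real library this would be:
#   result = user_proxy.initiate_chat(group_chat_manager, message="...")
result = ChatResult(
    chat_history=[{"role": "assistant", "content": "42. TERMINATE"}],
    summary="42",
)

print(result.chat_history[-1]["content"])  # last message of the group chat
print(result.summary)                      # summary of the conversation
```

This avoids `last_message()` entirely, which sidesteps the "was not part of any recent conversations" error reported above.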
@Neeraj319 would you be able to open up a new issue? And if possible include a minimal example to reproduce it
Sure @sonichi, let me check it out!
I have opened an issue: #2402
Not sure if this helps, but you can completely turn off the output by creating a silent console, something like this:
```python
from typing import Any

# Import paths assumed from the snippet's references to console.IOConsole
# and base.IOStream.
from autogen.io import base, console


class SilentConsole(console.IOConsole):
    def print(self, *objects: Any, sep: str = " ", end: str = "\n", flush: bool = False) -> None:
        pass
```

and set it as the default:

```python
base.IOStream.set_global_default(SilentConsole())
base.IOStream.set_default(SilentConsole())
```
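As a simpler alternative, assuming the agent output ultimately goes to stdout (which is where the default console stream prints), you can suppress it with the standard library for the duration of a single call:

```python
import contextlib
import io

def run_silently(fn, *args, **kwargs):
    """Call fn while discarding anything it prints to stdout."""
    with contextlib.redirect_stdout(io.StringIO()):
        return fn(*args, **kwargs)

# Stand-in for a chatty call such as initiate_chat.
def chatty():
    print("intermediate chatter...")  # would normally reach the console
    return "final answer"

print(run_silently(chatty))  # -> final answer
```

Unlike the `SilentConsole` approach, this is scoped to one call rather than changing a global default, but it also silences any other stdout output inside that call.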
It would be a useful feature if we could set the verbosity level for each agent in a group chat. As you can imagine, some parts of the conversation are not relevant, useful, or interesting to the end user, so we might not want to show them. As the conversation gets longer, the amount of content shown in a UI also grows, so being able to hide messages would improve the end-user experience.
Hi Team,
Is there a way to hide the exchanges between the assistant agent and the user proxy agent and get only the required final output from the assistant agent, once the user proxy agent has terminated the conversation?
I am trying to use this to solve complex multi-step scenarios, but I am interested only in the final answer, unless human input is required for a given step.