Nathan-Intergral opened 7 months ago
@WaelKarkoub @ekzhu Created as discussed
Let's start this after #2304 is merged
@Nathan-Intergral we are discussing a solution in this issue https://github.com/microsoft/autogen/issues/2583. Once the solution is merged we can close this issue.
Sorry, but TransformMessages is not working in group chat for me. Has it never worked there, or is there a version where it does?
@WebsheetPlugin we are working on a solution. cc @marklysze
@WaelKarkoub I got it. Thanks a lot.
I am not sure if it is related but I created this report: https://github.com/microsoft/autogen/issues/2638#issue-2288327906
For me, transforms work in group chat, but the transform results are not chained.
I'll have a look at this and start working on it... (adding the TransformMessages ability to the select speaker)
Just an update that we have started this and I'm working with @WaelKarkoub on some tweaks to the TransformMessages around filtering messages so that we can allow the user to target specific messages for transforms, which is key for the select speaker messages.
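Roughly, that targeting looks like this (a sketch only, using the same `filter_dict` / `exclude_filter` pattern as the fuller example further down; the role and name values here are placeholders):

```python
from autogen.agentchat.contrib.capabilities import transforms

# Sketch: target a transform at specific messages by role/name.
# filter_dict matches messages on their fields; exclude_filter=True inverts the
# match, so the matched messages are left untouched by the transform.
compression_args = dict(
    model_name="microsoft/llmlingua-2-xlm-roberta-large-meetingbank", use_llmlingua2=True, device_map="cpu"
)

targeted_compressor = transforms.TextMessageCompressor(
    min_tokens=1000,
    text_compressor=transforms.LLMLingua(compression_args, structured_compression=True),
    filter_dict={"role": ["system"], "name": ["checking_agent"]},
    exclude_filter=True,  # compress everything except the matched messages
)
```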
Hey @Nathan-Intergral, just a note that I've created a branch under AutoGen which will be for the addition of the speaker selection transforms functionality: https://github.com/microsoft/autogen/tree/selectspeakertransforms
At this stage it's functional, in that you can add transforms to the group chat by setting the select_speaker_auto_message_transforms property of a GroupChat to a TransformMessages object.
E.g.
```python
debate_select_speaker_compression_args = dict(
    model_name="microsoft/llmlingua-2-xlm-roberta-large-meetingbank", use_llmlingua2=True, device_map="cpu"
)

debate_select_speaker_transforms = transform_messages.TransformMessages(
    transforms=[
        transforms.MessageHistoryLimiter(max_messages=10),
        transforms.MessageTokenLimiter(max_tokens=3000, max_tokens_per_message=500, min_tokens=300),
        transforms.TextMessageCompressor(
            min_tokens=1000,
            text_compressor=transforms.LLMLingua(
                debate_select_speaker_compression_args, structured_compression=True
            ),
            cache=InMemoryCache(seed=43),
            filter_dict={"role": ["system"], "name": ["Debate_Moderator_Agent", "checking_agent"]},
            exclude_filter=True,
        ),  # Allows use of <llmlingua, compress=False></llmlingua> to stop compressing certain content
    ]
)

group_chat = autogen.GroupChat(
    agents=self.agent_list,
    messages=[],
    max_round=8,
    select_speaker_message_template="You are managing a debate and your only job is to select the next speaker, each speaker has a name. Follow the debate and decide on who should speak next. The 'Debate_Moderator_Agent' is the first speaker and they will kick off the debate with a topic to debate. Then each of the four debaters will speak and speak only once each. You should start by selecting the 'Affirmative_Constructive_Debater' to provide their opening arguments in the debate.",
    select_speaker_prompt_template="Read the above conversation and your job role, which is managing the debate and choosing the next speaker. The valid speakers can be selected from this list {agentlist}. During the debate the order of debaters are: 1st is the 'Affirmative_Constructive_Debater', 2nd is the 'Negative_Constructive_Debater', 3rd is the 'Affirmative_Rebuttal_Debater', and 4th is the 'Negative_Rebuttal_Debater'. Then 5th will be the 'Debate_Judge' and 6th is the 'Debate_Moderator_Agent'.",
    max_retries_for_selecting_speaker=1,
    role_for_select_speaker_messages="user",  # Ensure select speaker messages are role='user'
    select_speaker_auto_verbose=True,
    select_speaker_auto_message_transforms=debate_select_speaker_transforms,
)
```
I'll create a PR based on this branch, but please note that some of the filtering functionality (shown in the sample code above) is dependent on another PR being merged, #2661.
Okie dokie, #2719 has been created!
@marklysze Great thank you for that!
Using `select_speaker_auto_message_transforms`, I am getting this error:

```
TypeError: GroupChat.__init__() got an unexpected keyword argument 'select_speaker_auto_message_transforms'
```

Any suggestions?
@gvijqb your installed package and version?
Output for `pip list | grep autogen`:

```
autogen     0.3.2
pyautogen   0.3.2
```
Uninstall both and install autogen-agentchat. This is fixed in the latest version of our package. See the README for further info.
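In commands, a sketch of that (the version pin is an assumption; the 0.2 line is what the GroupChat examples in this thread target):

```shell
pip uninstall -y autogen pyautogen
pip install -U "autogen-agentchat~=0.2"
```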
Thanks. I am not sure if I am doing anything wrong here, but it is still not working for me.

Error:

```
self.groupchat = autogen.GroupChat(
                 ^^^^^^^^^^^^^^^^^^
TypeError: GroupChat.__init__() got an unexpected keyword argument 'select_speaker_auto_message_transforms'
```

`pip list | grep autogen` output:

```
autogen-agentchat   0.2.38
```
I checked again, @gvijqb: the argument is `select_speaker_transform_messages`.
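A minimal sketch of wiring that keyword up (reusing the debate_select_speaker_transforms object from the example earlier in the thread; everything else is placeholder):

```python
group_chat = autogen.GroupChat(
    agents=self.agent_list,  # placeholder: your agents
    messages=[],
    max_round=8,
    # On the released autogen-agentchat 0.2.x package the keyword is
    # select_speaker_transform_messages, not select_speaker_auto_message_transforms
    # as used on the development branch shown above.
    select_speaker_transform_messages=debate_select_speaker_transforms,
)
```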
Thanks @ekzhu. This one's correct and it worked. But now I randomly get this error:

```
An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: toolu_01FSvDUeBkQLZF3rsrBzJbTR
```

That suggests the tool call sequence is getting disrupted. How can I handle this scenario? I have tried implementing a custom handler to preserve the tool call flow, but that hasn't helped so far; it still results in the above error at times.
@WaelKarkoub can message transforms automatically handle tool call pairs?
@ekzhu @WaelKarkoub is there any update on this?
@gvijqb Could you debug into this? I suspect it is something to do with the TransformMessages handling of tool call pairs.
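For reference, one way a custom transform could repair the history after truncation so that every surviving 'tool_calls' entry still has its tool response (and vice versa). This is a hedged sketch, not a built-in AutoGen behaviour; the message dicts follow the OpenAI chat format and the helper name is made up:

```python
from typing import Dict, List


def repair_tool_call_pairs(messages: List[Dict]) -> List[Dict]:
    """Drop orphaned tool calls/responses after truncation (illustrative helper, not an AutoGen API)."""
    # Ids of tool calls that survived truncation.
    surviving_call_ids = {tc["id"] for m in messages for tc in m.get("tool_calls") or []}

    # Drop tool responses whose originating assistant tool_call was truncated away.
    kept = [
        m for m in messages
        if not (m.get("role") == "tool" and m.get("tool_call_id") not in surviving_call_ids)
    ]

    # Drop assistant tool_calls that no longer have a tool response.
    answered_ids = {m.get("tool_call_id") for m in kept if m.get("role") == "tool"}
    repaired: List[Dict] = []
    for m in kept:
        if m.get("tool_calls"):
            remaining = [tc for tc in m["tool_calls"] if tc["id"] in answered_ids]
            if not remaining and not m.get("content"):
                continue  # nothing left of this assistant message
            m = {**m, "tool_calls": remaining} if remaining else {k: v for k, v in m.items() if k != "tool_calls"}
        repaired.append(m)
    return repaired
```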
Describe the issue

When `GroupChat` `speaker_selection_method` is set to `'auto'`, the speaker selection prompt used to determine the next agent contains all the messages in the conversation history. Attempting to add `TransformMessages` to the `GroupChatManager` should allow you to limit the length of each speaker selection prompt, but it does not.

This is because the `select_speaker` function in `groupchat.py` uses `conversable_agent.py`'s `generate_oai_reply`, which, unlike `generate_reply`, does not call `process_all_messages_before_reply`, so the hook for transform messages is never run.

Steps to reproduce
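A minimal sketch of a reproduction, assuming pyautogen/autogen-agentchat 0.2.x-style imports (agent definitions, model config, and the kick-off message are placeholders):

```python
import autogen
from autogen.agentchat.contrib.capabilities import transform_messages, transforms

llm_config = {"config_list": [{"model": "gpt-4o", "api_key": "sk-..."}]}  # placeholder

agents = [autogen.AssistantAgent(name=f"agent_{i}", llm_config=llm_config) for i in range(3)]

group_chat = autogen.GroupChat(
    agents=agents, messages=[], max_round=20, speaker_selection_method="auto"
)
manager = autogen.GroupChatManager(groupchat=group_chat, llm_config=llm_config)

# Attach a history limiter to the manager; one might expect this to also shorten
# the speaker-selection prompt, but per this issue that prompt still receives the
# full conversation history, because select_speaker() goes through generate_oai_reply
# and the process_all_messages_before_reply hook never runs.
transform_messages.TransformMessages(
    transforms=[transforms.MessageHistoryLimiter(max_messages=5)]
).add_to_agent(manager)

agents[0].initiate_chat(manager, message="Start a long multi-agent discussion...")
```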
Screenshots and logs
No response
Additional Information
No response