Closed: gsteinLTU closed this issue 2 months ago
Thanks @gsteinLTU. For clarity on what is needed: the response should just be a string if there are no tools, is that right? I think we should also make sure the response shape stays consistent when tools are returned.
I think that would be right. It turns out `initiate_chats` is handling it correctly in the newest version, so this might not be so necessary (it's just a minor annoyance).
Thanks for noting that `initiate_chat(s)` are performing as expected now. Do you use `generate_reply`? If so, we can continue looking into changing it there for consistency.
Unfortunately, we decided not to go with AutoGen for our project, so personally I'm not 100% sure what makes sense. The best solution might simply be to make the documentation more descriptive about the return types (it briefly explains when `None` is returned, but not when `str` or `dict` are). IMO it should be consistent between models in either direction, likely by making both models output a `str` when the dict contains only `content`, so it's consistent with OpenAI's models.
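The rule suggested above can be sketched as a small normalizer: collapse a dict reply into a plain str when `content` is its only payload, mirroring the OpenAI client. This is a hypothetical helper, not part of AutoGen, and the exact keys checked (`tool_calls`, `function_call`) are assumptions:

```python
def normalize_reply(reply):
    """Return reply["content"] when the dict carries only plain content.

    Sketch of the consistency rule discussed above: if a client returns a
    dict with no tool/function calls, collapse it to a str so downstream
    code that expects OpenAI-style string replies keeps working.
    """
    if (
        isinstance(reply, dict)
        and reply.get("content") is not None
        and not reply.get("tool_calls")
        and not reply.get("function_call")
    ):
        return reply["content"]
    return reply  # leave strs, None, and tool-bearing dicts untouched
```

With this rule, `{"content": "hi"}` collapses to `"hi"`, while a reply carrying `tool_calls` passes through unchanged.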
Thanks for the update @gsteinLTU; agreed that for consistency it should align with the string return in that case. For further changes to the client classes, I'll make a note to update the return type as well. Appreciated, and I'll close this issue for now (feel free to reopen if needed).
Describe the bug
Using Groq in a ConversableAgent is not quite working. The responses from Groq models are returned as dicts, while many parts of the framework expect strs only. By itself, `generate_reply` still works (assuming you write code to handle this case), but with something like `initiate_chats` you run into issues (after reinstalling, 0.2.34 does detect when the carryover is a dict), or even with `initiate_chat` singular, whose termination condition needs to handle the different structure. The expected behavior is that switching which LLM you're using should not require significant other changes.
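The termination-condition problem described above can be worked around by normalizing message content before checking it. A minimal sketch, assuming AutoGen's usual dict-shaped messages; `content_str` and the `TERMINATE` convention here are illustrative helpers written for this example, not the project's actual code:

```python
def content_str(content):
    # A Groq reply's content may arrive as a nested dict such as
    # {"content": "...", "role": "assistant"} instead of a plain str;
    # pull out the text in either case.
    if isinstance(content, dict):
        content = content.get("content")
    return content if isinstance(content, str) else ""

def is_termination_msg(message):
    # Termination check that tolerates both str and dict reply shapes.
    return content_str(message.get("content")).rstrip().endswith("TERMINATE")
```

A predicate like this could be passed as `is_termination_msg` when constructing the agent, so the chat terminates correctly regardless of the reply shape.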
According to msze on Discord:
Steps to reproduce
Example code to demonstrate issue:
This will result in:
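The original snippet and its output were not captured above. As a rough sketch of the setup being described (the `api_type`/`model` values and overall config shape are assumptions about AutoGen 0.2.x's Groq support, not the reporter's actual code):

```python
# Hypothetical config for a Groq-backed ConversableAgent in AutoGen 0.2.x.
config_list = [
    {
        "api_type": "groq",         # routes requests through the Groq client
        "model": "llama3-8b-8192",  # per the report, any Groq model reproduces this
        "api_key": "YOUR_GROQ_API_KEY",
    }
]

# The report's gist: with this config,
#   agent = ConversableAgent("agent", llm_config={"config_list": config_list})
#   reply = agent.generate_reply(messages=[{"role": "user", "content": "Hi"}])
# returns a dict such as {"content": "...", "role": "assistant", ...},
# where the OpenAI client would return a plain str.
```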
Model Used
Any Groq model
Expected Behavior
The Groq models should return outputs that are compatible with the rest of AutoGen: either the Groq models return strings now, or the other parts expecting strs are modified to handle a dict with a `content` field.

Screenshots and logs
No response
Additional Information
AutoGen Version: 0.2.34
Python Version: 3.12