Open Soung-Low opened 1 month ago
Hey @Soung-Low, if you're able to run local models with Ollama, I'd recommend trying the code in PR #3056, which implements its own Ollama client class (`api_type = 'ollama'`). With that, you should be able to have a conversation between agents on two different providers.
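As a rough illustration of that suggestion, a config entry selecting the Ollama client might look like the sketch below. This is a guess at the shape, not taken from PR #3056 itself; the model name and host field are placeholders.

```python
# Hypothetical config entry for the Ollama client class suggested above.
# Only api_type = 'ollama' comes from the comment; the other fields
# (model name, client_host) are assumed placeholders.
ollama_config = {
    "model": "llama3",                       # a model already pulled into Ollama
    "api_type": "ollama",                    # selects the custom Ollama client
    "client_host": "http://localhost:11434", # default Ollama server address (assumed field name)
}
```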
Describe the issue
I am having trouble initiating a chat between two AssistantAgents, one backed by a local endpoint (using FastChat, following this page) and the other by GPT-4 hosted behind an Azure API. My code works perfectly fine for a conversation between two local-model agents, but when I try to get the local agent to talk to the API-hosted agent, I run into the following error. Could it be due to the difference in `api_type`?
Steps to reproduce
Code for initialising the agents:
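For reference, a minimal sketch of the setup described above, with a separate config list per agent so the FastChat and Azure providers never mix. All endpoint URLs, keys, and model names here are placeholders (assumptions), not values from the original report.

```python
# Config for the local model served via FastChat's OpenAI-compatible server.
# Model name and port are placeholders.
local_config = {
    "model": "vicuna-7b-v1.5",              # whatever model FastChat is serving
    "base_url": "http://localhost:8000/v1",  # FastChat's OpenAI-compatible endpoint
    "api_key": "NULL",                       # FastChat ignores the key, but the field is required
}

# Config for GPT-4 hosted on Azure OpenAI. Resource name, key, and
# api_version are placeholders.
azure_config = {
    "model": "gpt-4",                        # the Azure *deployment* name
    "api_key": "<azure-api-key>",
    "base_url": "https://<resource>.openai.azure.com/",
    "api_type": "azure",
    "api_version": "2024-02-01",
}

# Each AssistantAgent then gets only its own provider's config list, e.g.:
# assistant_local = autogen.AssistantAgent("local", llm_config={"config_list": [local_config]})
# assistant_azure = autogen.AssistantAgent("azure", llm_config={"config_list": [azure_config]})
# assistant_local.initiate_chat(assistant_azure, message="Hello!")
```

Keeping the two configs in separate lists matters because a shared `config_list` lets either agent fall back to the other provider's entry, which can surface exactly this kind of `api_type` mismatch.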
Screenshots and logs
No response
Additional Information
No response