crewAIInc / crewAI

Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
https://crewai.com
MIT License

Support for using Azure OpenAI as the llm for Agents #202

Closed: ppk5 closed this issue 2 months ago

ppk5 commented 9 months ago

We have an instance of Azure OpenAI, but when I use it as the llm for Agents I see a generation assert error. However, calling `llm([message])` directly works. Is there any workaround to get Azure OpenAI working?

Attaching the screenshot of my crew.py


Here is my AzureOpenAI.py (see attached screenshot).

In Docs I could see that Azure hosted OpenAI endpoints are supported.

Any suggestions on this @joaomdmoura ?

ppk5 commented 9 months ago

I followed https://joaomdmoura.github.io/crewAI/how-to/LLM-Connections/#azure-open-ai and got an llm object that responds to a HumanMessage, but when I use it with agents I get this error:

/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 242, in stream
    assert generation is not None
AssertionError

The attached screenshot shows that the llm object responds to Human messages.

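For reference, the Azure wiring described on the linked docs page looks roughly like the sketch below. This is a configuration sketch, not a verified fix: the deployment name, API version, and environment variable names are placeholders for your own Azure setup, and the `langchain_openai` import path applies to newer langchain releases.

```python
import os

from langchain_openai import AzureChatOpenAI  # older versions: langchain.chat_models
from crewai import Agent

# Placeholder env vars and deployment name; substitute your own values.
azure_llm = AzureChatOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_KEY"],
    api_version="2023-07-01-preview",
    azure_deployment="gpt-4",
)

researcher = Agent(
    role="Researcher",
    goal="Answer questions",
    backstory="An example agent",
    llm=azure_llm,  # this is the object that later hits the streaming assert
)
```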
ppk5 commented 9 months ago

LangChain Core has a branch in chat_models.py that causes the code to fail when the AzureChatOpenAI object does not support streaming.

Reported the issue upstream https://github.com/langchain-ai/langchain/issues/16930
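The failure mode can be sketched in isolation (hypothetical simplified code, not the real langchain_core source): `stream()` builds its final generation by accumulating streamed chunks, so a backend that yields no chunks leaves the accumulator as `None` and trips the assert from the traceback above.

```python
def stream(chunks):
    """Simplified stand-in for chat_models.stream(): accumulate chunks."""
    generation = None
    for chunk in chunks:
        # Each streamed chunk is merged into the running generation.
        generation = chunk if generation is None else generation + chunk
        yield chunk
    # The line from the traceback: fails when zero chunks were produced,
    # e.g. when the model does not actually support streaming.
    assert generation is not None

# A streaming backend yields chunks, so the assert passes:
out = list(stream(["Hel", "lo"]))
# A non-streaming backend yields nothing, so consuming the generator
# raises AssertionError, matching the error reported in this issue.
```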

hcchengithub commented 7 months ago

I am also dealing with the AzureChatOpenAI issue for a company project. Let me see if I can help. Please simplify your code to the bare minimum and confirm that calling OpenAI directly works properly with the entire CrewAI crew.

scenaristeur commented 7 months ago

Hi, according to https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.RunnableAgent.html, RunnableAgent has a stream_runnable option. It is possible to disable streaming by modifying crewai/agent.py at line 280, changing

self.agent_executor = CrewAgentExecutor(
    agent=RunnableAgent(runnable=inner_agent), **executor_args
)

to

self.agent_executor = CrewAgentExecutor(
    agent=RunnableAgent(runnable=inner_agent, stream_runnable=False), **executor_args
)

This allows running crewAI with a custom LLM without the "assert generation is not None" error. Tested with https://github.com/scenaristeur/igora.

With that fix, I no longer get

assert generation is not None
AssertionError

but the agent does not output the result yet.
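The effect of `stream_runnable=False` can be illustrated with a toy model (hypothetical classes, not the real crewai/langchain ones): with the flag on, the executor consumes a stream and fails on a backend that yields no chunks; with it off, it falls back to a single blocking call.

```python
class NonStreamingLLM:
    """Toy backend that supports invoke() but cannot stream."""
    def invoke(self, prompt):
        return f"answer to: {prompt}"
    def stream(self, prompt):
        return iter(())  # yields no chunks, like the Azure setup in this issue

class ToyRunnableAgent:
    """Toy agent mimicking the stream_runnable switch."""
    def __init__(self, llm, stream_runnable=True):
        self.llm = llm
        self.stream_runnable = stream_runnable

    def plan(self, prompt):
        if self.stream_runnable:
            generation = None
            for chunk in self.llm.stream(prompt):
                generation = chunk if generation is None else generation + chunk
            assert generation is not None  # fails for NonStreamingLLM
            return generation
        # stream_runnable=False path: one blocking call, no streaming assumed.
        return self.llm.invoke(prompt)

agent = ToyRunnableAgent(NonStreamingLLM(), stream_runnable=False)
print(agent.plan("hello"))  # prints: answer to: hello
```

With `stream_runnable=True` (the default) the same toy raises AssertionError, which is the behavior this thread reports against the real libraries.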

RyanFaulknerXYZ commented 6 months ago

I have also come across this issue when trying to use an Azure LLM instance that cannot stream because it sits behind an application gateway; looking forward to a resolution. @ppk5 I assume from your message above that the fix will rely on an upstream langchain fix being in place first. Thank you for being proactive and raising the issue with them. If anyone has figured out a workaround in the meantime to get CrewAI working in this situation, please let us know :)

alexjst commented 5 months ago

Still hitting this issue as of the end of May 2024.

github-actions[bot] commented 2 months ago

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions[bot] commented 2 months ago

This issue was closed because it has been stalled for 5 days with no activity.