crewAIInc / crewAI

Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
https://crewai.com
MIT License

Using openAI chat.completion as a tool raises openai.APIError: The model produced invalid content. Consider modifying your prompt if you are seeing this error persistently #806

Open lapups opened 2 months ago

lapups commented 2 months ago

I am trying to use a small summarizer as a tool for my task. But at the "> Entering new CrewAgentExecutor chain..." step,

I am getting the error below 8 out of 10 times (which is also weird):

Traceback (most recent call last):
  File "/home/lapups/pycharmProjects/LAWrence-PoC/code/lawrence_poc/BSP/test.py", line 85, in <module>
    result = legal_crew.run()
  File "/home/lapups/pycharmProjects/LAWrence-PoC/code/lawrence_poc/BSP/Crews/LegalCrews.py", line 42, in run
    result = crew.kickoff()
  File "/home/lapups/pycharmProjects/LAWrence-PoC/code/lawrence_poc/.venv/lib/python3.10/site-packages/crewai/crew.py", line 264, in kickoff
    result = self._run_sequential_process()
  File "/home/lapups/pycharmProjects/LAWrence-PoC/code/lawrence_poc/.venv/lib/python3.10/site-packages/crewai/crew.py", line 305, in _run_sequential_process
    output = task.execute(context=task_output)
  File "/home/lapups/pycharmProjects/LAWrence-PoC/code/lawrence_poc/.venv/lib/python3.10/site-packages/crewai/task.py", line 183, in execute
    result = self._execute(
  File "/home/lapups/pycharmProjects/LAWrence-PoC/code/lawrence_poc/.venv/lib/python3.10/site-packages/crewai/task.py", line 192, in _execute
    result = agent.execute_task(
  File "/home/lapups/pycharmProjects/LAWrence-PoC/code/lawrence_poc/.venv/lib/python3.10/site-packages/crewai/agent.py", line 236, in execute_task
    result = self.agent_executor.invoke(
  File "/home/lapups/pycharmProjects/LAWrence-PoC/code/lawrence_poc/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 163, in invoke
    raise e
  File "/home/lapups/pycharmProjects/LAWrence-PoC/code/lawrence_poc/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 153, in invoke
    self._call(inputs, run_manager=run_manager)
  File "/home/lapups/pycharmProjects/LAWrence-PoC/code/lawrence_poc/.venv/lib/python3.10/site-packages/crewai/agents/executor.py", line 128, in _call
    next_step_output = self._take_next_step(
  File "/home/lapups/pycharmProjects/LAWrence-PoC/code/lawrence_poc/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1138, in _take_next_step
    [
  File "/home/lapups/pycharmProjects/LAWrence-PoC/code/lawrence_poc/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1138, in <listcomp>
    [
  File "/home/lapups/pycharmProjects/LAWrence-PoC/code/lawrence_poc/.venv/lib/python3.10/site-packages/crewai/agents/executor.py", line 192, in _iter_next_step
    output = self.agent.plan(  # type: ignore #  Incompatible types in assignment (expression has type "AgentAction | AgentFinish | list[AgentAction]", variable has type "AgentAction")
  File "/home/lapups/pycharmProjects/LAWrence-PoC/code/lawrence_poc/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 397, in plan
    for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
  File "/home/lapups/pycharmProjects/LAWrence-PoC/code/lawrence_poc/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2875, in stream
    yield from self.transform(iter([input]), config, **kwargs)
  File "/home/lapups/pycharmProjects/LAWrence-PoC/code/lawrence_poc/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2862, in transform
    yield from self._transform_stream_with_config(
  File "/home/lapups/pycharmProjects/LAWrence-PoC/code/lawrence_poc/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1881, in _transform_stream_with_config
    chunk: Output = context.run(next, iterator)  # type: ignore
  File "/home/lapups/pycharmProjects/LAWrence-PoC/code/lawrence_poc/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2826, in _transform
    for output in final_pipeline:
  File "/home/lapups/pycharmProjects/LAWrence-PoC/code/lawrence_poc/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1282, in transform
    for ichunk in input:
  File "/home/lapups/pycharmProjects/LAWrence-PoC/code/lawrence_poc/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4736, in transform
    yield from self.bound.transform(
  File "/home/lapups/pycharmProjects/LAWrence-PoC/code/lawrence_poc/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1300, in transform
    yield from self.stream(final, config, **kwargs)
  File "/home/lapups/pycharmProjects/LAWrence-PoC/code/lawrence_poc/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 249, in stream
    raise e
  File "/home/lapups/pycharmProjects/LAWrence-PoC/code/lawrence_poc/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 229, in stream
    for chunk in self._stream(messages, stop=stop, **kwargs):
  File "/home/lapups/pycharmProjects/LAWrence-PoC/code/lawrence_poc/.venv/lib/python3.10/site-packages/langchain_openai/chat_models/base.py", line 408, in _stream
    for chunk in self.client.create(messages=message_dicts, **params):
  File "/home/lapups/pycharmProjects/LAWrence-PoC/code/lawrence_poc/.venv/lib/python3.10/site-packages/openai/_streaming.py", line 46, in __iter__
    for item in self._iterator:
  File "/home/lapups/pycharmProjects/LAWrence-PoC/code/lawrence_poc/.venv/lib/python3.10/site-packages/openai/_streaming.py", line 72, in __stream__
    raise APIError(
openai.APIError: The model produced invalid content. Consider modifying your prompt if you are seeing this error persistently.

Update: using the Claude 3 models as a tool also raises the same error.
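
For reference, here is a stripped-down sketch of the kind of setup that triggers it for me (placeholder names and prompts, not my actual code), with the summarizer tool calling the OpenAI chat completions endpoint directly:

import os
from openai import OpenAI
from langchain.tools import tool
from langchain_openai import ChatOpenAI
from crewai import Agent, Task, Crew, Process

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

@tool("summarizer")
def summarize(text: str) -> str:
    """Summarize the given text in a few sentences."""
    # Plain chat.completions call used as the tool body
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Summarize the user's text concisely."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

analyst = Agent(
    role="Legal analyst",
    goal="Produce short summaries of legal documents",
    backstory="An assistant that condenses long documents.",
    tools=[summarize],
    llm=ChatOpenAI(model="gpt-4o"),
)

task = Task(
    description="Summarize the provided document.",
    expected_output="A short summary.",
    agent=analyst,
)

crew = Crew(agents=[analyst], tasks=[task], process=Process.sequential)
result = crew.kickoff()
print(result)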

greg80303 commented 2 months ago

I have also started seeing this error a lot. I've been running my crews for several months now and have never seen this until a couple of days ago.

There is an OpenAI forum post that talks about using tools correctly with the API. I wonder if there is some problem with either CrewAI or LangChain in how they are passing tool information to OpenAI.
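
For anyone who wants to inspect what actually gets sent, the tools payload to the raw OpenAI chat completions API looks roughly like this (a hand-written sketch; CrewAI/LangChain generate their schema from the tool definition, so this is not their exact output). The error seems to surface when the model's streamed tool-call content doesn't conform to a schema like this:

from openai import OpenAI

client = OpenAI()

# Hand-written example schema for a single tool.
tools = [
    {
        "type": "function",
        "function": {
            "name": "summarizer",
            "description": "Summarize the given text in a few sentences.",
            "parameters": {
                "type": "object",
                "properties": {
                    "text": {"type": "string", "description": "Text to summarize."}
                },
                "required": ["text"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize: ..."}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)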

MichaelBrowning commented 2 months ago

I will add that this error started appearing frequently during the last 24 hours, on probably 50% of complex runs.

smaldd14 commented 2 months ago

Seeing this same issue as well, using

import os
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",
    api_key=os.getenv("OPENAI_API_KEY"),
)

It seems like gpt-4o is more strict regarding function calling. Could it be how CrewAI is passing the function definitions to the API?

greg80303 commented 2 months ago

Possibly -- but I've been using gpt-4o for several weeks now, and the issue just started in the last 24-48 hours. It's possible that OpenAI changed something on their end as well.

smaldd14 commented 2 months ago

@greg80303 Yeah, it's finicky. I just started using CrewAI, so I'm new to the way things work, but from my tests it seems like I get the above error ~40-50% of the time.

graindorgeanthony commented 2 months ago

Same here!

chrisyang-menos commented 2 months ago

Same here. I have a feature using langchain react agent (with tools) + gpt-4o. From my LangSmith I started to see this error since ~Jun 23rd.
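
My setup is essentially a plain LangChain ReAct agent rather than a crew, roughly like this (simplified, with a placeholder tool), so the failure doesn't look CrewAI-specific:

from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain.tools import tool
from langchain_openai import ChatOpenAI

@tool
def lookup(query: str) -> str:
    """Placeholder tool standing in for the real data-fetching tools."""
    return f"stub result for {query}"

llm = ChatOpenAI(model="gpt-4o")
prompt = hub.pull("hwchase17/react")  # standard ReAct prompt from the LangChain hub

agent = create_react_agent(llm, [lookup], prompt)
executor = AgentExecutor(agent=agent, tools=[lookup], verbose=True)
print(executor.invoke({"input": "Look up the relevant record."}))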

Karl60 commented 2 months ago

I have a sequence of 3 seemingly simple agents and 3 seemingly simple tasks with one agent per task. The program seems to randomly fail on different agents in the sequence on different runs with:

openai.APIError: The model produced invalid content. Consider modifying your prompt if you are seeing this error persistently.

Update: I consistently get this error when using the agents with model 'gpt-4o'. However, after changing the agents' model to 'gpt-3.5-turbo' everything works well, and it progresses through the sequence much faster.
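
Concretely, the only change was the model name on the agents' LLM, along these lines (placeholder agent fields, not my actual definitions):

from langchain_openai import ChatOpenAI
from crewai import Agent

# Same agent definition; only the model name changes.
llm = ChatOpenAI(model="gpt-3.5-turbo")  # was model="gpt-4o"

researcher = Agent(
    role="Researcher",  # placeholder role/goal/backstory
    goal="Gather the facts needed for the task",
    backstory="First agent in the three-step sequence.",
    llm=llm,
)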

grpaiva commented 2 months ago

Same here.

chrisyang-menos commented 2 months ago

More observations here: this might be related to OpenAI (gpt-4o) failing to extract the correct parameters.

https://github.com/langchain-ai/langchain/issues/23407

krishnakumar18 commented 1 month ago

Guys, did anyone find out how to solve this issue? I'm unable to proceed without getting it resolved. Any help is much appreciated. Thanks in advance!

dbalint7 commented 1 month ago

@Karl60

Update: I consistently get this error when using the agents with model 'gpt-4o'. However, after changing the agents' model to 'gpt-3.5-turbo' everything works well, and it progresses through the sequence much faster.

Same for me. Switching from 'gpt-4o' to 'gpt-3.5-turbo' at least gives a temporary workaround, but I haven't been able to figure out specifically why 'gpt-4o' isn't working.

Anindyadeep commented 3 weeks ago

Hello, is there any update on this issue? I have also run into the same problem.

mboarettoACT commented 3 weeks ago

I am facing the same issue. Have you guys tried gpt-4 or gpt-4o-mini as well?

Anindyadeep commented 3 weeks ago

I am facing the same issue. Have you guys tried gpt-4 or gpt-4o-mini as well?

Yes.

carvalhorafael commented 2 weeks ago

Same issue here too.

carvalhorafael commented 2 weeks ago

One piece of information that may be useful.

I changed the model from gpt-4o to gpt-4o-mini and the error no longer occurs.
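
In case it helps, the switch is a one-line change either way (the environment-variable route assumes your agents rely on CrewAI's default OpenAI model rather than an explicit llm=):

import os
from langchain_openai import ChatOpenAI

# Option 1: if the agents take an explicit llm, change the model name there.
llm = ChatOpenAI(model="gpt-4o-mini", api_key=os.getenv("OPENAI_API_KEY"))

# Option 2 (assumes the agents fall back to CrewAI's default OpenAI model):
# set OPENAI_MODEL_NAME before constructing the agents.
os.environ["OPENAI_MODEL_NAME"] = "gpt-4o-mini"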