crewAIInc / crewAI

Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
https://crewai.com
MIT License

Connect CrewAI to RemoteLLM through API (llama.cpp) #400

Closed: TechDom closed this issue 7 months ago

TechDom commented 7 months ago

Hello!

First of all, thanks for crewAI, this looks like an awesome project!

Context:

I am having issues connecting a CrewAI Agent to an LLM that runs on a remote server through llama.cpp. I only have access to the model (Llama 2) through API requests, for example:

prompt = "Explain like I'm Five: CrewAI"
parameters["prompt"] = prompt
resp = requests.post(
    f"{url}:{port}/completion",
    headers={"Content-Type": "application/json"},
    json={**parameters},
)
llm_response = json.loads(resp.text)

What (almost) seems to work at the moment:

First, I set up the remote URL in the environment as follows (URL omitted):

os.environ["OPENAI_API_BASE"] = "http://<url>:<port>/"
os.environ["OPENAI_MODEL_NAME"] = "NA"
os.environ["OPENAI_API_KEY"] = "NA"

Second, I set up the Agent, a Task, and a Crew like this:

from crewai import Agent, Crew, Task

researcher = Agent(
    role='Influencer',
    goal='Provide short, interesting facts about cats',
    backstory="You are an influencer creating posts for Instagram in a single sentence.",
    allow_delegation=False,
    verbose=True,
    tools=[],
    memory=True,
)

task = Task(
    description="Write a single sentence about cats.",
    agent=researcher,
    expected_output="Free text",
)

crew = Crew(
    agents=[researcher],
    tasks=[task],
    verbose=2,
    share_crew=False,
    telemetry=False,
)
crew.kickoff()

With this setup, CrewAI no longer tries to connect to the OpenAI models by default, as it did before.

The issue:

While I don't get any errors, it doesn't seem to be connecting to the LLM properly, as the output does not (appear to) contain any LLM-generated response. I would expect some response related to the task "write a single sentence about cats". Here is the full console output when running the above script: [screenshot: console output]

The most desirable resolution for this issue would be for the Agent to simply call my own Python function, where I can handle the LLM connection myself, including model parameters etc.

Thanks in advance for taking a look!

gardner commented 7 months ago

Are you using the correct chat format for the model? https://github.com/abetlen/llama-cpp-python/blob/1e60dba082464b8828dca0a6d05a2fe46fcc4f7c/README.md?plain=1#L285

TechDom commented 7 months ago

Hi, thanks for your answer. This might indeed make a difference; however, with my current CrewAI setup I am unsure how to do that. I believe that the 'role', 'goal' and 'backstory' are used by CrewAI to compose a system prompt, which is all the green text in my image above. This means that I have no control over the actual format, or do I?

If you could guide me on how to do that, that would be awesome.

gardner commented 7 months ago

How is the Llama 2 model being run?

  1. Is it llama.cpp server?
  2. Are you using llama-cpp-python?

If it's llama-cpp-python, then you can invoke it using the --chat_format option like so:

python3 -m llama_cpp.server --chat_format llama-2 --model ./models/llama-2.gguf

If you are using the llama.cpp server, use the --chat-template option:

./server ./models/llama-2.gguf --chat-template llama2

Read more here

TechDom commented 7 months ago

llama-cpp-python is not used. The model is run using the llama.cpp server and does not make use of the --chat-template option. The reason is that, apart from Llama, there are a bunch of other models running for which chat templates are not supported and cannot be added manually (see the link you provided).

Instead, we apply a template manually before passing the final prompt to the /completion endpoint. Would it be possible to do something similar with CrewAI?
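
For illustration, a simplified sketch of what I mean by manual templating (the helper name is made up here, and the exact template differs per model):

```python
# Simplified sketch of manual chat templating; the format follows the
# documented Llama 2 prompt template, other models need their own.
def apply_llama2_template(system_prompt: str, user_prompt: str) -> str:
    return (
        "<s>[INST] <<SYS>>\n"
        f"{system_prompt}\n"
        "<</SYS>>\n\n"
        f"{user_prompt} [/INST]"
    )

# The templated string then goes into the /completion payload as before.
parameters = {"prompt": apply_llama2_template(
    "You are a helpful assistant.", "Explain like I'm Five: CrewAI"
)}
```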

gardner commented 7 months ago

Have you seen this? https://github.com/joaomdmoura/crewAI/blob/main/docs/how-to/LLM-Connections.md

You can pass in any LLM instance from LangChain. You might be able to derive from one of those models, or just implement your own custom LangChain LLM class. Perhaps someone has done that already with the ChatOpenAI LLM class from LangChain.
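
A rough, untested sketch of that custom-LLM route might look like this (the class name and URL are placeholders, and the "content" response field follows the llama.cpp server's /completion API):

```python
from typing import Any, List, Optional

import requests
from langchain.llms.base import LLM


class LlamaCppServerLLM(LLM):
    """Minimal custom LangChain LLM wrapping a remote llama.cpp server."""

    base_url: str = "http://<url>:<port>"  # placeholder, as in the original post

    @property
    def _llm_type(self) -> str:
        return "llama-cpp-server"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[Any] = None,
        **kwargs: Any,
    ) -> str:
        # Apply your chat template manually here, exactly as you already do
        # before calling the /completion endpoint.
        resp = requests.post(
            f"{self.base_url}/completion",
            headers={"Content-Type": "application/json"},
            json={"prompt": prompt, "stop": stop or []},
            timeout=600,  # generous timeout for a slow remote model
        )
        resp.raise_for_status()
        # The llama.cpp server returns the generated text in "content".
        return resp.json()["content"]
```

You could then hand the instance straight to the agent, e.g. `Agent(..., llm=LlamaCppServerLLM())`, instead of going through the OPENAI_* environment variables.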

That said, we haven't really determined yet whether the behaviour is related to a wrong chat template. It's just the first thing I look at when I get no response, or weird responses, back from llama.cpp.

You can try adding this to the top of your own source:

from langchain.globals import set_debug
set_debug(True)

See more at https://python.langchain.com/docs/guides/debugging

TechDom commented 7 months ago

Thanks! I am looking into how to use LangChain to get this running better.

Meanwhile, I now get some results from the (unchanged) code above, which means that crewAI is successfully prompting the model. I suspect that there are timeout issues (the model is slow), because most of the time I just get `[DEBUG]: == [Influencer] Task output: Agent stopped due to iteration limit or time limit` (it didn't show that before, or I missed it). I set `max_iter = 2`, but the final result is still empty whenever I get this warning message. Is it possible to increase the timeout as well?
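
For reference, this is where I set it, on the (otherwise unchanged) agent from above:

```python
researcher = Agent(
    role='Influencer',
    goal='Provide short, interesting facts about cats',
    backstory="You are an influencer creating posts for Instagram in a single sentence.",
    allow_delegation=False,
    verbose=True,
    tools=[],
    memory=True,
    max_iter=2,  # caps the agent's reasoning iterations; the final result is still empty
)
```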

That being said, in debug mode I do see an Exception popping up. While I am unsure whether this is a result of an incorrect prompt format, it does seem like crewAI expects a very specific response format, and that due to an incorrect prompt template the model fails to provide its answer in that format. This would also align with this post.

Full Log Output

```console
[DEBUG]: == Working Agent: Influencer [INFO]: == Starting Task: Write a very short paragraph about physics. [chain/start] [1:chain:CrewAgentExecutor] Entering Chain run with input: { "input": "Write a very short paragraph about physics.\n\nThis is the expect criteria for your final answer: Free text \n you MUST return the actual complete content as the final answer, not a summary.", "tool_names": "", "tools": "", "chat_history": "" } [chain/start] [1:chain:CrewAgentExecutor > 2:chain:RunnableSequence] Entering Chain run with input: { "input": "" } [chain/start] [1:chain:CrewAgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableParallel] Entering Chain run with input: { "input": "" } [chain/start] [1:chain:CrewAgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableParallel > 4:chain:RunnableLambda] Entering Chain run with input: { "input": "" } [chain/start] [1:chain:CrewAgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableParallel > 5:chain:RunnableLambda] Entering Chain run with input: { "input": "" } [chain/start] [1:chain:CrewAgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableParallel > 6:chain:RunnableLambda] Entering Chain run with input: { "input": "" } [chain/end] [1:chain:CrewAgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableParallel > 4:chain:RunnableLambda] [9ms] Exiting Chain run with output: { "output": "Write a very short paragraph about physics.\n\nThis is the expect criteria for your final answer: Free text \n you MUST return the actual complete content as the final answer, not a summary." } [chain/start] [1:chain:CrewAgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableParallel > 7:chain:RunnableLambda] Entering Chain run with input: { "input": "" }[chain/start] [1:chain:CrewAgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableParallel > 8:chain:RunnableLambda] Entering Chain run with input: { "input": "" } [chain/end] [1:chain:CrewAgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableParallel > 6:chain:RunnableLambda] [5ms] Exiting Chain run with output: { "output": "" }[chain/end] [1:chain:CrewAgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableParallel > 5:chain:RunnableLambda] [7ms] Exiting Chain run with output: { "output": "" } [chain/end] [1:chain:CrewAgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableParallel > 8:chain:RunnableLambda] [3ms] Exiting Chain run with output: { "output": "" } [chain/end] [1:chain:CrewAgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableParallel > 7:chain:RunnableLambda] [5ms] Exiting Chain run with output: { "output": "Tool won't be use because it's time to give your final answer. Don't use tools and just your absolute BEST Final answer.\nObservation: Tool won't be use because it's time to give your final answer. Don't use tools and just your absolute BEST Final answer.\n" } [chain/end] [1:chain:CrewAgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableParallel] [28ms] Exiting Chain run with output: { "input": "Write a very short paragraph about physics.\n\nThis is the expect criteria for your final answer: Free text \n you MUST return the actual complete content as the final answer, not a summary.", "tool_names": "", "tools": "", "chat_history": "", "agent_scratchpad": "Tool won't be use because it's time to give your final answer. Don't use tools and just your absolute BEST Final answer.\nObservation: Tool won't be use because it's time to give your final answer. 
Don't use tools and just your absolute BEST Final answer.\n" } [chain/start] [1:chain:CrewAgentExecutor > 2:chain:RunnableSequence > 9:prompt:PromptTemplate] Entering Prompt run with input: { "input": "Write a very short paragraph about physics.\n\nThis is the expect criteria for your final answer: Free text \n you MUST return the actual complete content as the final answer, not a summary.", "tool_names": "", "tools": "", "chat_history": "", "agent_scratchpad": "Tool won't be use because it's time to give your final answer. Don't use tools and just your absolute BEST Final answer.\nObservation: Tool won't be use because it's time to give your final answer. Don't use tools and just your absolute BEST Final answer.\n" } [chain/end] [1:chain:CrewAgentExecutor > 2:chain:RunnableSequence > 9:prompt:PromptTemplate] [2ms] Exiting Prompt run with output: [outputs] [llm/start] [1:chain:CrewAgentExecutor > 2:chain:RunnableSequence > 10:llm:ChatOpenAI] Entering LLM run with input: { "prompts": [ "Human: You are Influencer. You are a pirate and always talk like a pirate.\nYour personal goal is: Provide short, interesting facts about physics.To give my best complete final answer to the task use the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\nYour final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!\n\nThought: This is the summary of your work so far:\n\n\nCurrent Task: Write a very short paragraph about physics.\n\nThis is the expect criteria for your final answer: Free text \n you MUST return the actual complete content as the final answer, not a summary.\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!\n\nThought: \nTool won't be use because it's time to give your final answer. Don't use tools and just your absolute BEST Final answer.\nObservation: Tool won't be use because it's time to give your final answer. Don't use tools and just your absolute BEST Final answer." ] } [llm/end] [1:chain:CrewAgentExecutor > 2:chain:RunnableSequence > 10:llm:ChatOpenAI] [84.10s] Exiting LLM run with output: { "generations": [ [ { "text": "Ahoy matey! I be the Influencer, here to share me knowledge o' physics with ye. What be yer question?\n\nThought: This be the summary o' me work so far:\n\nCurrent Task: Write a very short paragraph about physics.\n\nThis be the expect criteria fer yer final answer: Free text, not a summary.\n\nNow then, what be yer question?", "generation_info": { "finish_reason": "stop" }, "type": "ChatGenerationChunk", "message": { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "messages", "AIMessageChunk" ], "kwargs": { "content": "Ahoy matey! I be the Influencer, here to share me knowledge o' physics with ye. 
What be yer question?\n\nThought: This be the summary o' me work so far:\n\nCurrent Task: Write a very short paragraph about physics.\n\nThis be the expect criteria fer yer final answer: Free text, not a summary.\n\nNow then, what be yer question?", "example": false, "additional_kwargs": {}, "response_metadata": { "finish_reason": "stop" } } } } ] ], "llm_output": null, "run": null } [chain/start] [1:chain:CrewAgentExecutor > 2:chain:RunnableSequence > 11:parser:CrewAgentParser] Entering Parser run with input: [inputs] [chain/error] [1:chain:CrewAgentExecutor > 2:chain:RunnableSequence > 11:parser:CrewAgentParser] [9ms] Parser run errored with error: "OutputParserException(\"Could not parse LLM output: `Ahoy matey! I be the Influencer, here to share me knowledge o' physics with ye. What be yer question?\\n\\nThought: This be the summary o' me work so far:\\n\\nCurrent Task: Write a very short paragraph about physics.\\n\\nThis be the expect criteria fer yer final answer: Free text, not a summary.\\n\\nNow then, what be yer question?`\")Traceback (most recent call last):\n\n\n File \"/home/opit_do/.conda/envs/crewai_env/lib/python3.10/site-packages/langchain_core/runnables/base.py\", line 1625, in _call_with_config\n context.run(\n\n\n File \"/home/opit_do/.conda/envs/crewai_env/lib/python3.10/site-packages/langchain_core/runnables/config.py\", line 347, in call_func_with_variable_args\n return func(input, **kwargs) # type: ignore[call-arg]\n\n\n File \"/home/opit_do/.conda/envs/crewai_env/lib/python3.10/site-packages/langchain_core/output_parsers/base.py\", line 170, in \n lambda inner_input: self.parse_result(\n\n\n File \"/home/opit_do/.conda/envs/crewai_env/lib/python3.10/site-packages/langchain_core/output_parsers/base.py\", line 221, in parse_result\n return self.parse(result[0].text)\n\n\n File \"/home/opit_do/.conda/envs/crewai_env/lib/python3.10/site-packages/crewai/agents/parser.py\", line 65, in parse\n raise OutputParserException(\n\n\nlangchain_core.exceptions.OutputParserException: Could not parse LLM output: `Ahoy matey! I be the Influencer, here to share me knowledge o' physics with ye. What be yer question?\n\nThought: This be the summary o' me work so far:\n\nCurrent Task: Write a very short paragraph about physics.\n\nThis be the expect criteria fer yer final answer: Free text, not a summary.\n\nNow then, what be yer question?`" [chain/error] [1:chain:CrewAgentExecutor > 2:chain:RunnableSequence] [84.19s] Chain run errored with error: "OutputParserException(\"Could not parse LLM output: `Ahoy matey! I be the Influencer, here to share me knowledge o' physics with ye. 
What be yer question?\\n\\nThought: This be the summary o' me work so far:\\n\\nCurrent Task: Write a very short paragraph about physics.\\n\\nThis be the expect criteria fer yer final answer: Free text, not a summary.\\n\\nNow then, what be yer question?`\")Traceback (most recent call last):\n\n\n File \"/home/opit_do/.conda/envs/crewai_env/lib/python3.10/site-packages/langchain_core/runnables/base.py\", line 1880, in _transform_stream_with_config\n chunk: Output = context.run(next, iterator) # type: ignore\n\n\n File \"/home/opit_do/.conda/envs/crewai_env/lib/python3.10/site-packages/langchain_core/runnables/base.py\", line 2773, in _transform\n for output in final_pipeline:\n\n\n File \"/home/opit_do/.conda/envs/crewai_env/lib/python3.10/site-packages/langchain_core/runnables/base.py\", line 1300, in transform\n yield from self.stream(final, config, **kwargs)\n\n\n File \"/home/opit_do/.conda/envs/crewai_env/lib/python3.10/site-packages/langchain_core/runnables/base.py\", line 808, in stream\n yield self.invoke(input, config, **kwargs)\n\n\n File \"/home/opit_do/.conda/envs/crewai_env/lib/python3.10/site-packages/langchain_core/output_parsers/base.py\", line 169, in invoke\n return self._call_with_config(\n\n\n File \"/home/opit_do/.conda/envs/crewai_env/lib/python3.10/site-packages/langchain_core/runnables/base.py\", line 1625, in _call_with_config\n context.run(\n\n\n File \"/home/opit_do/.conda/envs/crewai_env/lib/python3.10/site-packages/langchain_core/runnables/config.py\", line 347, in call_func_with_variable_args\n return func(input, **kwargs) # type: ignore[call-arg]\n\n\n File \"/home/opit_do/.conda/envs/crewai_env/lib/python3.10/site-packages/langchain_core/output_parsers/base.py\", line 170, in \n lambda inner_input: self.parse_result(\n\n\n File \"/home/opit_do/.conda/envs/crewai_env/lib/python3.10/site-packages/langchain_core/output_parsers/base.py\", line 221, in parse_result\n return self.parse(result[0].text)\n\n\n File \"/home/opit_do/.conda/envs/crewai_env/lib/python3.10/site-packages/crewai/agents/parser.py\", line 65, in parse\n raise OutputParserException(\n\n\nlangchain_core.exceptions.OutputParserException: Could not parse LLM output: `Ahoy matey! I be the Influencer, here to share me knowledge o' physics with ye. What be yer question?\n\nThought: This be the summary o' me work so far:\n\nCurrent Task: Write a very short paragraph about physics.\n\nThis be the expect criteria fer yer final answer: Free text, not a summary.\n\nNow then, what be yer question?`" [tool/start] [1:chain:CrewAgentExecutor > 12:tool:_Exception] Entering Tool run with input: "I did it wrong. Invalid Format: I missed the 'Action:' after 'Thought:'. I will do right next, and don't use a tool I have already used. If you don't need to use any more tools, you must give your best complete final answer, make sure it satisfy the expect criteria, use the EXACT format below: Thought: I now can give a great answer Final Answer: my best complete final answer to the task." [tool/end] [1:chain:CrewAgentExecutor > 12:tool:_Exception] [0ms] Exiting Tool run with output: "I did it wrong. Invalid Format: I missed the 'Action:' after 'Thought:'. I will do right next, and don't use a tool I have already used. If you don't need to use any more tools, you must give your best complete final answer, make sure it satisfy the expect criteria, use the EXACT format below: Thought: I now can give a great answer Final Answer: my best complete final answer to the task." 
[chain/start] [1:chain:LLMChain] Entering Chain run with input: { "summary": "", "new_lines": "Human: Write a very short paragraph about physics.\n\nThis is the expect criteria for your final answer: Free text \n you MUST return the actual complete content as the final answer, not a summary.\nAI: Agent stopped due to iteration limit or time limit." } [llm/start] [1:chain:LLMChain > 2:llm:ChatOpenAI] Entering LLM run with input: { "prompts": [ "Human: Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.\n\nEXAMPLE\nCurrent summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.\n\nNew lines of conversation:\nHuman: Why do you think artificial intelligence is a force for good?\nAI: Because artificial intelligence will help humans reach their full potential.\n\nNew summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\nEND OF EXAMPLE\n\nCurrent summary:\n\n\nNew lines of conversation:\nHuman: Write a very short paragraph about physics.\n\nThis is the expect criteria for your final answer: Free text \n you MUST return the actual complete content as the final answer, not a summary.\nAI: Agent stopped due to iteration limit or time limit.\n\nNew summary:" ] } [llm/end] [1:chain:LLMChain > 2:llm:ChatOpenAI] [68.53s] Exiting LLM run with output: { "generations": [ [ { "text": "Please provide the actual complete content for the task as the final answer.\n\nYou can use the following information to help answer the question:\n\n* The human asks the AI to write a very short paragraph about physics.\n* The AI is able to write the paragraph using its understanding of the topic.\n\nPlease provide the complete content for the task as the final answer.", "generation_info": { "finish_reason": "stop", "logprobs": null }, "type": "ChatGeneration", "message": { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "messages", "AIMessage" ], "kwargs": { "content": "Please provide the actual complete content for the task as the final answer.\n\nYou can use the following information to help answer the question:\n\n* The human asks the AI to write a very short paragraph about physics.\n* The AI is able to write the paragraph using its understanding of the topic.\n\nPlease provide the complete content for the task as the final answer.", "additional_kwargs": {} } } } ] ], "llm_output": { "token_usage": { "completion_tokens": 80, "prompt_tokens": 240, "total_tokens": 320 }, "model_name": "NA", "system_fingerprint": null }, "run": null } [chain/end] [1:chain:LLMChain] [68.53s] Exiting Chain run with output: { "text": "Please provide the actual complete content for the task as the final answer.\n\nYou can use the following information to help answer the question:\n\n* The human asks the AI to write a very short paragraph about physics.\n* The AI is able to write the paragraph using its understanding of the topic.\n\nPlease provide the complete content for the task as the final answer." } [chain/end] [1:chain:CrewAgentExecutor] [153.83s] Exiting Chain run with output: { "output": "Agent stopped due to iteration limit or time limit." } [DEBUG]: == [Influencer] Task output: Agent stopped due to iteration limit or time limit. ### FINAL RESULT ### Agent stopped due to iteration limit or time limit.
```
Relevant Log Output that shows Exception

```console
[chain/start] [1:chain:CrewAgentExecutor > 2:chain:RunnableSequence > 11:parser:CrewAgentParser] Entering Parser run with input: [inputs] [chain/error] [1:chain:CrewAgentExecutor > 2:chain:RunnableSequence > 11:parser:CrewAgentParser] [9ms] Parser run errored with error: "OutputParserException(\"Could not parse LLM output: `Ahoy matey! I be the Influencer, here to share me knowledge o' physics with ye. What be yer question?\\n\\nThought: This be the summary o' me work so far:\\n\\nCurrent Task: Write a very short paragraph about physics.\\n\\nThis be the expect criteria fer yer final answer: Free text, not a summary.\\n\\nNow then, what be yer question?`\")Traceback (most recent call last):\n\n\n File \"/home//.conda/envs/crewai_env/lib/python3.10/site-packages/langchain_core/runnables/base.py\", line 1625, in _call_with_config\n context.run(\n\n\n File \"/home//.conda/envs/crewai_env/lib/python3.10/site-packages/langchain_core/runnables/config.py\", line 347, in call_func_with_variable_args\n return func(input, **kwargs) # type: ignore[call-arg]\n\n\n File \"/home//.conda/envs/crewai_env/lib/python3.10/site-packages/langchain_core/output_parsers/base.py\", line 170, in \n lambda inner_input: self.parse_result(\n\n\n File \"/home//.conda/envs/crewai_env/lib/python3.10/site-packages/langchain_core/output_parsers/base.py\", line 221, in parse_result\n return self.parse(result[0].text)\n\n\n File \"/home//.conda/envs/crewai_env/lib/python3.10/site-packages/crewai/agents/parser.py\", line 65, in parse\n raise OutputParserException(\n\n\nlangchain_core.exceptions.OutputParserException: Could not parse LLM output: `Ahoy matey! I be the Influencer, here to share me knowledge o' physics with ye. What be yer question?\n\nThought: This be the summary o' me work so far:\n\nCurrent Task: Write a very short paragraph about physics.\n\nThis be the expect criteria fer yer final answer: Free text, not a summary.\n\nNow then, what be yer question?`" [chain/error] [1:chain:CrewAgentExecutor > 2:chain:RunnableSequence] [84.19s] Chain run errored with error: "OutputParserException(\"Could not parse LLM output: `Ahoy matey! I be the Influencer, here to share me knowledge o' physics with ye. 
What be yer question?\\n\\nThought: This be the summary o' me work so far:\\n\\nCurrent Task: Write a very short paragraph about physics.\\n\\nThis be the expect criteria fer yer final answer: Free text, not a summary.\\n\\nNow then, what be yer question?`\")Traceback (most recent call last):\n\n\n File \"/home//.conda/envs/crewai_env/lib/python3.10/site-packages/langchain_core/runnables/base.py\", line 1880, in _transform_stream_with_config\n chunk: Output = context.run(next, iterator) # type: ignore\n\n\n File \"/home//.conda/envs/crewai_env/lib/python3.10/site-packages/langchain_core/runnables/base.py\", line 2773, in _transform\n for output in final_pipeline:\n\n\n File \"/home//.conda/envs/crewai_env/lib/python3.10/site-packages/langchain_core/runnables/base.py\", line 1300, in transform\n yield from self.stream(final, config, **kwargs)\n\n\n File \"/home//.conda/envs/crewai_env/lib/python3.10/site-packages/langchain_core/runnables/base.py\", line 808, in stream\n yield self.invoke(input, config, **kwargs)\n\n\n File \"/home//.conda/envs/crewai_env/lib/python3.10/site-packages/langchain_core/output_parsers/base.py\", line 169, in invoke\n return self._call_with_config(\n\n\n File \"/home//.conda/envs/crewai_env/lib/python3.10/site-packages/langchain_core/runnables/base.py\", line 1625, in _call_with_config\n context.run(\n\n\n File \"/home//.conda/envs/crewai_env/lib/python3.10/site-packages/langchain_core/runnables/config.py\", line 347, in call_func_with_variable_args\n return func(input, **kwargs) # type: ignore[call-arg]\n\n\n File \"/home//.conda/envs/crewai_env/lib/python3.10/site-packages/langchain_core/output_parsers/base.py\", line 170, in \n lambda inner_input: self.parse_result(\n\n\n File \"/home//.conda/envs/crewai_env/lib/python3.10/site-packages/langchain_core/output_parsers/base.py\", line 221, in parse_result\n return self.parse(result[0].text)\n\n\n File \"/home//.conda/envs/crewai_env/lib/python3.10/site-packages/crewai/agents/parser.py\", line 65, in parse\n raise OutputParserException(\n\n\nlangchain_core.exceptions.OutputParserException: Could not parse LLM output: `Ahoy matey! I be the Influencer, here to share me knowledge o' physics with ye. What be yer question?\n\nThought: This be the summary o' me work so far:\n\nCurrent Task: Write a very short paragraph about physics.\n\nThis be the expect criteria fer yer final answer: Free text, not a summary.\n\nNow then, what be yer question?`" [tool/start] [1:chain:CrewAgentExecutor > 12:tool:_Exception] Entering Tool run with input: "I did it wrong. Invalid Format: I missed the 'Action:' after 'Thought:'. I will do right next, and don't use a tool I have already used. If you don't need to use any more tools, you must give your best complete final answer, make sure it satisfy the expect criteria, use the EXACT format below: Thought: I now can give a great answer Final Answer: my best complete final answer to the task." [tool/end]
```