Open alphaveneno opened 1 month ago
Those examples are old and do not reflect how to properly set up CrewAI; they are in the process of being updated.
The issue is that the Task creation in the example is missing the expected_output parameter, which is required; that is why you see this error message.
See below for how to add the parameter:
syntax_review_task = Task(
    description=f"""
        Use the markdown_validation_tool to review
        the file(s) at this path: (unknown)

        Be sure to pass only the file path to the markdown_validation_tool.
        Use the following format to call the markdown_validation_tool:
        Do I need to use a tool? Yes
        Action: markdown_validation_tool
        Action Input: (unknown)

        Get the validation results from the tool
        and then summarize it into a list of changes
        the developer should make to the document.
        DO NOT recommend ways to update the document.
        DO NOT change any of the content of the document or
        add content to it. It is critical to your task to
        only respond with a list of changes.

        If you already know the answer or if you do not need
        to use a tool, return it as your Final Answer.""",
    expected_output="""A detailed list of changes the developer should make to the document based on the markdown validation results. The list should be actionable and specific, without recommending ways to update the document or changing its content.""",
    agent=general_agent,
)
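For illustration, the required-field check that produces the error can be mimicked in plain Python. This is only a stdlib sketch of the validation pydantic performs on Task's keyword arguments, not crewai's or pydantic's actual code; the field names are taken from the traceback in this thread.

```python
# Stdlib sketch of pydantic's "Field required" check for Task.
# Not the real implementation -- just the shape of the failure.
REQUIRED_TASK_FIELDS = ("description", "expected_output")

def validate_task_kwargs(data: dict) -> dict:
    """Raise if any required Task field is missing, as pydantic would."""
    missing = [f for f in REQUIRED_TASK_FIELDS if f not in data]
    if missing:
        raise ValueError(
            f"{len(missing)} validation error(s) for Task: "
            + ", ".join(f"{f}: Field required [type=missing]" for f in missing)
        )
    return data

# Passing only a description fails, matching the traceback;
# supplying expected_output as shown above makes validation pass.
```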
@theCyberTech, thank you for the prompt reply.
Should this line of code (which now gives an error):
updated_markdown = syntax_review_task.execute()
now be changed to this:
updated_markdown = syntax_review_task.execute_sync()
... or something else?
I have corrected and refactored the code, and added a feature, for main.py in the 'markdown-validator' example. A pull request against the entire crewAI-examples repository seems like overkill for a single small file from one small example, so I will post my code in the Issues section of the crewAI-examples repository; it can be reviewed and critiqued there.
I suspect this is a pydantic bug rather than a primary crewai issue; however, since crewai is tightly bound to pydantic, it is worth discussing here.
My bug report to Pydantic comes from code in a type-checking tutorial, not from any AI code: https://github.com/pydantic/pydantic/issues/9928
The error with crewai (below) and the error in my bug report are very similar.
This traceback is from the 'markdown-validation' crewai tutorial. The area of interest is at the bottom of the traceback:
Traceback (most recent call last):
  File "/home/ra/Documents/webCourses/TDD/crew_ai/crew_ai4/crew-ai-local-llm-main/crewai-simple-example/main.py", line 92, in <module>
    processed_document = process_markdown_document(filename)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ra/Documents/webCourses/TDD/crew_ai/crew_ai4/crew-ai-local-llm-main/crewai-simple-example/main.py", line 58, in process_markdown_document
    syntax_review_task = Task(description=f"""
                         ^^^^^^^^^^^^^^^^^^^^^
  File "/home/ra/.cache/pypoetry/virtualenvs/markdown-validation-crew-Jb9dOYKQ-py3.11/lib/python3.11/site-packages/crewai/task.py", line 117, in __init__
    super().__init__(**config, **data)
  File "/home/ra/.cache/pypoetry/virtualenvs/markdown-validation-crew-Jb9dOYKQ-py3.11/lib/python3.11/site-packages/pydantic/main.py", line 196, in __init__
    self.__pydantic_validator__.validate_python(data, self_instance=self)
pydantic_core._pydantic_core.ValidationError: 1 validation error for Task
expected_output
  Field required [type=missing, input_value={'description': '\n\t\t\t... and actionable tasks.)}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.8/v/missing
The type for 'expected_output' is defined by crewai here, in the poetry virtual environment:
/home/ra/.cache/pypoetry/virtualenvs/markdown-validation-crew-Jb9dOYKQ-py3.11/lib/python3.11/site-packages/crewai/task.py
Pydantic seems to have developed an issue with two Field attributes, 'alias' and 'validation_alias'; 'serialization_alias' is okay.
My particulars (pyproject.toml with dependencies):
My system: Debian 12 Linux "bookworm", VS Code
Package manager: poetry 1.8.3
Is 'expected_output' used as an alias, and is anyone else experiencing this?