camel-ai / camel

🐫 CAMEL: Finding the Scaling Law of Agents. A multi-agent framework. https://www.camel-ai.org
Apache License 2.0

[BUG] AttributeError: 'str' object has no attribute 'supports_tool_calling' #977

Open AbdullahMushtaq78 opened 6 days ago

AbdullahMushtaq78 commented 6 days ago

What version of camel are you using?

0.2.1a

System information

3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] linux 0.2.1a

Problem description

I am following the hackathon_judges example and adapting it slightly for my own use: I am using Llama 3.1 8B served via Ollama as the coordinator agent and task agent in the Workforce, instead of the OpenAI models used by default.
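
For clarity, here is a minimal sketch of the configuration that triggers the error (distilled from the full script below; the model name and URL come from my local Ollama setup):

from camel.models import ModelFactory
from camel.types import ModelPlatformType
from camel.workforce import Workforce

# Ollama-backed model; model_type is a plain string, as described in the installation guide.
ollama_model = ModelFactory.create(
    model_platform=ModelPlatformType.OLLAMA,
    model_type="llama3",
    url="http://localhost:11434/v1",
    model_config_dict={"temperature": 0, "tools": None},
)

workforce = Workforce(
    description="Essay Competition",
    coordinator_agent_kwargs={"model": ollama_model},
    task_agent_kwargs={"model": ollama_model},
)
# workforce.process_task(task) then raises the AttributeError shown in the traceback below.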

On calling workforce.process_task(task), it throws this error from chat_agent.py:

Traceback (most recent call last):
  File "/media/adnan/New Volume/Abdullah/MultiAgentLLMs/essay_judge.py", line 185, in <module>
    main()
  File "/media/adnan/New Volume/Abdullah/MultiAgentLLMs/essay_judge.py", line 181, in main
    result = workforce.process_task(task)
  File "/media/adnan/New Volume/Abdullah/camel/camel/workforce/utils.py", line 63, in wrapper
    return func(self, *args, **kwargs)
  File "/media/adnan/New Volume/Abdullah/camel/camel/workforce/workforce.py", line 153, in process_task
    asyncio.run(self.start())
  File "/home/adnan/anaconda3/envs/camel/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/home/adnan/anaconda3/envs/camel/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/media/adnan/New Volume/Abdullah/camel/camel/workforce/workforce.py", line 475, in start
    await self._listen_to_channel()
  File "/media/adnan/New Volume/Abdullah/camel/camel/workforce/workforce.py", line 443, in _listen_to_channel
    await self._post_ready_tasks()
  File "/media/adnan/New Volume/Abdullah/camel/camel/workforce/workforce.py", line 408, in _post_ready_tasks
    assignee_id = self._find_assignee(task=ready_task)
  File "/media/adnan/New Volume/Abdullah/camel/camel/workforce/workforce.py", line 289, in _find_assignee
    response = self.coordinator_agent.step(
  File "/media/adnan/New Volume/Abdullah/camel/camel/agents/chat_agent.py", line 511, in step
    and self.model_type.supports_tool_calling
AttributeError: 'str' object has no attribute 'supports_tool_calling'

I am new to CAMEL AI and would appreciate any help with using open-source LLMs. Throughout the framework and documentation, API-based LLMs seem to be preferred, and I have not found any resources on working with open-source models. Or is it just me who has missed them? Please let me know as soon as possible if anyone can help me fix this issue.

Reproducible example code

The Python snippets:

from optimized_loader import Dataset
from persona import *

import textwrap

from camel.agents import ChatAgent
from camel.messages import BaseMessage
from camel.models import ModelFactory
from camel.tasks import Task
from camel.types import ModelPlatformType, ModelType
from camel.workforce import Workforce

def create_judge_template(persona, example_feedback, criteria):
    msg_content_template = textwrap.dedent(
        f"""\
            You are a judge in a competition of writing Essays.
            This is your persona that you MUST act with: {persona}
            Here is an example that you might give with your persona, you MUST try your best to align with this:
            {example_feedback}
            When evaluating essays, you should consider the following criteria:
            {criteria}
            You also need to give score based on the criteria, from 1-6. The score given should be like 3/6, 5/6, 1/6, etc.

            Full text: {{full_text}}

            Assigned part: {{assigned_discourse}}

            Other parts: {{other_discourses}}
            """
    )

    model = ModelFactory.create(
        model_platform=ModelPlatformType.OLLAMA,
        model_type="llama3",
        url="http://localhost:11434/v1",
        model_config_dict={"temperature": 0, "tools": None},
    )

    return msg_content_template, model
def extract_discourses(essay, judge_role):
    discourse_parts = {key: val for key, val in essay.discourses.items() if val}

    assigned_discourse = discourse_parts.get(judge_role, "Not available")
    other_discourses = {k: v for k, v in discourse_parts.items() if k != judge_role}

    return assigned_discourse, other_discourses

def update_judge(agent, msg_template, full_text, assigned_discourse, other_discourses):
    updated_msg = msg_template.format(
        full_text=full_text,
        assigned_discourse=assigned_discourse,
        other_discourses=other_discourses
    )
    sys_msg = BaseMessage.make_assistant_message(
        role_name="Essay Judge",
        content=updated_msg
    )

    agent.system_message = sys_msg
def generate_task_content(essay):
    essay_info = (
        f"Evaluate the essay on topic '{essay.prompt_name}' written by a student in {essay.grade_level} Grade, "
        f"with ELL (English Language Learning) Status: {essay.ell_status}. "
    )

    task_content = (
        essay_info +
        "Each judge should evaluate the essay based on the specific criteria assigned to them. "
        "First, review the entire essay to understand its context. Then, each judge should focus on their designated section "
        "and give a score accordingly. Finally, list the opinions from each judge, making sure to preserve the unique identity "
        "of each judge, along with their score and name. "
        "Conclude with a final summary of the overall opinions and scores. "
        "Output should be structured as follows:\n\n"
        "<ScoresPerJudge>\n"
        "Position Paula (Judge): X/6\n"
        "Claim Clara (Judge): X/6\n"
        "Counterclaim Carl (Judge): X/6\n"
        "Rebuttal Robert (Judge): X/6\n"
        "Evidence Eva (Judge): X/6\n"
        "Summary Susan (Judge): X/6\n"
        "Unannotated Olivia (Judge): X/6\n"
        "</ScoresPerJudge>\n\n"
        "<OpinionsPerJudge>\n"
        "Position Paula (Judge): 'Opinion about position here'\n"
        "Claim Clara (Judge): 'Opinion about claim here'\n"
        "Counterclaim Carl (Judge): 'Opinion about counterclaim here'\n"
        "Rebuttal Robert (Judge): 'Opinion about rebuttal here'\n"
        "Evidence Eva (Judge): 'Opinion about evidence here'\n"
        "Summary Susan (Judge): 'Opinion about conclusion here'\n"
        "Unannotated Olivia (Judge): 'Opinion about unannotated sections here'\n"
        "</OpinionsPerJudge>\n\n"
        "<FinalSummary>\n"
        "The summary about the essay and performance of the student.\n"
        "</FinalSummary>"
    )

    return task_content

def main():
    Essays = Dataset()

    position_template, position_model = create_judge_template(position_persona, position_example_feedback, position_criteria)
    claim_template, claim_model = create_judge_template(claim_persona, claim_example_feedback, claim_criteria)
    counterclaim_template, counterclaim_model = create_judge_template(counterclaim_persona, counterclaim_example_feedback, counterclaim_criteria)
    rebuttal_template, rebuttal_model = create_judge_template(rebuttal_persona, rebuttal_example_feedback, rebuttal_criteria)
    evidence_template, evidence_model = create_judge_template(evidence_persona, evidence_example_feedback, evidence_criteria)
    summary_template, summary_model = create_judge_template(concluding_summary_persona, concluding_summary_example_feedback, concluding_summary_criteria)
    unannotated_template, unannotated_model = create_judge_template(unannotated_persona, unannotated_example_feedback, unannotated_criteria)

    position_agent = ChatAgent(BaseMessage.make_assistant_message("", ""), model=position_model)
    claim_agent = ChatAgent(BaseMessage.make_assistant_message("", ""), model=claim_model)
    counterclaim_agent = ChatAgent(BaseMessage.make_assistant_message("", ""), model=counterclaim_model)
    rebuttal_agent = ChatAgent(BaseMessage.make_assistant_message("", ""), model=rebuttal_model)
    evidence_agent = ChatAgent(BaseMessage.make_assistant_message("", ""), model=evidence_model)
    summary_agent = ChatAgent(BaseMessage.make_assistant_message("", ""), model=summary_model)
    unannotated_agent = ChatAgent(BaseMessage.make_assistant_message("", ""), model=unannotated_model)
    # model = {
    #     "model_platform": ModelPlatformType.OLLAMA,
    #     "model_type": "llama3",
    #     "url": "http://localhost:11434/v1",
    #     "model_config_dict": {
    #         "temperature": 0.7,
    #         "tools": [],
    #     },
    # }
    coordinator_agent_kwargs = {
        "model": ModelFactory.create(
            model_platform=ModelPlatformType.OLLAMA,
            model_type="llama3",
            url="http://localhost:11434/v1",
            model_config_dict={"temperature": 0, "tools": None}
        ),  
    }

    task_agent_kwargs = {
        "model": ModelFactory.create(
            model_platform=ModelPlatformType.OLLAMA,
            model_type="llama3",
            url="http://localhost:11434/v1",
            model_config_dict={"temperature": 0, "tools": None}
        ),  
    }

    workforce = Workforce(
        description="Essay Competition",
        coordinator_agent_kwargs=coordinator_agent_kwargs,
        task_agent_kwargs=task_agent_kwargs
        )

    workforce.add_single_agent_worker('Position Paula (Judge)', worker=position_agent)
    workforce.add_single_agent_worker('Claim Clara (Judge)', worker=claim_agent)
    workforce.add_single_agent_worker('Counterclaim Carl (Judge)', worker=counterclaim_agent)
    workforce.add_single_agent_worker('Rebuttal Robert (Judge)', worker=rebuttal_agent)
    workforce.add_single_agent_worker('Evidence Eva (Judge)', worker=evidence_agent)
    workforce.add_single_agent_worker('Summary Susan (Judge)', worker=summary_agent)
    workforce.add_single_agent_worker('Organizer Olivia (Helper)', worker=unannotated_agent)

    for essay_batch in Essays:
        for essay in essay_batch:
            full_text = essay.full_text

            position_discourse, position_other = extract_discourses(essay, 'Position')
            claim_discourse, claim_other = extract_discourses(essay, 'Claim')
            counterclaim_discourse, counterclaim_other = extract_discourses(essay, 'Counterclaim')
            rebuttal_discourse, rebuttal_other = extract_discourses(essay, 'Rebuttal')
            evidence_discourse, evidence_other = extract_discourses(essay, 'Evidence')
            summary_discourse, summary_other = extract_discourses(essay, 'Concluding Statement')
            unannotated_discourse, unannotated_other = extract_discourses(essay, 'Unannotated')

            update_judge(position_agent, position_template, full_text, position_discourse, position_other)
            update_judge(claim_agent, claim_template, full_text, claim_discourse, claim_other)
            update_judge(counterclaim_agent, counterclaim_template, full_text, counterclaim_discourse, counterclaim_other)
            update_judge(rebuttal_agent, rebuttal_template, full_text, rebuttal_discourse, rebuttal_other)
            update_judge(evidence_agent, evidence_template, full_text, evidence_discourse, evidence_other)
            update_judge(summary_agent, summary_template, full_text, summary_discourse, summary_other)
            update_judge(unannotated_agent, unannotated_template, full_text, unannotated_discourse, unannotated_other)

            task = Task(content=generate_task_content(essay), additional_info=full_text, id='0')
            result = workforce.process_task(task)
            print(result)

if __name__ == '__main__':
    main()

Command lines:

python filename.py

Extra dependencies:

Data loader and persona files

Steps to reproduce:

1. Install camel 0.2.1a and start a local Ollama server serving llama3 at http://localhost:11434.
2. Run the script above (python filename.py) with the data loader and persona files.
3. Observe the AttributeError raised when workforce.process_task(task) is called.

Traceback

Exception has occurred: AttributeError
'str' object has no attribute 'supports_tool_calling'
  File "/media/adnan/New Volume/Abdullah/camel/camel/agents/chat_agent.py", line 511, in step
    and self.model_type.supports_tool_calling
  File "/media/adnan/New Volume/Abdullah/camel/camel/workforce/workforce.py", line 289, in _find_assignee
    response = self.coordinator_agent.step(
  File "/media/adnan/New Volume/Abdullah/camel/camel/workforce/workforce.py", line 408, in _post_ready_tasks
    assignee_id = self._find_assignee(task=ready_task)
  File "/media/adnan/New Volume/Abdullah/camel/camel/workforce/workforce.py", line 443, in _listen_to_channel
    await self._post_ready_tasks()
  File "/media/adnan/New Volume/Abdullah/camel/camel/workforce/workforce.py", line 475, in start
    await self._listen_to_channel()
  File "/media/adnan/New Volume/Abdullah/camel/camel/workforce/workforce.py", line 153, in process_task
    asyncio.run(self.start())
  File "/media/adnan/New Volume/Abdullah/camel/camel/workforce/utils.py", line 63, in wrapper
    return func(self, *args, **kwargs)
  File "/media/adnan/New Volume/Abdullah/MultiAgentLLMs/essay_judge.py", line 183, in main
    result = workforce.process_task(task)
  File "/media/adnan/New Volume/Abdullah/MultiAgentLLMs/essay_judge.py", line 187, in <module>
    main()
AttributeError: 'str' object has no attribute 'supports_tool_calling'

Expected behavior

I expected this to work as described in the installation and quick start guides for creating a model with Ollama. Per the installation guide, model_type is the string 'llama3' when using ModelPlatformType.OLLAMA, but when the code runs through Workforce it raises AttributeError: 'str' object has no attribute 'supports_tool_calling', an attribute that a plain string of course does not have. I am on a deadline and would appreciate it if anyone could help me in this regard.
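
For reference, a possible local workaround (just a sketch, not an official fix) would be to guard the failing attribute access in chat_agent.py so that a plain-string model type falls back to False instead of crashing:

# Hypothetical local patch around camel/agents/chat_agent.py line 511 (sketch only).
# The traceback shows the condition contains "and self.model_type.supports_tool_calling";
# a defensive lookup avoids the crash when model_type is a plain string, as it is for
# Ollama models created with model_type="llama3".
supports_tool_calling = getattr(self.model_type, "supports_tool_calling", False)
# ...then use `supports_tool_calling` in place of the direct attribute access.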

Thanks!

Additional context

I am using the PERSUADE 2.0 dataset and a different persona for each agent worker, and I can provide these scripts if they would help. The code is a bit messy and in progress...

Wendong-Fan commented 2 days ago

@AbdullahMushtaq78 thanks for raising this issue! @WHALEEYE will look into this

AbdullahMushtaq78 commented 9 hours ago

@Wendong-Fan @WHALEEYE Any update guys?

WHALEEYE commented 6 hours ago

@AbdullahMushtaq78 Sorry about the delay. This bug comes from the design of our current ModelType, which doesn't provide much support for open-source models. We are refactoring ModelType, but it will take some time to complete. I'll ask @Wendong-Fan to prepare a hotfix targeting the model you are using; in the meantime, you can try using models offered by OpenAI to avoid running into this bug.
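
For example, a configuration along these lines should avoid the failing code path (a sketch only; it assumes OPENAI_API_KEY is set in your environment, and ModelType.GPT_4O_MINI is just one possible choice):

from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType

# Sketch: an OpenAI-backed coordinator/task model. Here model_type is a ModelType
# enum member rather than a plain string, so the supports_tool_calling check works.
openai_model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_4O_MINI,
    model_config_dict={"temperature": 0},
)

coordinator_agent_kwargs = {"model": openai_model}
task_agent_kwargs = {"model": openai_model}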