crewAIInc / crewAI

Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
https://crewai.com
MIT License

[BUG] Configuration of llm variable for agent doesn't work #1356

Closed: racso-dev closed this issue 6 days ago

racso-dev commented 1 week ago

Description

I'm getting the following error even though I explicitly specified that my agents should use gpt-4o-mini, which is actually the default, but apparently something is broken.

openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 8192 tokens, however you requested 26827 tokens (26827 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.", 'type': 'invalid_request_error', 'param': None, 'code': None}}

Steps to Reproduce

  1. crewai create crew some_crew
  2. pass an input variable large enough to push the prompt past a context length of 8192 tokens (see the sketch below)
  3. run the crew; you should get the error from OpenAI telling you that the model has a context length of 8192 tokens
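Concretely, for step 2, something like this triggers it (a minimal sketch; the module path and the "topic" input name follow the generated project template and are placeholders, any sufficiently large input will do):

from some_crew.crew import SomeCrew

# Any input large enough to push the rendered prompt past 8192 tokens.
big_text = "lorem ipsum " * 5000

SomeCrew().crew().kickoff(inputs={"topic": big_text})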

Expected behavior

When specifying the llm parameter for agents, it should be used!

Screenshots/Code snippets

from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from langchain_openai import ChatOpenAI

@CrewBase
class SomeCrew():
    """Some crew"""
    agents_config = 'config/agents.yaml'
    tasks_config = 'config/tasks.yaml'

    @agent
    def writer(self) -> Agent:
        return Agent(
            config=self.agents_config['writer'],
            verbose=True,
            # explicitly pin the model; gpt-4o-mini has a 128k context window
            llm=ChatOpenAI(temperature=0.7, model="gpt-4o-mini"),
        )

    @task
    def writing_task(self) -> Task:
        return Task(
            config=self.tasks_config['writing_task'],
        )

    @crew
    def crew(self) -> Crew:
        """Creates the Autoseo crew"""
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=True,
            memory=True,
            output_log_file='crew.log',
        )
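For what it's worth, I'd expect the post-migration style of passing the model to behave the same way; a sketch of the writer agent using crewai's own LLM wrapper instead of langchain's ChatOpenAI, assuming the installed version already exposes it:

from crewai import Agent, LLM

    @agent
    def writer(self) -> Agent:
        return Agent(
            config=self.agents_config['writer'],
            verbose=True,
            # crewai's LiteLLM-backed wrapper rather than langchain's ChatOpenAI
            llm=LLM(model="gpt-4o-mini", temperature=0.7),
        )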

Operating System

Ubuntu 24.04

Python Version

3.12

crewAI Version

0.63.6

crewAI Tools Version

0.63.6

Virtual Environment

Venv

Evidence

openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 8192 tokens, however you requested 26827 tokens (26827 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.", 'type': 'invalid_request_error', 'param': None, 'code': None}}

Possible Solution

Assuming it's related to your recent migration to LiteLLM.
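If so, calling LiteLLM directly with the same model name should accept a prompt well past 8192 tokens, which would isolate the problem to how crewAI resolves the agent's llm parameter. A sketch (the oversized prompt is just filler text):

import litellm

# gpt-4o-mini has a 128k context window, so this should not raise a
# BadRequestError if the model name is being resolved correctly.
messages = [{"role": "user", "content": "word " * 20000}]
response = litellm.completion(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)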

Additional context

When I use the old way of declaring agents, it works fine.
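That is, declaring the agent inline instead of through the YAML config, roughly like this (a sketch with placeholder strings):

from crewai import Agent
from langchain_openai import ChatOpenAI

writer = Agent(
    role="Writer",                       # placeholder
    goal="Write the requested content",  # placeholder
    backstory="An experienced writer",   # placeholder
    verbose=True,
    llm=ChatOpenAI(temperature=0.7, model="gpt-4o-mini"),
)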

joaomdmoura commented 1 week ago

Good catch, looking into it!

joaomdmoura commented 1 week ago

Trying to replicate this, one thing I realized is that gpt-4o-mini has a 128k context window, so the error seems odd. Will dig deeper.

joaomdmoura commented 6 days ago

Version 0.64.0 is out and fixes this :D Let me know if it's still an issue, but I was able to replicate it and fix it.

racso-dev commented 2 days ago

It now seems to be working fine indeed, thanks!