Closed · racso-dev closed this 6 days ago
Good catch, looking into it!
Trying to replicate this. One thing I realized is that gpt-4o-mini has a 128k context window, so the error seems odd. Will dig deeper.
Version 0.64.0 is out and fixes this :D Let me know if it's still an issue, but I was able to replicate it and fix it.
It now seems to be working fine indeed, thanks!
Description
I'm getting the following error even though I explicitly specified that my agents should use gpt-4o-mini, which is actually the default, but apparently something is broken.
Steps to Reproduce
Expected behavior
When specifying the llm parameter on agents, it should be used!
Screenshots/Code snippets
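(The original snippet is not included in the report; below is a minimal sketch of the setup described, assuming the standard crewAI Agent/Task/Crew constructors and that llm accepts a model-name string after the LiteLLM migration. The role, goal, and task text are illustrative, not from the report.)

```python
# Hypothetical repro sketch -- agent and task details are illustrative.
from crewai import Agent, Task, Crew

# llm explicitly set to gpt-4o-mini (128k context window)
researcher = Agent(
    role="Researcher",
    goal="Summarize a long document",
    backstory="An analyst who works with large inputs.",
    llm="gpt-4o-mini",  # explicitly specified, yet an 8192-token limit error appears
)

task = Task(
    description="Summarize a ~25k-token document.",
    expected_output="A short summary.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()  # raises openai.BadRequestError on 0.63.6
```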
Operating System
Ubuntu 24.04
Python Version
3.12
crewAI Version
0.63.6
crewAI Tools Version
0.63.6
Virtual Environment
Venv
Evidence
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 8192 tokens, however you requested 26827 tokens (26827 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.", 'type': 'invalid_request_error', 'param': None, 'code': None}}
Possible Solution
Assuming it's related to the recent migration to LiteLLM.
Additional context
When I use the old way of declaring agents, it works fine.
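(By "the old way" this presumably means passing a LangChain chat-model object rather than a model-name string; a sketch under that assumption, with illustrative agent details:)

```python
# Sketch of the pre-LiteLLM declaration style -- assumption: llm accepted
# a LangChain model object. Agent details are illustrative.
from crewai import Agent
from langchain_openai import ChatOpenAI

agent = Agent(
    role="Researcher",
    goal="Summarize a long document",
    backstory="An analyst who works with large inputs.",
    llm=ChatOpenAI(model="gpt-4o-mini"),  # this path reportedly works
)
```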