Open · @guiding opened this issue 1 week ago
Hey @guiding !
What is `aide-gpt-4o-mini`?
The error you're getting looks like you're trying to use a model that doesn't exist. Here's a list of all valid Azure models: https://docs.litellm.ai/docs/providers
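For Azure deployments, LiteLLM generally expects a provider prefix on the model string; a minimal sketch (the endpoint values and API version here are placeholders, not your actual config):

```python
import litellm

# LiteLLM infers the provider from the "azure/" prefix; a bare deployment
# name like "aide-gpt-4o-mini" cannot be routed and raises BadRequestError.
response = litellm.completion(
    model="azure/aide-gpt-4o-mini",  # "<provider>/<deployment-name>"
    api_base="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<API_KEY>",
    api_version="2024-02-15-preview",  # example version
    messages=[{"role": "user", "content": "Hello"}],
)
```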
It is an internal proxy endpoint model name based on Azure OpenAI, used only within our company and served in the Azure cloud.
Description
I use Azure OpenAI with a customized endpoint and model name, and the Agent fails at `CrewAgentExecutorMixin._create_long_term_memory`.
Steps to Reproduce
1. Use a customized Azure OpenAI endpoint and model name:

```python
azure_llm = LLM(
    model=MODEL_NAME,
    base_url=CHAT_API,
    api_key=API_KEY,
    api_version=API_VERSION,
    extra_headers={xxxxxx}
)
```

2. Enable memory in the Crew:

```python
tech_crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    memory=True,
    process=Process.sequential,  # Tasks will be executed one after the other
)
```

3. Enable the LiteLLM log by setting its `set_verbose=True`. (A combined runnable sketch follows below.)
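For convenience, a minimal end-to-end sketch combining the steps above; the endpoint values are placeholders, and the `researcher`/`research_task` definitions are assumed trivial stand-ins, not part of the original report:

```python
import litellm
from crewai import Agent, Crew, LLM, Process, Task

litellm.set_verbose = True  # step 3: enable the LiteLLM debug log

# Step 1: customized Azure OpenAI endpoint and model name (placeholders).
azure_llm = LLM(
    model="aide-gpt-4o-mini",
    base_url="https://internal-proxy.example.com",
    api_key="<API_KEY>",
    api_version="2024-02-15-preview",
)

researcher = Agent(
    role="Researcher",
    goal="Summarize a topic",
    backstory="Minimal agent used to reproduce the memory bug",
    llm=azure_llm,
)
research_task = Task(
    description="Summarize the CrewAI memory feature in one sentence.",
    expected_output="A one-sentence summary.",
    agent=researcher,
)

# Step 2: memory=True makes the executor call _create_long_term_memory
# after the task completes, which is where the error surfaces.
tech_crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    memory=True,
    process=Process.sequential,
)
tech_crew.kickoff()
```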
Expected behavior
The Agent can run without any error.
Screenshots/Code snippets
Python310\Lib\site-packages\crewai\utilities\internal_instructor.py
The `to_pydantic` function calls litellm without passing the full parameters, so the customized LLM params go missing.
```python
def to_pydantic(self):
    messages = [{"role": "user", "content": self.content}]
    if self.instructions:
        messages.append({"role": "system", "content": self.instructions})
```
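To make the failure mode concrete, here is a standalone sketch (not the crewAI source) of what effectively happens when only the bare model name reaches litellm:

```python
import litellm

# Standalone sketch of the failure mechanism: calling litellm with only the
# bare model name, as internal_instructor.py effectively does, drops the
# custom endpoint config, so provider detection fails for a custom name
# like "aide-gpt-4o-mini".
litellm.completion(
    model="aide-gpt-4o-mini",  # no provider prefix, no api_base/api_key
    messages=[{"role": "user", "content": "hi"}],
)
# -> litellm.BadRequestError: LLM Provider NOT provided. ...
```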
Operating System
Windows 10
Python Version
3.10
crewAI Version
0.76.2
crewAI Tools Version
0.13.2
Virtual Environment
Venv
Evidence
error log:
```
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
Provider List: https://docs.litellm.ai/docs/providers

15:51:15 - LiteLLM:DEBUG: utils.py:4328 - Error occurred in getting api base - litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=aide-gpt-4o-mini
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers
```
Possible Solution
Suggested solution: following llm.py, pass the detailed params when calling the LLM:
```python
def to_pydantic(self):
    messages = [{"role": "user", "content": self.content}]
    if self.instructions:
        messages.append({"role": "system", "content": self.instructions})
```
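A fleshed-out sketch of that suggestion; the instructor-over-litellm plumbing and the exact attribute names on `self.llm` are assumptions mirroring the LLM config shown in the reproduction steps, not a confirmed patch:

```python
def to_pydantic(self):
    messages = [{"role": "user", "content": self.content}]
    if self.instructions:
        messages.append({"role": "system", "content": self.instructions})
    # Assumption: self._client wraps litellm.completion (instructor-style),
    # so extra kwargs are forwarded to litellm. Pass the full LLM config,
    # as llm.py does, instead of only the bare model name.
    return self._client.chat.completions.create(
        model=self.llm.model,
        response_model=self.model,
        messages=messages,
        base_url=self.llm.base_url,
        api_key=self.llm.api_key,
        api_version=self.llm.api_version,
        extra_headers=self.llm.extra_headers,
    )
```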
Additional context
None.