crewAIInc / crewAI

Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
https://crewai.com
MIT License

[BUG] CrewAgentExecutorMixin._create_long_term_memory fails to save long-term memory when using Azure OpenAI as the LLM #1518

Open · guiding opened this issue 1 week ago

guiding commented 1 week ago

Description

I use Azure OpenAI with a customized endpoint and model name, and the agent fails at CrewAgentExecutorMixin._create_long_term_memory.

Steps to Reproduce

1. Use a customized Azure OpenAI endpoint and model name:

       azure_llm = LLM(
           model=MODEL_NAME,
           base_url=CHAT_API,
           api_key=API_KEY,
           api_version=API_VERSION,
           extra_headers={xxxxxx},
       )

2. Enable memory in the Crew:

       tech_crew = Crew(
           agents=[researcher],
           tasks=[research_task],
           memory=True,
           process=Process.sequential,  # Tasks will be executed one after the other
       )

3. Enable the litellm log by setting litellm.set_verbose = True.

4. Call tech_crew.kickoff() and check that the result has no error (a consolidated repro sketch follows this list).
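
Putting the four steps together, a minimal repro sketch; the endpoint values and the researcher agent/task definitions below are placeholders standing in for the reporter's internal setup:

    import litellm
    from crewai import LLM, Agent, Crew, Process, Task

    litellm.set_verbose = True  # step 3: surface the failing litellm call

    # Step 1: customized Azure OpenAI endpoint and model name (placeholders).
    azure_llm = LLM(
        model="aide-gpt-4o-mini",                    # internal proxy deployment name
        base_url="https://example.internal/azure",   # hypothetical endpoint
        api_key="sk-...",
        api_version="2024-02-15-preview",
    )

    researcher = Agent(
        role="Researcher",
        goal="Research a topic",
        backstory="An example agent for the repro.",
        llm=azure_llm,
    )
    research_task = Task(
        description="Summarize recent AI news.",
        expected_output="A short summary.",
        agent=researcher,
    )

    # Step 2: enable memory in the Crew.
    tech_crew = Crew(
        agents=[researcher],
        tasks=[research_task],
        memory=True,
        process=Process.sequential,
    )

    # Step 4: kick off; saving long-term memory raises the BadRequestError below.
    tech_crew.kickoff()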

Expected behavior

Agent can run without any error.

Screenshots/Code snippets

Python310\Lib\site-packages\crewai\utilities\internal_instructor.py

The to_pydantic function calls litellm without passing the full parameter set, so the customized LLM parameters (api_base, api_key, api_version, extra headers) are dropped.

    def to_pydantic(self):
        messages = [{"role": "user", "content": self.content}]
        if self.instructions:
            messages.append({"role": "system", "content": self.instructions})

        model = self._client.chat.completions.create(
            model=self.llm.model, response_model=self.model, messages=messages
        )
        return model
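
For context on why the dropped parameters matter (an assumption about the internals based on instructor's documented from_litellm pattern, not on the crewAI source): the instructor client forwards its keyword arguments straight to litellm.completion, so whatever to_pydantic omits never reaches litellm:

    import instructor
    import litellm
    from pydantic import BaseModel

    class Evaluation(BaseModel):  # hypothetical response model for illustration
        summary: str

    # instructor wraps litellm.completion; create() forwards its keyword
    # arguments to that function unchanged.
    client = instructor.from_litellm(litellm.completion)

    # Mirrors the call above: only model, response_model, and messages are
    # sent, so litellm sees a bare deployment name it cannot route and
    # raises "litellm.BadRequestError: LLM Provider NOT provided".
    result = client.chat.completions.create(
        model="aide-gpt-4o-mini",
        response_model=Evaluation,
        messages=[{"role": "user", "content": "ping"}],
    )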

Operating System

Windows 10

Python Version

3.10

crewAI Version

0.76.2

crewAI Tools Version

0.13.2

Virtual Environment

Venv

Evidence

Error log:

    Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
    LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

    Provider List: https://docs.litellm.ai/docs/providers

    15:51:15 - LiteLLM:DEBUG: utils.py:4328 - Error occurred in getting api base - litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=aide-gpt-4o-mini
    Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers
    DEBUG:LiteLLM:Error occurred in getting api base - litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=aide-gpt-4o-mini
    Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers
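
The log confirms the diagnosis: litellm received only the bare model name. Independent of any library fix, litellm can resolve the provider from a prefix on the model string, so a direct call like the following (with placeholder endpoint values) routes correctly:

    import litellm

    # The "azure/" prefix tells litellm which provider to use, and the
    # explicit endpoint parameters point it at the custom deployment
    # (values below are placeholders, not the reporter's real endpoint).
    litellm.completion(
        model="azure/aide-gpt-4o-mini",
        api_base="https://example.internal/azure",
        api_key="sk-...",
        api_version="2024-02-15-preview",
        messages=[{"role": "user", "content": "ping"}],
    )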

Possible Solution

Suggested solution: follow the pattern in llm.py and pass the full parameter set when calling the LLM:

    def to_pydantic(self):
        messages = [{"role": "user", "content": self.content}]
        if self.instructions:
            messages.append({"role": "system", "content": self.instructions})

        params = {
            "model": self.llm.model,
            "messages": messages,
            "timeout": self.llm.timeout,
            "temperature": self.llm.temperature,
            "top_p": self.llm.top_p,
            "n": self.llm.n,
            "stop": self.llm.stop,
            "max_tokens": self.llm.max_tokens or self.llm.max_completion_tokens,
            "presence_penalty": self.llm.presence_penalty,
            "frequency_penalty": self.llm.frequency_penalty,
            "logit_bias": self.llm.logit_bias,
            "response_format": self.llm.response_format,
            "seed": self.llm.seed,
            "logprobs": self.llm.logprobs,
            "top_logprobs": self.llm.top_logprobs,
            "api_base": self.llm.base_url,
            "api_version": self.llm.api_version,
            "api_key": self.llm.api_key,
            "stream": False,
            "response_model": self.model,
            **self.llm.kwargs,
        }
        # Remove None values to avoid passing unnecessary parameters
        params = {k: v for k, v in params.items() if v is not None}
        model = self._client.chat.completions.create(**params)

        return model
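
Until a fix lands upstream, a possible stopgap is to monkey-patch the method at startup instead of editing site-packages. This is only a sketch: it assumes the class in internal_instructor.py is named InternalInstructor and exposes the llm, content, instructions, model, and _client attributes used above, and it forwards just the provider-routing parameters:

    from crewai.utilities.internal_instructor import InternalInstructor  # assumed class name

    def _patched_to_pydantic(self):
        messages = [{"role": "user", "content": self.content}]
        if self.instructions:
            messages.append({"role": "system", "content": self.instructions})
        params = {
            "model": self.llm.model,
            "messages": messages,
            "response_model": self.model,
            # Provider-routing parameters that the original call dropped.
            "api_base": self.llm.base_url,
            "api_version": self.llm.api_version,
            "api_key": self.llm.api_key,
        }
        # Drop unset values so only explicit overrides reach litellm.
        params = {k: v for k, v in params.items() if v is not None}
        return self._client.chat.completions.create(**params)

    InternalInstructor.to_pydantic = _patched_to_pydantic  # apply before kickoff()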

Additional context

None.

bhancockio commented 6 days ago

Hey @guiding !

What is aide-gpt-4o-mini?

The error you're getting looks like you're trying to use a model that doesn't exist. Here's a list of all valid Azure models: https://docs.litellm.ai/docs/providers

guiding commented 1 day ago

> Hey @guiding !
>
> What is aide-gpt-4o-mini?
>
> The error you're getting looks like you're trying to use a model that doesn't exist. Here's a list of all valid Azure models: https://docs.litellm.ai/docs/providers

It is an internal proxy endpoint model name based on Azure OpenAI, used only inside our company and served in the Azure cloud.
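
For a proxy like that, one possible mitigation on the caller's side (a sketch, not a confirmed fix, since code paths that drop base_url still need the endpoint from somewhere) is to put the provider in the model string and supply the endpoint through the environment variables litellm reads as fallbacks:

    import os
    from crewai import LLM

    # litellm falls back to these environment variables for Azure calls,
    # so even code paths that drop the explicit parameters can connect
    # (values are placeholders for the internal proxy).
    os.environ["AZURE_API_BASE"] = "https://example.internal/azure"
    os.environ["AZURE_API_KEY"] = "sk-..."
    os.environ["AZURE_API_VERSION"] = "2024-02-15-preview"

    azure_llm = LLM(model="azure/aide-gpt-4o-mini")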