crewAIInc / crewAI

Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
https://crewai.com
MIT License
18.54k stars 2.55k forks

Can't use a local model in Crew #657

Closed ABR-X closed 2 days ago

ABR-X commented 3 months ago

Hi, I want to use Phi3 model with the Crew, but it requires an Open AI key.

crew = Crew(
    agents=[agent],
    tasks=[task],
    process=Process.hierarchical,
    manager_llm=ChatOllama(model="phi3"),
)

result = crew.kickoff()
print(result)

The Error:


    ValidationError                           Traceback (most recent call last)
    ~\AppData\Local\Temp\ipykernel_14684\2191676005.py in <cell line: 9>()
          7 )
          8
    ----> 9 result = crew.kickoff()
         10 print(result)

    ~\miniconda3\envs\webscraping\lib\site-packages\crewai\crew.py in kickoff(self)
        145             return self._run_sequential_process()
        146         if self.process == Process.hierarchical:
    --> 147             return self._run_hierarchical_process()
        148
        149         raise NotImplementedError(

    ~\miniconda3\envs\webscraping\lib\site-packages\crewai\crew.py in _run_hierarchical_process(self)
        181
        182         i18n = I18N(language=self.language)
    --> 183         manager = Agent(
        184             role=i18n.retrieve("hierarchical_manager_agent", "role"),
        185             goal=i18n.retrieve("hierarchical_manager_agent", "goal"),

    [... skipping hidden 1 frame]

    ~\miniconda3\envs\webscraping\lib\site-packages\crewai\agent.py in <lambda>()
         93     i18n: I18N = Field(default=I18N(), description="Internationalization settings.")
         94     llm: Any = Field(
    ---> 95         default_factory=lambda: ChatOpenAI(
         96             model="gpt-4",
         97         ),

    ~\miniconda3\envs\webscraping\lib\site-packages\langchain_core\load\serializable.py in __init__(self, **kwargs)
        105
        106     def __init__(self, **kwargs: Any) -> None:
    --> 107         super().__init__(**kwargs)
        108         self._lc_kwargs = kwargs
        109

    ~\miniconda3\envs\webscraping\lib\site-packages\pydantic\v1\main.py in __init__(__pydantic_self__, **data)
        339         values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
        340         if validation_error:
    --> 341             raise validation_error
        342         try:
        343             object_setattr(__pydantic_self__, '__dict__', values)

    ValidationError: 1 validation error for ChatOpenAI
    __root__
      Did not find openai_api_key, please add an environment variable `OPENAI_API_KEY` which contains it, or pass `openai_api_key` as a named parameter. (type=value_error)

gadgethome commented 3 months ago

Hi, phi3 does work with ollama. You can set up a dummy key in the code or in a .env file. If you have access to the crewai Discord, there are some examples there. Thanks

ABR-X commented 3 months ago

> Hi phi3 does work with ollama. You can set up a dummy key in the code or .env. If you have access to the crewai discord, there are some examples on there. Thanks

@gadgethome Hi, thanks for your response. I don't have access to the Discord channel. I did create a .env file and set a dummy OPENAI_API_KEY, but there is an issue:

Incorrect API key provided

gadgethome commented 3 months ago

You can do this in a few different ways. I'm using ollama, but it should work with lmstudio as well.

    os.environ["OPENAI_API_BASE"] = 'http://localhost:11434/v1'
    os.environ["OPENAI_MODEL_NAME"] = 'openhermes'
    os.environ["OPENAI_API_KEY"] = 'ollama'

or

    llm = ChatOpenAI(
        model="phi3",
        base_url="http://localhost:11434/v1",
        api_key="ollama",  # something random
        temperature=0,
    )
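Putting the two options together, a minimal sketch of the environment-variable route (assuming a local Ollama server on its default port; the model name is a placeholder for whatever you have pulled, and the key value is arbitrary since Ollama ignores it):

```python
import os

# Point every OpenAI-compatible client in this process at the local
# Ollama server. These must be set BEFORE crewai/langchain constructs
# any default ChatOpenAI instance.
os.environ["OPENAI_API_BASE"] = "http://localhost:11434/v1"  # Ollama's OpenAI-compatible endpoint
os.environ["OPENAI_MODEL_NAME"] = "phi3"                     # a model already pulled via `ollama pull phi3`
os.environ["OPENAI_API_KEY"] = "ollama"                      # any non-empty string; Ollama ignores it

# With the environment set, the earlier Crew definition should no longer
# raise the openai_api_key ValidationError:
# crew = Crew(agents=[agent], tasks=[task],
#             process=Process.hierarchical,
#             manager_llm=ChatOllama(model="phi3"))
```

Setting the variables in code (or in a .env file loaded at startup) matters because the default `ChatOpenAI` reads them at construction time.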

ABR-X commented 3 months ago

Hi, I tried this as well, but it shows this error:

{'error': {'message': "model 'gpt-4' not found, try pulling it first", 'type': 'api_error', 'param': None, 'code': None}}

I don't know why it is still calling the OpenAI model, because I did specify that I want to use the Phi3 model. When I create agents, I don't have this problem:

agent = Agent(
    role='Maths Specialist',
    goal='Perform multiplication and addition operations on numbers',
    llm=ChatOllama(model="phi3"),
    tools=Math_tools.tools(),  # array with tools
    backstory="""As a Math Expert, your mission is to provide the result of
        multiplication and addition operations. The results that you
        provide will help resolve math problems accurately.""",
    verbose=True,
)

The issue occurs only when I try to define the manager_llm when creating a Crew.
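The traceback above hints at why: `Agent` declares its `llm` field with a `default_factory` that builds `ChatOpenAI(model="gpt-4")`, and the "gpt-4 not found" error suggests that in this crewai version the internal manager `Agent` created by `_run_hierarchical_process` falls back to that default instead of using `manager_llm`. A simplified, crewai-free sketch of the `default_factory` mechanism (the `DummyAgent`/`DummyLLM` names are hypothetical, purely for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class DummyLLM:
    # Stand-in for ChatOpenAI: just remembers which model it targets.
    model: str

@dataclass
class DummyAgent:
    role: str
    # Mirrors crewai's pattern: if no llm is passed, a default "gpt-4"
    # client is created, regardless of any manager_llm configured
    # elsewhere on the Crew.
    llm: DummyLLM = field(default_factory=lambda: DummyLLM(model="gpt-4"))

worker = DummyAgent(role="maths", llm=DummyLLM(model="phi3"))
manager = DummyAgent(role="manager")  # llm omitted -> default factory fires

print(worker.llm.model)   # phi3
print(manager.llm.model)  # gpt-4
```

This is consistent with what you observed: agents you construct with an explicit `llm=` use Phi3, while the internally created manager still tries to reach gpt-4. Upgrading crewai or redirecting the `OPENAI_*` environment variables at the Ollama endpoint sidesteps it.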

JamesStallings commented 1 month ago

> You can do this in a few different ways. I'm using ollama but should work with lmstudio as well.
>
>     os.environ["OPENAI_API_BASE"] = 'http://localhost:11434/v1'
>     os.environ["OPENAI_MODEL_NAME"] = 'openhermes'
>     os.environ["OPENAI_API_KEY"] = 'ollama'
>
> or
>
>     llm = ChatOpenAI(
>         model="phi3",
>         base_url="http://localhost:11434/v1",
>         api_key="ollama",  # something random
>         temperature=0,
>     )

THANKS FOR THIS. It got me out of the weeds.

Cheers!

github-actions[bot] commented 1 week ago

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions[bot] commented 2 days ago

This issue was closed because it has been stalled for 5 days with no activity.