aylitat opened this issue 1 month ago
I have the same issue with:

```toml
crewai = {extras = ["tools"], version = "^0.65.2"}
```

Has anyone found a workaround?
For me it worked by setting up the LLM for Ollama like this:

```python
import os

from crewai import LLM

def create_ollama_llm():  # wrapper name assumed; the original comment only showed the return
    return LLM(
        model=os.getenv('OLLAMA_MODEL_LLAMA_31_8B'),
        temperature=0.0,
        base_url=os.getenv('OLLAMA_SERVER_REMOTE'),
        verbose=True,
    )
```

with the environment variables:

```
OLLAMA_SERVER_LOCAL=http://localhost:11434
OLLAMA_MODEL_LLAMA_31_8B=ollama/llama3.1:latest
```
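For completeness, here is a hedged sketch of wiring that helper into an agent; the role/goal/backstory values below are made up for illustration:

```python
from crewai import Agent

agent = Agent(
    role="Researcher",                          # illustrative values only
    goal="Answer questions about the indexed documents",
    backstory="A concise research assistant",
    llm=create_ollama_llm(),                    # the helper sketched above
)
```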
Based on this issue, it looks like you need to prepend the model name with openai/ so litellm treats it as an OpenAI-API-compatible endpoint:
https://github.com/crewAIInc/crewAI/issues/1456
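A minimal sketch of that workaround, assuming a generic OpenAI-compatible endpoint; the model name, URL, and key below are placeholders:

```python
from crewai import LLM

# Keep the real endpoint, but prefix the model name with "openai/" so
# litellm routes the call through its OpenAI-compatible path.
llm = LLM(
    model="openai/custom-model-name",             # bare "custom-model-name" triggers the error
    base_url="https://api.your-provider.com/v1",  # placeholder endpoint
    api_key="your-api-key",                       # placeholder key
)
```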
Description
When I use LLM to create an llm, it shows me this error:
```
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=moonshot-v1-32k
Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)`
```

Code:

```python
llm = get_llm(
    api_key=BASE_CONFIG.MOONSHOT_API_KEY,
    base_url=BASE_CONFIG.BASE_URL,
    model=BASE_CONFIG.MODEL,
)
```
But I am following the custom-model example:

```python
llm = LLM(
    model="custom-model-name",
    base_url="https://api.your-provider.com/v1",
    api_key="your-api-key",
)
agent = Agent(llm=llm, ...)
```
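If I read litellm's routing correctly, the error comes from provider resolution: litellm parses the provider from a `provider/model` prefix, and a bare custom name gives it nothing to parse. A small sketch of that behavior using litellm's resolution helper; whether the bare name raises depends on your litellm version:

```python
import litellm

# A bare custom name gives litellm no provider to route to, so resolution
# fails with the BadRequestError quoted above (version-dependent).
try:
    litellm.get_llm_provider(model="moonshot-v1-32k")
except litellm.exceptions.BadRequestError as e:
    print(e)

# With an explicit prefix the same name resolves cleanly.
model, provider, _, _ = litellm.get_llm_provider(model="openai/moonshot-v1-32k")
print(model, provider)  # -> moonshot-v1-32k openai
```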
Steps to Reproduce
https://docs.crewai.com/how-to/LLM-Connections/#using-local-models-with-ollama
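For reference, the linked page configures a local Ollama model roughly like this (a sketch based on the docs; the model tag and port are the defaults):

```python
from crewai import LLM

llm = LLM(
    model="ollama/llama3.1",           # provider prefix tells litellm to use Ollama
    base_url="http://localhost:11434", # default Ollama port
)
```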
Expected behavior
The call should pass (the LLM should be created without the provider error).
Screenshots/Code snippets
```python
llm = LLM(
    model="custom-model-name",
    base_url="https://api.your-provider.com/v1",
    api_key="your-api-key",
)
agent = Agent(llm=llm, ...)
```
Operating System
Windows 10
Python Version
3.10
crewAI Version
latest
crewAI Tools Version
latest
Virtual Environment
Venv
Evidence
Possible Solution
None
Additional context
None