Closed · proto1994 closed this 1 month ago
Hi @proto1994
Can you try this? crewAI routes model calls through LiteLLM, which infers the provider from a prefix on the model string, so your custom deployment name needs an explicit openai/ prefix:
llm = LLM(
temperature=0.0,
base_url=os.environ["OPENAI_API_BASE"],
model="openai/455-gpt-4o__2024-05-13",
api_key=os.environ["OPENAI_API_KEY"],
)
I am using this as a reference.
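For context, crewAI hands the completion call to LiteLLM, which parses the provider out of the model string's prefix. Here is a minimal sketch of the equivalent direct LiteLLM call, assuming the same OPENAI_API_BASE and OPENAI_API_KEY environment variables (the message content is just a placeholder):

import os
import litellm

# LiteLLM routes by the "provider/" prefix of the model string;
# "openai/..." selects the OpenAI-compatible chat completions API
# served at the given api_base.
response = litellm.completion(
    model="openai/455-gpt-4o__2024-05-13",
    api_base=os.environ["OPENAI_API_BASE"],
    api_key=os.environ["OPENAI_API_KEY"],
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)

Without the openai/ prefix, LiteLLM cannot map a custom deployment name like 455-gpt-4o__2024-05-13 to any provider, which is exactly the BadRequestError shown in the evidence below.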
@punitchauhan771 It works!!! Thank you very much
Description
Hi, I ran into a problem when calling an OpenAI-compatible endpoint in the following way
Steps to Reproduce
Run the following code
Expected behavior
Correct output
Screenshots/Code snippets
import os

from crewai import LLM, Agent, Crew, Task
from crewai_tools import SerperDevTool

llm = LLM(
    temperature=0.0,
    base_url=os.environ["OPENAI_API_BASE"],
    model="455-gpt-4o__2024-05-13",
    api_key=os.environ["OPENAI_API_KEY"],
)

research_agent = Agent(
    role='Researcher',
    goal='Find and summarize the latest AI news',
    backstory="""You're a researcher at a large company.
    You're responsible for analyzing data and providing insights to the business.""",
    llm=llm,
    verbose=True
)

# Tool to perform a semantic search for a specified query across the internet.
search_tool = SerperDevTool()

task = Task(
    description='Find and summarize the latest AI news',
    expected_output='A bullet list summary of the top 5 most important AI news',
    agent=research_agent,
    tools=[search_tool]
)

crew = Crew(
    agents=[research_agent],
    tasks=[task],
    verbose=True
)

result = crew.kickoff()
print(result)
Operating System
Ubuntu 20.04
Python Version
3.12
crewAI Version
0.70.1
crewAI Tools Version
0.12.1
Virtual Environment
Venv
Evidence
Provider List: https://docs.litellm.ai/docs/providers
2024-10-16 21:40:27,109 - 8221871104 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable
Provider List: https://docs.litellm.ai/docs/providers
2024-10-16 21:40:27,121 - 8221871104 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable
Provider List: https://docs.litellm.ai/docs/providers
2024-10-16 21:40:27,123 - 8221871104 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable
Provider List: https://docs.litellm.ai/docs/providers
2024-10-16 21:40:27,125 - 8221871104 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable
Agent: Researcher
Task: Find and summarize the latest AI news
2024-10-16 21:40:27,139 - 8221871104 - llm.py-llm:161 - ERROR: LiteLLM call failed: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=455-gpt-4o__2024-05-13
Pass model as E.g. For 'Huggingface' inference endpoints pass in
completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers
Provider List: https://docs.litellm.ai/docs/providers
2024-10-16 21:40:27,140 - 8221871104 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable
Agent: Researcher
Task: Find and summarize the latest AI news
2024-10-16 21:40:27,146 - 8221871104 - llm.py-llm:161 - ERROR: LiteLLM call failed: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=455-gpt-4o__2024-05-13
Pass model as E.g. For 'Huggingface' inference endpoints pass in
completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers
Provider List: https://docs.litellm.ai/docs/providers
2024-10-16 21:40:27,147 - 8221871104 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable
Agent: Researcher
Task: Find and summarize the latest AI news
2024-10-16 21:40:27,154 - 8221871104 - llm.py-llm:161 - ERROR: LiteLLM call failed: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=455-gpt-4o__2024-05-13
Pass model as E.g. For 'Huggingface' inference endpoints pass in
completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers
Traceback (most recent call last):
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/crewai/agent.py", line 228, in execute_task
result = self.agent_executor.invoke(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 92, in invoke
formatted_answer = self._invoke_loop()
^^^^^^^^^^^^^^^^^^^
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 173, in _invoke_loop
raise e
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 113, in _invoke_loop
answer = self.llm.call(
^^^^^^^^^^^^^^
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/crewai/llm.py", line 155, in call
response = litellm.completion(**params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/litellm/utils.py", line 1006, in wrapper
raise e
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/litellm/utils.py", line 896, in wrapper
result = original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/litellm/main.py", line 2959, in completion
raise exception_type(
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/litellm/main.py", line 858, in completion
model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
^^^^^^^^^^^^^^^^^
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 520, in get_llm_provider
raise e
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 497, in get_llm_provider
raise litellm.exceptions.BadRequestError( # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=455-gpt-4o__2024-05-13
Pass model as E.g. For 'Huggingface' inference endpoints pass in
completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers

During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/crewai/agent.py", line 228, in execute_task
result = self.agent_executor.invoke(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 92, in invoke
formatted_answer = self._invoke_loop()
^^^^^^^^^^^^^^^^^^^
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 173, in _invoke_loop
raise e
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 113, in _invoke_loop
answer = self.llm.call(
^^^^^^^^^^^^^^
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/crewai/llm.py", line 155, in call
response = litellm.completion(**params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/litellm/utils.py", line 1006, in wrapper
raise e
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/litellm/utils.py", line 896, in wrapper
result = original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/litellm/main.py", line 2959, in completion
raise exception_type(
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/litellm/main.py", line 858, in completion
model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
^^^^^^^^^^^^^^^^^
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 520, in get_llm_provider
raise e
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 497, in get_llm_provider
raise litellm.exceptions.BadRequestError( # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=455-gpt-4o__2024-05-13
Pass model as E.g. For 'Huggingface' inference endpoints pass in
completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers

During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/proto/flight/roamrank/search-hot-agent/app.py", line 56, in <module>
result = crew.kickoff()
^^^^^^^^^^^^^^
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/crewai/crew.py", line 490, in kickoff
result = self._run_sequential_process()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/crewai/crew.py", line 594, in _run_sequential_process
return self._execute_tasks(self.tasks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/crewai/crew.py", line 692, in _execute_tasks
task_output = task.execute_sync(
^^^^^^^^^^^^^^^^^^
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/crewai/task.py", line 191, in execute_sync
return self._execute_core(agent, context, tools)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/crewai/task.py", line 247, in _execute_core
result = agent.execute_task(
^^^^^^^^^^^^^^^^^^^
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/crewai/agent.py", line 240, in execute_task
result = self.execute_task(task, context, tools)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/crewai/agent.py", line 240, in execute_task
result = self.execute_task(task, context, tools)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/crewai/agent.py", line 239, in execute_task
raise e
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/crewai/agent.py", line 228, in execute_task
result = self.agent_executor.invoke(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 92, in invoke
formatted_answer = self._invoke_loop()
^^^^^^^^^^^^^^^^^^^
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 173, in _invoke_loop
raise e
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 113, in _invoke_loop
answer = self.llm.call(
^^^^^^^^^^^^^^
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/crewai/llm.py", line 155, in call
response = litellm.completion(**params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/litellm/utils.py", line 1006, in wrapper
raise e
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/litellm/utils.py", line 896, in wrapper
result = original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/litellm/main.py", line 2959, in completion
raise exception_type(
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/litellm/main.py", line 858, in completion
model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
^^^^^^^^^^^^^^^^^
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 520, in get_llm_provider
raise e
File "/Users/proto/flight/roamrank/search-hot-agent/venv/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 497, in get_llm_provider
raise litellm.exceptions.BadRequestError( # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=455-gpt-4o__2024-05-13
Pass model as E.g. For 'Huggingface' inference endpoints pass in
completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers
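For reference, the failing check can be reproduced in isolation with LiteLLM's get_llm_provider helper, the same function that raises at the bottom of the traceback above. A minimal sketch (nothing here is crewAI-specific):

import litellm
from litellm import get_llm_provider

# With an explicit provider prefix, LiteLLM can resolve the backend;
# the helper returns (model, provider, dynamic_api_key, api_base).
model, provider, _, _ = get_llm_provider(model="openai/455-gpt-4o__2024-05-13")
print(provider)  # openai

# A bare custom deployment name matches no known provider or model alias,
# so the same BadRequestError as in the traceback is raised.
try:
    get_llm_provider(model="455-gpt-4o__2024-05-13")
except litellm.exceptions.BadRequestError as err:
    print(err)  # LLM Provider NOT provided. ...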
Possible Solution
None
Additional context
None