assafelovic / gpt-researcher

LLM-based autonomous agent that does online comprehensive research on any given topic
https://gptr.dev
Apache License 2.0

gpt-researcher 0.8.0 pip package no longer works with anthropic models #678

Closed: nikkag closed this issue 1 week ago

nikkag commented 1 month ago

Since updating the gpt-researcher package to 0.8.0, I'm getting the following exception when trying to run the researcher with Anthropic models:

Traceback (most recent call last):
  File "/Users/nikkag/.virtualenvs/spa/lib/python3.11/site-packages/gpt_researcher/master/actions.py", line 91, in choose_agent
    response = await create_chat_completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/nikkag/.virtualenvs/spa/lib/python3.11/site-packages/gpt_researcher/utils/llm.py", line 88, in create_chat_completion
    provider = get_llm(llm_provider, model=model, temperature=temperature, max_tokens=max_tokens, openai_api_key=openai_api_key, **(llm_kwargs or {}))
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/nikkag/.virtualenvs/spa/lib/python3.11/site-packages/gpt_researcher/utils/llm.py", line 51, in get_llm
    return llm_provider(**kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^
TypeError: AnthropicProvider.__init__() got an unexpected keyword argument 'openai_api_key'

I took a quick look, and it seems openai_api_key was added as an argument to create_chat_completion and is used to instantiate the LLM regardless of provider.
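One way to avoid this class of bug is to forward only the keyword arguments a provider's constructor actually accepts. This is a minimal, hypothetical sketch (the provider classes and get_llm below are simplified stand-ins, not the library's actual code):

```python
import inspect

# Simplified stand-ins for the real provider classes.
class OpenAIProvider:
    def __init__(self, model, temperature, max_tokens, openai_api_key=None):
        self.model = model
        self.openai_api_key = openai_api_key

class AnthropicProvider:
    def __init__(self, model, temperature, max_tokens):
        self.model = model

def get_llm(provider_cls, **kwargs):
    # Only pass kwargs that appear in the provider's __init__ signature,
    # so an OpenAI-only kwarg doesn't crash other providers.
    accepted = inspect.signature(provider_cls.__init__).parameters
    filtered = {k: v for k, v in kwargs.items() if k in accepted}
    return provider_cls(**filtered)

# No TypeError, even though openai_api_key is always passed:
llm = get_llm(AnthropicProvider, model="claude-3", temperature=0.4,
              max_tokens=1024, openai_api_key="sk-...")
```

With this filtering in place, the caller can keep passing a uniform kwarg set while each provider receives only what it understands.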

leitdeux commented 1 month ago

I've noticed that this same error happens for other non-OpenAI providers, such as AzureOpenAI and Ollama. Adding openai_api_key to the parameters of __init__() in those providers seems to resolve the issue.
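A minimal sketch of the workaround described above, using a hypothetical simplified OllamaProvider: accept (and ignore) the extra openai_api_key keyword so instantiation no longer raises a TypeError.

```python
# Hypothetical simplified provider illustrating the workaround:
# the unexpected kwarg is accepted but intentionally unused.
class OllamaProvider:
    def __init__(self, model, temperature, max_tokens, openai_api_key=None):
        self.model = model
        self.temperature = temperature
        self.max_tokens = max_tokens
        # openai_api_key is accepted only so uniform call sites don't crash.

# This call now succeeds even with the OpenAI-specific kwarg present:
provider = OllamaProvider(model="llama3", temperature=0.0,
                          max_tokens=512, openai_api_key="unused")
```

This unblocks users quickly, though filtering kwargs at the call site is arguably cleaner than widening every provider's signature.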

assafelovic commented 1 month ago

Hey @nikkag , just merged some fixes that might resolve this issue. Can you please pull latest and upgrade the gpt-researcher pip package? Lmk if you still encounter this issue

nikkag commented 1 month ago

Hey @assafelovic, thanks for addressing this! However, it seems PR #681 introduced a new issue by removing the headers arg from the self.retriever(sub_query) call without making headers an optional arg in TavilySearch.

File "/Users/nikkag/.virtualenvs/spa/lib/python3.11/site-packages/gpt_researcher/master/agent.py", line 338, in __scrape_data_by_query
    retriever = self.retriever(sub_query)
                ^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: TavilySearch.__init__() missing 1 required positional argument: 'headers'
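A minimal sketch of one possible fix, assuming a simplified TavilySearch signature: give headers a default so callers that don't pass request headers still work.

```python
# Hypothetical simplified TavilySearch showing the optional-headers fix.
class TavilySearch:
    def __init__(self, query, headers=None):
        self.query = query
        # Default to an empty dict when no headers are supplied.
        self.headers = headers or {}

# Both call styles now succeed:
TavilySearch("example query")
TavilySearch("example query", headers={"tavily_api_key": "..."})
```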

nikkag commented 1 month ago

@assafelovic Just made PR #685 to address this issue.

gdlmx commented 1 month ago

It seems that this commit (5 days ago) added openai_api_key to every LLM provider, which causes this bug.

-     provider = get_llm(llm_provider, model=model, temperature=temperature, max_tokens=max_tokens, **llm_kwargs)
+     provider = get_llm(llm_provider, model=model, temperature=temperature, max_tokens=max_tokens, openai_api_key=openai_api_key, **(llm_kwargs or {}))

The assumption that every LLM provider needs an API key is questionable, let alone an OpenAI-specific openai_api_key.

This commit tried to fix it for Ollama and Ollama ONLY.

ElishaKay commented 1 month ago

Thanks for the heads up @nikkag @leitdeux @gdlmx

Some context: the reason for adding the openai_api_key argument was to support passing the OpenAI API key directly via an API request header (self.headers.get("openai_api_key")) instead of relying only on the OS environment variable.
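The intended behavior can be sketched as a small helper that prefers a key from the request headers and falls back to the environment variable. This is an illustrative sketch, not the project's actual implementation; the function name is hypothetical.

```python
import os

def resolve_openai_api_key(headers=None):
    """Prefer an API key passed via request headers, falling back to
    the OPENAI_API_KEY environment variable."""
    headers = headers or {}
    return headers.get("openai_api_key") or os.environ.get("OPENAI_API_KEY")

# Header value wins when present:
key = resolve_openai_api_key({"openai_api_key": "sk-from-header"})
```

Resolving the key this way, instead of threading openai_api_key through every provider constructor, keeps non-OpenAI providers unaffected.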

It seems this was resolved by reverting that feature in this PR.

@gdlmx, are you still getting the error after cloning master or are you running via the pip package?

@assafelovic, have we updated the pip package with the PR fix above? (Same bug report here from an hour ago.)

cc: @resvis

assafelovic commented 1 month ago

@ElishaKay just updated the pip package.