Closed: nikkag closed this issue 1 week ago
I've noticed that this same error happens for other non-OpenAI providers such as AzureOpenAI and Ollama as well. Adding `openai_api_key` to the parameters of the `__init__()` of these providers seems to resolve the issue.
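For illustration, a minimal sketch of that workaround (the class and parameter names below are assumptions, not the actual gpt-researcher source):

```python
# Hypothetical provider wrapper: accept (and ignore) the
# openai_api_key kwarg that get_llm now forwards to every provider,
# so instantiation no longer raises a TypeError.
class OllamaProvider:
    def __init__(self, model, temperature=0.55, max_tokens=None,
                 openai_api_key=None, **kwargs):
        # Ollama never uses openai_api_key; it is accepted here only
        # so the unconditional kwarg doesn't crash __init__().
        self.model = model
        self.temperature = temperature
        self.max_tokens = max_tokens
```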
Hey @nikkag, just merged some fixes that might resolve this issue. Can you please pull the latest and upgrade the `gpt-researcher` pip package? Lmk if you still encounter this issue.
Hey @assafelovic, thanks for addressing this! However, it seems like PR #681 introduced a new issue by removing the `headers` arg from the `self.retriever(sub_query)` call without making `headers` an optional arg in `TavilySearch`.
File "/Users/nikkag/.virtualenvs/spa/lib/python3.11/site-packages/gpt_researcher/master/agent.py", line 338, in __scrape_data_by_query
retriever = self.retriever(sub_query)
^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: TavilySearch.__init__() missing 1 required positional argument: 'headers'
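For context, a minimal sketch of one way to make `headers` optional so the bare `self.retriever(sub_query)` call keeps working (the signature below is an assumption, not the actual source):

```python
import os

class TavilySearch:
    def __init__(self, query, headers=None):
        self.query = query
        # Default to an empty dict so the retriever can be built
        # without per-request headers.
        self.headers = headers or {}
        # Prefer a key passed via headers; fall back to the env var.
        self.api_key = (self.headers.get("tavily_api_key")
                        or os.environ.get("TAVILY_API_KEY"))
```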
@assafelovic Just made PR #685 to address this issue.
It seems that this commit (5 days ago) added `openai_api_key` to every LLM provider, which causes this bug.
```diff
- provider = get_llm(llm_provider, model=model, temperature=temperature, max_tokens=max_tokens, **llm_kwargs)
+ provider = get_llm(llm_provider, model=model, temperature=temperature, max_tokens=max_tokens, openai_api_key=openai_api_key, **(llm_kwargs or {}))
```
The assumption that every LLM needs an `api_key` is questionable, let alone an `openai_api_key`.
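A more provider-aware alternative might forward the key only to providers that consume it. A sketch under that assumption (the helper name is hypothetical):

```python
def build_llm_kwargs(llm_provider, openai_api_key=None, llm_kwargs=None):
    # Start from the caller-supplied kwargs, guarding against None.
    kwargs = dict(llm_kwargs or {})
    # Only OpenAI-family providers understand openai_api_key.
    if llm_provider in ("openai", "azureopenai") and openai_api_key:
        kwargs["openai_api_key"] = openai_api_key
    return kwargs

# Would slot into the call above as:
# provider = get_llm(llm_provider, model=model, temperature=temperature,
#                    max_tokens=max_tokens,
#                    **build_llm_kwargs(llm_provider, openai_api_key, llm_kwargs))
```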
This commit tried to fix it for Ollama and Ollama ONLY.
Thanks for the heads up @nikkag @leitdeux @gdlmx
Some context: the reason for adding the `openai_api_key` argument was to support passing the OpenAI API key directly via an API request header (`self.headers.get("openai_api_key")`) instead of relying only on the OS environment variable.
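In other words, something along these lines (a sketch of the intent with assumed names, not the reverted code):

```python
import os

def resolve_openai_api_key(headers=None):
    # Prefer a key supplied per request via headers; otherwise fall
    # back to the OS environment variable.
    headers = headers or {}
    return headers.get("openai_api_key") or os.environ.get("OPENAI_API_KEY")
```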
It seems this was resolved by reverting that feature in this PR.
@gdlmx, are you still getting the error after cloning master or are you running via the pip package?
@assafelovic, have we updated the pip package with the PR fix above? (Same bug report here from an hour ago.)
cc: @resvis
@ElishaKay just updated the pip package.
Since updating the `gpt-researcher` package to 0.8.0, I'm getting the following exception when trying to run the researcher with Anthropic models. I did a quick look, and it seems like `openai_api_key` was added as an argument to `create_chat_completion` and is used to instantiate the LLM regardless of provider.
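A minimal reproduction of the failure mode (an illustrative stand-in class, not the real Anthropic client):

```python
# Any provider whose __init__ does not accept openai_api_key fails
# as soon as the kwarg is injected unconditionally.
class AnthropicProvider:
    def __init__(self, model, temperature=0.4):
        self.model = model
        self.temperature = temperature

kwargs = {"model": "claude-3", "temperature": 0.4,
          "openai_api_key": "sk-..."}  # injected for every provider
AnthropicProvider(**kwargs)
# TypeError: __init__() got an unexpected keyword argument 'openai_api_key'
```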