The original AzureOpenAIProvider class takes a deployment-name argument, but the high-level LLM call (get_llm) passes model as the interface argument. This needs to change: AzureOpenAIProvider should accept a model argument instead of the deployment name.
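A minimal sketch of the proposed change, assuming the provider wraps langchain's AzureChatOpenAI (the AZURE_OPENAI_DEPLOYMENT variable and the fallback from deployment to model name are illustrative assumptions, not the project's actual code):

```python
import os

from langchain_openai import AzureChatOpenAI


class AzureOpenAIProvider:
    def __init__(self, model: str, temperature: float = 0.4, max_tokens: int = 4000, **kwargs):
        # Accept `model` so get_llm(llm_provider, model=..., ...) works for Azure
        # the same way it does for the other providers.
        # The deployment can still be configured via an env var; falling back to the
        # model name is an illustrative assumption.
        deployment = os.environ.get("AZURE_OPENAI_DEPLOYMENT", model)
        # Endpoint, API key and API version are read from the usual
        # AZURE_OPENAI_ENDPOINT / AZURE_OPENAI_API_KEY / OPENAI_API_VERSION env vars.
        self.llm = AzureChatOpenAI(
            azure_deployment=deployment,  # routes the request to the Azure deployment
            model=model,                  # keeps the model name for token accounting
            temperature=temperature,
            max_tokens=max_tokens,
            **kwargs,
        )
```

With a signature like this, get_llm can keep passing model uniformly across providers, and the Azure-specific deployment mapping stays inside the provider.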
File "/home/xpe1szh/miniconda3/envs/gpt-research/lib/python3.10/site-packages/fastapi/routing.py", line 348, in app
await dependant.call(**values)
File "/home/xpe1szh/repo-gh/gpt-researcher/backend/server.py", line 53, in websocket_endpoint
report = await manager.start_streaming(task, report_type, report_source, websocket)
File "/home/xpe1szh/repo-gh/gpt-researcher/backend/websocket_manager.py", line 57, in start_streaming
report = await run_agent(task, report_type, report_source, websocket)
File "/home/xpe1szh/repo-gh/gpt-researcher/backend/websocket_manager.py", line 75, in run_agent
report = await researcher.run()
File "/home/xpe1szh/repo-gh/gpt-researcher/backend/report_type/basic_report/basic_report.py", line 18, in run
await researcher.conduct_research()
File "/home/xpe1szh/repo-gh/gpt-researcher/gpt_researcher/master/agent.py", line 96, in conduct_research
context = await self.__get_context_by_search(self.query)
File "/home/xpe1szh/repo-gh/gpt-researcher/gpt_researcher/master/agent.py", line 177, in __get_context_by_search
sub_queries = await get_sub_queries(query=query, agent_role_prompt=self.role,
File "/home/xpe1szh/repo-gh/gpt-researcher/gpt_researcher/master/actions.py", line 100, in get_sub_queries
response = await create_chat_completion(
File "/home/xpe1szh/repo-gh/gpt-researcher/gpt_researcher/utils/llm.py", line 88, in create_chat_completion
provider = get_llm(llm_provider, model=model, temperature=temperature, max_tokens=max_tokens, **llm_kwargs)
File "/home/xpe1szh/repo-gh/gpt-researcher/gpt_researcher/utils/llm.py", line 52, in get_llm
return llm_provider(**kwargs)
TypeError: AzureOpenAIProvider.__init__() got an unexpected keyword argument 'model'
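For reference, the two frames in llm.py show get_llm forwarding its keyword arguments unchanged into the provider constructor. A simplified reconstruction of the mismatch (the real get_llm resolves the provider from a string, and the Azure constructor's exact parameters may differ):

```python
def get_llm(llm_provider, **kwargs):
    # kwargs contains model, temperature and max_tokens for every provider
    return llm_provider(**kwargs)


class AzureOpenAIProvider:
    # current constructor built around the deployment name (assumed signature)
    def __init__(self, deployment_name, temperature, max_tokens):
        ...


get_llm(AzureOpenAIProvider, model="gpt-4", temperature=0.4, max_tokens=4000)
# TypeError: AzureOpenAIProvider.__init__() got an unexpected keyword argument 'model'
```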
INFO: connection closed
^CINFO: Shutting down
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
INFO: Finished server process [14632]
INFO: Stopping reloader process [14629]
(gpt-research) xpe1szh@SZH-C-004KF:~/repo-gh/gpt-researcher$ python -m uvicorn main:app --reload
INFO: Will watch for changes in these directories: ['/home/xpe1szh/repo-gh/gpt-researcher']
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process [16462] using WatchFiles
WARNING:root:USER_AGENT environment variable not set, consider setting it to identify your requests.
INFO: Started server process [16464]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: 127.0.0.1:49914 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:49924 - "GET /static/gptr-logo.png HTTP/1.1" 304 Not Modified
INFO: 127.0.0.1:49914 - "GET /site/styles.css HTTP/1.1" 304 Not Modified
INFO: 127.0.0.1:49914 - "GET /site/scripts.js HTTP/1.1" 304 Not Modified
INFO: 127.0.0.1:49914 - "GET /static/favicon.ico HTTP/1.1" 304 Not Modified
INFO: 127.0.0.1:49914 - "GET / HTTP/1.1" 200 OK
INFO: ('127.0.0.1', 55332) - "WebSocket /ws" [accepted]
INFO: connection open
🔎 Starting the research task for 'corner bond in automotive reliability'...
Error choosing agent: AzureOpenAIProvider.__init__() got an unexpected keyword argument 'model'