assafelovic / gpt-researcher

LLM based autonomous agent that conducts local and web research on any topic and generates a comprehensive report with citations.
https://gptr.dev
Apache License 2.0
14.85k stars 1.99k forks

Error on query: TypeError: expected string or bytes-like object, got 'NoneType' #859

Open garylawuk opened 1 month ago

garylawuk commented 1 month ago

Describe the bug

Running a query results in

⚠️ Error in reading JSON, attempting to repair JSON
Error using json_repair: the JSON object must be str, bytes or bytearray, not NoneType

in the console and no output in the webUI

To Reproduce

Steps to reproduce the behavior:

  1. git clone https://github.com/assafelovic/gpt-researcher.git && cd gpt-researcher
  2. python -m venv venv
  3. source venv/bin/activate
  4. pip install -r requirements.txt
  5. export OPENAI_API_KEY={sk-proj-XXX}
  6. export TAVILY_API_KEY={tvly-XXX}
  7. uvicorn main:app --reload
  8. ask for a 'Recipe for delicious cake' in the webUI, selecting 'Detailed' and 'Informative' options

Expected behavior

A response with research

Screenshots

n/a


garylawuk commented 1 month ago

Full traceback:

INFO:     Will watch for changes in these directories: ['/home/glaw/Github/gpt-researcher']
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO:     Started reloader process [12339] using StatReload
/home/glaw/Github/gpt-researcher/multi_agents/agents/utils/utils.py:4: SyntaxWarning: invalid escape sequence '\ '
  """
INFO:     Started server process [12341]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     127.0.0.1:49850 - "GET / HTTP/1.1" 200 OK
INFO:     127.0.0.1:49872 - "GET /site/scripts.js HTTP/1.1" 200 OK
INFO:     127.0.0.1:49856 - "GET /static/gptr-logo.png HTTP/1.1" 200 OK
INFO:     127.0.0.1:49850 - "GET /site/styles.css HTTP/1.1" 200 OK
INFO:     127.0.0.1:49850 - "GET /static/favicon.ico HTTP/1.1" 200 OK
INFO:     ('127.0.0.1', 50934) - "WebSocket /ws" [accepted]
INFO:     connection open
INFO:     [14:43:39] 🔎 Starting the research task for 'recipe for delicious cake '...
⚠️ Error in reading JSON, attempting to repair JSON
Error using json_repair: the JSON object must be str, bytes or bytearray, not NoneType
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/home/glaw/Github/gpt-researcher/gpt_researcher/master/actions.py", line 145, in choose_agent
    response = await create_chat_completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/glaw/Github/gpt-researcher/gpt_researcher/utils/llm.py", line 59, in create_chat_completion
    response = await provider.get_chat_response(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/glaw/Github/gpt-researcher/gpt_researcher/llm_provider/generic/base.py", line 96, in get_chat_response
    output = await self.llm.ainvoke(messages)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/glaw/Github/gpt-researcher/venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 298, in ainvoke
    llm_result = await self.agenerate_prompt(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/glaw/Github/gpt-researcher/venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 787, in agenerate_prompt
    return await self.agenerate(
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/glaw/Github/gpt-researcher/venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 747, in agenerate
    raise exceptions[0]
  File "/home/glaw/Github/gpt-researcher/venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 923, in _agenerate_with_cache
    result = await self._agenerate(
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/glaw/Github/gpt-researcher/venv/lib/python3.12/site-packages/langchain_openai/chat_models/base.py", line 843, in _agenerate
    response = await self.async_client.create(**payload)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/glaw/Github/gpt-researcher/venv/lib/python3.12/site-packages/openai/resources/chat/completions.py", line 1412, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "/home/glaw/Github/gpt-researcher/venv/lib/python3.12/site-packages/openai/_base_client.py", line 1829, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/glaw/Github/gpt-researcher/venv/lib/python3.12/site-packages/openai/_base_client.py", line 1523, in request
    return await self._request(
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/glaw/Github/gpt-researcher/venv/lib/python3.12/site-packages/openai/_base_client.py", line 1624, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: {sk-proj**********************************************BBj}. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/glaw/Github/gpt-researcher/venv/lib/python3.12/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 244, in run_asgi
    result = await self.app(self.scope, self.asgi_receive, self.asgi_send)  # type: ignore[func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/glaw/Github/gpt-researcher/venv/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 70, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/glaw/Github/gpt-researcher/venv/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/home/glaw/Github/gpt-researcher/venv/lib/python3.12/site-packages/starlette/applications.py", line 113, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/glaw/Github/gpt-researcher/venv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 152, in __call__
    await self.app(scope, receive, send)
  File "/home/glaw/Github/gpt-researcher/venv/lib/python3.12/site-packages/starlette/middleware/cors.py", line 77, in __call__
    await self.app(scope, receive, send)
  File "/home/glaw/Github/gpt-researcher/venv/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/home/glaw/Github/gpt-researcher/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 62, in wrapped_app
    raise exc
  File "/home/glaw/Github/gpt-researcher/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 51, in wrapped_app
    await app(scope, receive, sender)
  File "/home/glaw/Github/gpt-researcher/venv/lib/python3.12/site-packages/starlette/routing.py", line 715, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/glaw/Github/gpt-researcher/venv/lib/python3.12/site-packages/starlette/routing.py", line 735, in app
    await route.handle(scope, receive, send)
  File "/home/glaw/Github/gpt-researcher/venv/lib/python3.12/site-packages/starlette/routing.py", line 362, in handle
    await self.app(scope, receive, send)
  File "/home/glaw/Github/gpt-researcher/venv/lib/python3.12/site-packages/starlette/routing.py", line 95, in app
    await wrap_app_handling_exceptions(app, session)(scope, receive, send)
  File "/home/glaw/Github/gpt-researcher/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 62, in wrapped_app
    raise exc
  File "/home/glaw/Github/gpt-researcher/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 51, in wrapped_app
    await app(scope, receive, sender)
  File "/home/glaw/Github/gpt-researcher/venv/lib/python3.12/site-packages/starlette/routing.py", line 93, in app
    await func(session)
  File "/home/glaw/Github/gpt-researcher/venv/lib/python3.12/site-packages/fastapi/routing.py", line 383, in app
    await dependant.call(**solved_result.values)
  File "/home/glaw/Github/gpt-researcher/backend/server.py", line 89, in websocket_endpoint
    report = await manager.start_streaming(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/glaw/Github/gpt-researcher/backend/websocket_manager.py", line 60, in start_streaming
    report = await run_agent(task, report_type, report_source, source_urls, tone, websocket, headers)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/glaw/Github/gpt-researcher/backend/websocket_manager.py", line 97, in run_agent
    report = await researcher.run()
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/glaw/Github/gpt-researcher/backend/report_type/basic_report/basic_report.py", line 41, in run
    await researcher.conduct_research()
  File "/home/glaw/Github/gpt-researcher/gpt_researcher/master/agent.py", line 116, in conduct_research
    self.agent, self.role = await choose_agent(
                            ^^^^^^^^^^^^^^^^^^^
  File "/home/glaw/Github/gpt-researcher/gpt_researcher/master/actions.py", line 162, in choose_agent
    return await handle_json_error(response)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/glaw/Github/gpt-researcher/gpt_researcher/master/actions.py", line 173, in handle_json_error
    json_string = extract_json_with_regex(response)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/glaw/Github/gpt-researcher/gpt_researcher/master/actions.py", line 189, in extract_json_with_regex
    json_match = re.search(r"{.*?}", response, re.DOTALL)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/re/__init__.py", line 177, in search
    return _compile(pattern, flags).search(string)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: expected string or bytes-like object, got 'NoneType'
INFO:     connection closed
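Reading the trace bottom-up: the underlying failure is the 401 (bad API key), but the secondary TypeError happens because choose_agent's error path hands a None response to the JSON-repair helpers, and re.search cannot take None. A minimal sketch of a defensive guard, using hypothetical helpers that only mirror the names in actions.py (this is not the project's actual code), might look like:

```python
import re
from typing import Optional, Tuple

def extract_json_with_regex(response: Optional[str]) -> Optional[str]:
    """Pull the first {...} block out of an LLM response, tolerating None."""
    if not response:  # guard: the upstream LLM call may have raised and left response unset
        return None
    match = re.search(r"{.*?}", response, re.DOTALL)
    return match.group(0) if match else None

def handle_json_error(response: Optional[str]) -> Tuple[str, str]:
    """Fall back to a default agent instead of crashing when no JSON is recoverable."""
    json_string = extract_json_with_regex(response)
    if json_string is None:
        return "Default Agent", "You are a general-purpose research assistant."
    return json_string, ""
```

With a guard like this, a failed LLM call would surface as a clean fallback (or a clear error message) instead of the confusing "expected string or bytes-like object, got 'NoneType'".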
hereiamravi commented 1 month ago

I can see the "incorrect API key provided" error. I have observed the same issue while experimenting with gpt-researcher against the OpenAI API. Try creating a new API key and setting it in the OPENAI_API_KEY variable. It seems the OpenAI key format has changed; keys are now 164 characters long.

Please note that I am adding this as a developer; happy to contribute to the gpt-researcher community.

Thanks, Ravi kumar
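One more detail worth checking: the 401 message in the traceback above echoes the rejected key as {sk-proj...BBj}, braces included, which hints that the literal braces from the placeholder in the export step may have ended up inside the environment variable. A quick sanity-check script (a hedged sketch: the sk- prefix and brace checks are heuristics, not an official key format specification):

```python
import os

def check_openai_key(key: str) -> list:
    """Return a list of likely problems with an OPENAI_API_KEY value."""
    problems = []
    if not key:
        problems.append("key is empty / unset")
    if key != key.strip():
        problems.append("key has leading/trailing whitespace")
    if key.startswith("{") or key.endswith("}"):
        problems.append("key contains literal braces from a placeholder")
    if key and not key.lstrip("{").startswith("sk-"):
        problems.append("key does not start with 'sk-'")
    return problems

if __name__ == "__main__":
    # Prints [] when the exported key looks plausible
    print(check_openai_key(os.getenv("OPENAI_API_KEY", "")))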

garylawuk commented 1 month ago

Good shout -- the OpenAI API key is indeed longer now. Unfortunately, using a freshly generated one tickles the same bug:

⚠️ Error in reading JSON, attempting to repair JSON
Error using json_repair: the JSON object must be str, bytes or bytearray, not NoneType
ERROR:    Exception in ASGI application
MaximPro commented 1 month ago

wtf... I have the same issue. So basically we can't use gpt-researcher now due to this problem if we don't have an old API key? Please fix this ASAP.

garylawuk commented 1 month ago

If I’m not doing anything wrong I’m surprised more people aren’t logging bugs @MaximPro — however, for me, it’s completely reproducible on existing and new installs. I’ve been using gpt researcher for several months, so I’m not inclined to think it’s user error on my part.

ElishaKay commented 1 month ago

Sup guys, it looks like it's failing at the very first LLM API call, await choose_agent.

Two things to try:
a) pip install -r multi_agents/requirements.txt
b) running via Docker, the recommended path of least resistance

jawherkh commented 1 month ago

Hello, I am experiencing the same issue. This is my first time trying gpt-researcher; any fix yet? Thanks.

ElishaKay commented 1 month ago

Sup team,

We'll need to test the underlying get_llm function separately to narrow down the issue.

I've consulted with the AI Dev Team & here's what they gave me:

Create a new file at tests/run_llm_example.py, cd into the tests folder, and run it with python -m run_llm_example:

import asyncio
import os
from gpt_researcher.utils.llm import get_llm
from gpt_researcher import GPTResearcher

async def main():
    # Set the API keys
    OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
    TAVILY_API_KEY = os.getenv("TAVILY_API_KEY")

    if not OPENAI_API_KEY or not TAVILY_API_KEY:
        raise ValueError("API keys must be set in the environment variables")

    # Example usage of get_llm function
    llm_provider = "openai"
    model = "gpt-3.5-turbo" 
    temperature = 0.7
    max_tokens = 1000

    llm = get_llm(llm_provider, model=model, temperature=temperature, max_tokens=max_tokens)
    print(f"LLM Provider: {llm_provider}, Model: {model}, Temperature: {temperature}, Max Tokens: {max_tokens}")
    print('llm: ',llm)
    await test_llm(llm=llm)

async def test_llm(llm):
    # Test the connection with a simple query
    messages = [{"role": "user", "content": "sup?"}]
    try:
        response = await llm.get_chat_response(messages, stream=False)
        print("LLM response:", response)
    except Exception as e:
        print(f"Error: {e}")

# Run the async function
asyncio.run(main())

What does the CLI say when you run the above file with python -m run_llm_example?

To consult with the AI Dev Team, you can join the Discord and run the "/ask" command in the help forum.

garylawuk commented 1 month ago

% python3 -m run_llm_example
USER_AGENT environment variable not set, consider setting it to identify your requests.
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/Users/glaw/Github/gpt-researcher/tests/run_llm_example.py", line 36, in <module>
    asyncio.run(main())
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/runners.py", line 194, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/Users/glaw/Github/gpt-researcher/tests/run_llm_example.py", line 20, in main
    llm = get_llm(llm_provider, model=model, temperature=temperature, max_tokens=max_tokens, api_key=OPENAI_API_KEY)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/gpt_researcher/utils/llm.py", line 54, in get_llm
    return llm_provider(**kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^
TypeError: OpenAIProvider.__init__() got an unexpected keyword argument 'api_key'
% cat run_llm_example.py 
import asyncio
import os
from gpt_researcher.utils.llm import get_llm
from gpt_researcher import GPTResearcher

async def main():
    # Set the API keys
    OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
    TAVILY_API_KEY = os.getenv("TAVILY_API_KEY")

    if not OPENAI_API_KEY or not TAVILY_API_KEY:
        raise ValueError("API keys must be set in the environment variables")

    # Example usage of get_llm function
    llm_provider = "openai"
    model = "gpt-3.5-turbo" 
    temperature = 0.7
    max_tokens = 1000

    llm = get_llm(llm_provider, model=model, temperature=temperature, max_tokens=max_tokens, api_key=OPENAI_API_KEY)
    print(f"LLM Provider: {llm_provider}, Model: {model}, Temperature: {temperature}, Max Tokens: {max_tokens}")
    print('llm: ',llm)
    await test_llm(llm=llm)

async def test_llm(llm):
    # Test the connection with a simple query
    messages = [{"role": "user", "content": "sup?"}]
    try:
        response = await llm.get_chat_response(messages, stream=False)
        print("LLM response:", response)
    except Exception as e:
        print(f"Error: {e}")

# Run the async function
asyncio.run(main())
chuhuiL commented 1 month ago

Putting your API key inside main.py can solve the problem.

ElishaKay commented 1 month ago

Almost there @garylawuk,

I removed the api_key=OPENAI_API_KEY parameter from the get_llm call above. Hopefully that gets you to the root cause if you run it again.
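For anyone puzzled by that TypeError: get_llm forwards its keyword arguments verbatim into the provider's constructor, so any keyword the provider's __init__ does not declare blows up with "unexpected keyword argument". A toy reproduction (these are stand-in classes, not the project's actual code) shows the mechanism:

```python
class ToyOpenAIProvider:
    """Stand-in for OpenAIProvider: note there is no api_key parameter."""
    def __init__(self, model: str, temperature: float, max_tokens: int):
        self.model = model
        self.temperature = temperature
        self.max_tokens = max_tokens

def get_llm(provider_cls, **kwargs):
    # kwargs are forwarded verbatim into the constructor,
    # mirroring how the real get_llm instantiates its provider
    return provider_cls(**kwargs)

# Works: only declared keywords
llm = get_llm(ToyOpenAIProvider, model="gpt-3.5-turbo",
              temperature=0.7, max_tokens=1000)

# Fails: api_key is not a declared keyword, mirroring the traceback above
try:
    get_llm(ToyOpenAIProvider, model="gpt-3.5-turbo", temperature=0.7,
            max_tokens=1000, api_key="sk-...")
except TypeError as e:
    print(e)  # message mentions: unexpected keyword argument 'api_key'
```

This is presumably why the provider expects the key via the OPENAI_API_KEY environment variable rather than as a constructor argument.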

wxyuwojiusi commented 2 weeks ago

I have the same error, have you solved it?

RobertsBatars commented 2 weeks ago

I have the same issue, but it seems to be working fine using Docker

robertobalestri commented 1 week ago

Any news? I'm having this issue too

robertobalestri commented 1 week ago

Even when starting from Docker:

gptr-nextjs-1     | > gpt-researcher@0.1.0 dev
gptr-nextjs-1     | > next dev
gptr-nextjs-1     |
gptr-nextjs-1     | ▲ Next.js 14.2.3
gptr-nextjs-1     | - Local: http://localhost:3000
gptr-nextjs-1     |
gptr-nextjs-1     | ✓ Starting...
gptr-nextjs-1     | Attention: Next.js now collects completely anonymous telemetry regarding usage.
gptr-nextjs-1     | This information is used to shape Next.js' roadmap and prioritize features.
gptr-nextjs-1     | You can learn more, including how to opt-out if you'd not like to participate in this anonymous program, by visiting the following URL:
gptr-nextjs-1     | https://nextjs.org/telemetry
gptr-nextjs-1     |
gpt-researcher-1  | INFO:     Started server process [1]
gpt-researcher-1  | INFO:     Waiting for application startup.
gpt-researcher-1  | INFO:     Application startup complete.
gpt-researcher-1  | INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
gptr-nextjs-1     | ✓ Ready in 2.3s
gpt-researcher-1  | INFO:     172.21.0.1:37462 - "GET / HTTP/1.1" 200 OK
gpt-researcher-1  | INFO:     172.21.0.1:37462 - "GET /site/styles.css HTTP/1.1" 200 OK
gpt-researcher-1  | INFO:     172.21.0.1:37478 - "GET /site/scripts.js HTTP/1.1" 200 OK
gpt-researcher-1  | INFO:     172.21.0.1:37486 - "GET /static/gptr-logo.png HTTP/1.1" 200 OK
gpt-researcher-1  | INFO:     172.21.0.1:37486 - "GET /static/favicon.ico HTTP/1.1" 200 OK
gpt-researcher-1  | INFO:     172.21.0.1:37486 - "GET / HTTP/1.1" 200 OK
gpt-researcher-1  | INFO:     172.21.0.1:35692 - "GET / HTTP/1.1" 200 OK
gpt-researcher-1  | INFO:     172.21.0.1:35692 - "GET /site/styles.css HTTP/1.1" 200 OK
gpt-researcher-1  | INFO:     172.21.0.1:35702 - "GET /static/gptr-logo.png HTTP/1.1" 200 OK
gpt-researcher-1  | INFO:     172.21.0.1:35702 - "GET /site/scripts.js HTTP/1.1" 304 Not Modified
gpt-researcher-1  | INFO:     ('172.21.0.1', 58398) - "WebSocket /ws" [accepted]
gpt-researcher-1  | INFO:     connection open
gpt-researcher-1  | ⚠️ Error in reading JSON, attempting to repair JSON
gpt-researcher-1  | Error using json_repair: the JSON object must be str, bytes or bytearray, not NoneType
gpt-researcher-1  | None
gpt-researcher-1  | ERROR:    Exception in ASGI application
gpt-researcher-1  | Traceback (most recent call last):
gpt-researcher-1  |   File "/usr/src/app/gpt_researcher/actions/agent_creator.py", line 27, in choose_agent
gpt-researcher-1  |     response = await create_chat_completion(
gpt-researcher-1  |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1  |   File "/usr/src/app/gpt_researcher/utils/llm.py", line 54, in create_chat_completion
gpt-researcher-1  |     provider = get_llm(llm_provider, model=model, temperature=temperature,
gpt-researcher-1  |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1  |   File "/usr/src/app/gpt_researcher/utils/llm.py", line 19, in get_llm
gpt-researcher-1  |     return GenericLLMProvider.from_provider(llm_provider, **kwargs)
gpt-researcher-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1  |   File "/usr/src/app/gpt_researcher/llm_provider/generic/base.py", line 48, in from_provider
gpt-researcher-1  |     llm = AzureChatOpenAI(**kwargs)
gpt-researcher-1  |           ^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1  |   File "/usr/local/lib/python3.11/site-packages/langchain_core/load/serializable.py", line 125, in __init__
gpt-researcher-1  |     super().__init__(*args, **kwargs)
gpt-researcher-1  |   File "/usr/local/lib/python3.11/site-packages/pydantic/main.py", line 212, in __init__
gpt-researcher-1  |     validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
gpt-researcher-1  |                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1  | pydantic_core._pydantic_core.ValidationError: 1 validation error for AzureChatOpenAI
gpt-researcher-1  |   Value error, Must provide either the `api_version` argument or the `OPENAI_API_VERSION` environment variable [type=value_error, input_value={'azure_deployment': 'gpt...000, 'model_kwargs': {}}, input_type=dict]
gpt-researcher-1  |     For further information visit https://errors.pydantic.dev/2.9/v/value_error
gpt-researcher-1  |
gpt-researcher-1  | During handling of the above exception, another exception occurred:
gpt-researcher-1  |
gpt-researcher-1  | Traceback (most recent call last):
gpt-researcher-1  |   File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 242, in run_asgi
gpt-researcher-1  |     result = await self.app(self.scope, self.asgi_receive, self.asgi_send)  # type: ignore[func-returns-value]
gpt-researcher-1  |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1  |   File "/usr/local/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
gpt-researcher-1  |     return await self.app(scope, receive, send)
gpt-researcher-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1  |   File "/usr/local/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
gpt-researcher-1  |     await super().__call__(scope, receive, send)
gpt-researcher-1  |   File "/usr/local/lib/python3.11/site-packages/starlette/applications.py", line 113, in __call__
gpt-researcher-1  |     await self.middleware_stack(scope, receive, send)
gpt-researcher-1  |   File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 152, in __call__
gpt-researcher-1  |     await self.app(scope, receive, send)
gpt-researcher-1  |   File "/usr/local/lib/python3.11/site-packages/starlette/middleware/cors.py", line 77, in __call__
gpt-researcher-1  |     await self.app(scope, receive, send)
gpt-researcher-1  |   File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
gpt-researcher-1  |     await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
gpt-researcher-1  |   File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
gpt-researcher-1  |     raise exc
gpt-researcher-1  |   File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
gpt-researcher-1  |     await app(scope, receive, sender)
gpt-researcher-1  |   File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 715, in __call__
gpt-researcher-1  |     await self.middleware_stack(scope, receive, send)
gpt-researcher-1  |   File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 735, in app
gpt-researcher-1  |     await route.handle(scope, receive, send)
gpt-researcher-1  |   File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 362, in handle
gpt-researcher-1  |     await self.app(scope, receive, send)
gpt-researcher-1  |   File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 95, in app
gpt-researcher-1  |     await wrap_app_handling_exceptions(app, session)(scope, receive, send)
gpt-researcher-1  |   File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
gpt-researcher-1  |     raise exc
gpt-researcher-1  |   File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
gpt-researcher-1  |     await app(scope, receive, sender)
gpt-researcher-1  |   File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 93, in app
gpt-researcher-1  |     await func(session)
gpt-researcher-1  |   File "/usr/local/lib/python3.11/site-packages/fastapi/routing.py", line 383, in app
gpt-researcher-1  |     await dependant.call(**solved_result.values)
gpt-researcher-1  |   File "/usr/src/app/backend/server/server.py", line 136, in websocket_endpoint
gpt-researcher-1  |     await handle_websocket_communication(websocket, manager)
gpt-researcher-1  |   File "/usr/src/app/backend/server/server_utils.py", line 117, in handle_websocket_communication
gpt-researcher-1  |     await handle_start_command(websocket, data, manager)
gpt-researcher-1  |   File "/usr/src/app/backend/server/server_utils.py", line 28, in handle_start_command
gpt-researcher-1  |     report = await manager.start_streaming(
gpt-researcher-1  |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1  |   File "/usr/src/app/backend/server/websocket_manager.py", line 61, in start_streaming
gpt-researcher-1  |     report = await run_agent(task, report_type, report_source, source_urls, tone, websocket, headers)
gpt-researcher-1  |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1  |   File "/usr/src/app/backend/server/websocket_manager.py", line 83, in run_agent
gpt-researcher-1  |     report = await researcher.run()
gpt-researcher-1  |              ^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1  |   File "/usr/src/app/backend/report_type/detailed_report/detailed_report.py", line 48, in run
gpt-researcher-1  |     await self._initial_research()
gpt-researcher-1  |   File "/usr/src/app/backend/report_type/detailed_report/detailed_report.py", line 57, in _initial_research
gpt-researcher-1  |     await self.gpt_researcher.conduct_research()
gpt-researcher-1  |   File "/usr/src/app/gpt_researcher/agent.py", line 88, in conduct_research
gpt-researcher-1  |     self.agent, self.role = await choose_agent(
gpt-researcher-1  |                             ^^^^^^^^^^^^^^^^^^^
gpt-researcher-1  |   File "/usr/src/app/gpt_researcher/actions/agent_creator.py", line 45, in choose_agent
gpt-researcher-1  |     return await handle_json_error(response)
gpt-researcher-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1  |   File "/usr/src/app/gpt_researcher/actions/agent_creator.py", line 56, in handle_json_error
gpt-researcher-1  |     json_string = extract_json_with_regex(response)
gpt-researcher-1  |                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1  |   File "/usr/src/app/gpt_researcher/actions/agent_creator.py", line 73, in extract_json_with_regex
gpt-researcher-1  |     input("Press Enter to continue... extract_json_with_regex")
gpt-researcher-1  | EOFError: EOF when reading a line
gpt-researcher-1  | INFO:     connection closed
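Worth noting: this Docker run fails one step earlier than the host run above. from_provider is constructing AzureChatOpenAI, so the container's environment has resolved the provider to Azure, and pydantic rejects it for lacking an API version. If Azure is actually intended, AzureChatOpenAI expects roughly the following environment (a hedged sketch; all values are placeholders, and valid api_version strings depend on your deployment):

```shell
# hypothetical .env fragment for Azure OpenAI (placeholder values)
OPENAI_API_VERSION=...        # the missing value pydantic is complaining about
AZURE_OPENAI_ENDPOINT=...
AZURE_OPENAI_API_KEY=...
```

Conversely, if plain OpenAI was intended, the fix is presumably the opposite: remove the Azure-related variables from the container's environment so the provider resolves to OpenAI instead.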