
[SOLVED] KeyError: 'FAST_LLM' - Struggling to make the first search #900

Closed. PieBru closed this issue 3 weeks ago.

PieBru commented 3 weeks ago

Describe the bug

I'm in a Debian 12 LXC on Proxmox VE 8.2. I followed the documentation and installed inside a python3 venv; the Docker installation gives the same result. I don't have an OpenAI account, so I tried Ollama and Groq, with the same error either way. The error is also the same whether I export the variables (export ENV_VAR=xxx) or put the same values in the .env file. I supplied a normal research query and left the research parameters on the form at their defaults. The error appears immediately after pressing the button.

To Reproduce

Steps to reproduce the behavior:

cd Github
git clone https://github.com/assafelovic/gpt-researcher
cd gpt-researcher
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
export TAVILY_API_KEY=tvly-REDACTED
export RETRIEVER=tavily
export DOC_PATH=./my-docs

#export LLM_PROVIDER=ollama
export LLM_PROVIDER=groq

#export OLLAMA_BASE_URL="http://10.4.0.100:11434"
#export FAST_LLM=ollama:llama3
#export SMART_LLM=ollama:llama3 
#export TEMPERATURE=0.1
#export EMBEDDING_PROVIDER=ollama
#export OLLAMA_EMBEDDING_MODEL=nomic-embed-text

export GROQ_API_KEY=gsk_REDACTED
export FAST_LLM=groq:Mixtral-8x7b-32768
export SMART_LLM=groq:Mixtral-8x7b-32768 
export TEMPERATURE=0.1

python -m uvicorn main:app --reload --host "0.0.0.0" --port 8000 [--env-file ".env"]

firefox http://10.4.0.150:8000/
See the error log below.
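
To rule out the web UI, the failing Config object can also be constructed directly from Python. This is a minimal sketch, not code from the repo: the import path is taken from the traceback below, and passing None for the config path is an assumption based on the "Configuration '' not found. Using default configuration." warning in the log.

# check_config.py (hypothetical) - run from the repo root with the same
# environment variables exported as above.
import os

# Assumption: these mirror the exports above; setdefault keeps real exports.
os.environ.setdefault("FAST_LLM", "groq:Mixtral-8x7b-32768")
os.environ.setdefault("SMART_LLM", "groq:Mixtral-8x7b-32768")

from gpt_researcher.config.config import Config  # path from the traceback

cfg = Config(None)  # should raise the same KeyError: 'FAST_LLM'
print("config loaded OK")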

Expected behavior

Some kind of reply from the web UI.

Additional context

This is the terminal log from the python3 venv installation:

INFO:     Loading environment from '.env'
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO:     Started reloader process [9084] using StatReload
INFO:     Started server process [9086]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     10.4.0.123:47580 - "GET / HTTP/1.1" 200 OK
INFO:     10.4.0.123:47580 - "GET /site/styles.css HTTP/1.1" 200 OK
INFO:     10.4.0.123:47580 - "GET /static/gptr-logo.png HTTP/1.1" 200 OK
INFO:     ('10.4.0.123', 47598) - "WebSocket /ws" [accepted]
INFO:     connection open
Warning: Configuration '' not found. Using default configuration.
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 242, in run_asgi
    result = await self.app(self.scope, self.asgi_receive, self.asgi_send)  # type: ignore[func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/starlette/applications.py", line 113, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 152, in __call__
    await self.app(scope, receive, send)
  File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/starlette/middleware/cors.py", line 77, in __call__
    await self.app(scope, receive, send)
  File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 62, in wrapped_app
    raise exc
  File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 51, in wrapped_app
    await app(scope, receive, sender)
  File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/starlette/routing.py", line 715, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/starlette/routing.py", line 735, in app
    await route.handle(scope, receive, send)
  File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/starlette/routing.py", line 362, in handle
    await self.app(scope, receive, send)
  File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/starlette/routing.py", line 95, in app
    await wrap_app_handling_exceptions(app, session)(scope, receive, send)
  File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 62, in wrapped_app
    raise exc
  File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 51, in wrapped_app
    await app(scope, receive, sender)
  File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/starlette/routing.py", line 93, in app
    await func(session)
  File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/fastapi/routing.py", line 383, in app
    await dependant.call(**solved_result.values)
  File "/home/admin/Github/gpt-researcher/backend/server/server.py", line 142, in websocket_endpoint
    await handle_websocket_communication(websocket, manager)
  File "/home/admin/Github/gpt-researcher/backend/server/server_utils.py", line 117, in handle_websocket_communication
    await handle_start_command(websocket, data, manager)
  File "/home/admin/Github/gpt-researcher/backend/server/server_utils.py", line 28, in handle_start_command
    report = await manager.start_streaming(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/admin/Github/gpt-researcher/backend/server/websocket_manager.py", line 61, in start_streaming
    report = await run_agent(task, report_type, report_source, source_urls, tone, websocket, headers)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/admin/Github/gpt-researcher/backend/server/websocket_manager.py", line 95, in run_agent
    report = await researcher.run()
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/admin/Github/gpt-researcher/backend/report_type/basic_report/basic_report.py", line 30, in run
    researcher = GPTResearcher(
                 ^^^^^^^^^^^^^^
  File "/home/admin/Github/gpt-researcher/gpt_researcher/master/agent/master.py", line 41, in __init__
    self.cfg = Config(config_path)
               ^^^^^^^^^^^^^^^^^^^
  File "/home/admin/Github/gpt-researcher/gpt_researcher/config/config.py", line 27, in __init__
    value = self.convert_env_value(key, env_value, BaseConfig.__annotations__[key])
                                                   ~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^
KeyError: 'FAST_LLM'
INFO:     connection closed
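
The traceback shows the loader failing at BaseConfig.__annotations__[key]: it walks recognized configuration keys and looks up each one's declared type in the schema's annotations, so any key absent from BaseConfig (here FAST_LLM) raises KeyError. Below is a minimal sketch of that failing pattern, using a simplified stand-in for BaseConfig, not the actual gpt-researcher code:

import os
from typing import TypedDict

class BaseConfig(TypedDict):
    # Simplified stand-in: FAST_LLM is deliberately missing, mirroring a
    # schema that has not caught up with the newer environment variable.
    RETRIEVER: str
    SMART_LLM: str

os.environ["FAST_LLM"] = "groq:Mixtral-8x7b-32768"

for key in ("RETRIEVER", "SMART_LLM", "FAST_LLM"):
    env_value = os.environ.get(key)
    if env_value is None:
        continue
    expected_type = BaseConfig.__annotations__[key]  # KeyError: 'FAST_LLM'
    print(key, expected_type)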

Let me know if I can run any tests to narrow down the problem. Thank you, Piero

kesamet commented 3 weeks ago

Created a PR to fix this: https://github.com/assafelovic/gpt-researcher/pull/902
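
For anyone hitting this before the fix lands: one plausible shape for such a fix (an assumption on my part; see the PR diff for the actual change) is to add the missing keys to BaseConfig, or to look up annotations defensively so unknown environment variables are skipped instead of crashing:

import os
from typing import TypedDict

class BaseConfig(TypedDict):
    RETRIEVER: str  # FAST_LLM et al. would be declared here in a schema fix

def convert_env_value(key: str, value: str, annotation: type) -> object:
    return annotation(value)  # placeholder for the project's real converter

# Hypothetical defensive loop: env keys with no declared type are ignored.
for key, env_value in os.environ.items():
    annotation = BaseConfig.__annotations__.get(key)
    if annotation is None:
        continue
    value = convert_env_value(key, env_value, annotation)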

PieBru commented 3 weeks ago

Solved with https://github.com/assafelovic/gpt-researcher/pull/902