assafelovic / gpt-researcher

GPT-based autonomous agent that conducts comprehensive online research on any given topic
https://gptr.dev
MIT License

Latest master: any query raises an exception in CLI and web API mode #579

Closed by slavonnet 3 weeks ago

slavonnet commented 3 weeks ago

Error log. It looks like a JSON parsing problem:

(venv) root@ai:/home/venv/gpt-researcher# python cli.py "Dogs or cat?" --report_type outline_report
WARNING:root:USER_AGENT environment variable not set, consider setting it to identify your requests.
🔎 Starting the research task for 'Dogs or cat?'...
Error choosing agent: Expecting value: line 1 column 1 (char 0)
Default Agent
response :  [""Are dogs or cats more popular pets globally on June 07, 2024?"", ""What are the top 5 cities for dog owners in the United States and how do they compare to cat owners?"", ""Do studies suggest that dogs or cats have a greater impact on mental health in urban areas like New York City and Los Angeles?""]
Traceback (most recent call last):
  File "/home/venv/gpt-researcher/cli.py", line 93, in <module>
    asyncio.run(main(args))
  File "/usr/lib/python3.12/asyncio/runners.py", line 194, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/home/venv/gpt-researcher/cli.py", line 79, in main
    await researcher.conduct_research()
  File "/home/venv/gpt-researcher/gpt_researcher/master/agent.py", line 96, in conduct_research
    context = await self.__get_context_by_search(self.query)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/venv/gpt-researcher/gpt_researcher/master/agent.py", line 177, in __get_context_by_search
    sub_queries = await get_sub_queries(query=query, agent_role_prompt=self.role,
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/venv/gpt-researcher/gpt_researcher/master/actions.py", line 111, in get_sub_queries
    sub_queries = json.loads(response)
                  ^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/json/decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
               ^^^^^^^^^^^^^^^^^^^^^^
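The doubled quotes in the `response` line above (`[""…"", …]`) are not valid JSON, which is what trips `json.loads` in `get_sub_queries`. A minimal reproduction (the sample string is a shortened version of the logged output):

```python
import json

# Shortened sample of the llama3 output from the log above:
# the doubled quotes around each item make it invalid JSON.
bad = '[""Are dogs or cats more popular pets globally?""]'

try:
    json.loads(bad)
    print("parsed ok")
except json.JSONDecodeError as e:
    # json.loads raises here, matching the traceback above
    print("JSONDecodeError:", e)
```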

Latest master:

(venv) root@ai:/home/venv/gpt-researcher# git log | head
commit 6d98a9e938d4d2bcffbd040566c9c2fc34b66340
Merge: 5f6c57b e390ab4
Author: Assaf Elovic <assaf.elovic@gmail.com>
Date:   Thu Jun 6 17:13:11 2024 +0300

    Merge pull request #572 from refeed/fix_typo

    actions.py: Fix typo

commit e390ab416144ea8c64adf669b9e45fa08e6e90d3

My .env:

#OPENAI_API_KEY=
TAVILY_API_KEY="tvly-XXXXX"
#LANGCHAIN_API_KEY=
DOC_PATH=./my-docs
# RETRIEVER=bing
MAX_SUBTOPICS=5

# Use ollama for both, LLM and EMBEDDING provider
LLM_PROVIDER=ollama

# Ollama endpoint to use
OLLAMA_BASE_URL=http://localhost:11434

# Specify one of the LLM models supported by Ollama
FAST_LLM_MODEL=llama3
# Specify one of the LLM models supported by Ollama
SMART_LLM_MODEL=llama3
# The temperature to use, defaults to 0.55
TEMPERATURE=0.55

EMBEDDING_PROVIDER=ollama
# Specify one of the embedding models supported by Ollama
OLLAMA_EMBEDDING_MODEL=evilfreelancer/enbeddrus

USER_AGENT="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 YaBrowser/24.4.0.0 Safari/537.36"
slavonnet commented 3 weeks ago

Changing llama3 to another model fixes the issue.
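For example, in the .env above (mistral here is just one Ollama-supported model used for illustration; any model that reliably emits valid JSON should work):

```
# Swap llama3 for a model that produces well-formed JSON lists
FAST_LLM_MODEL=mistral
SMART_LLM_MODEL=mistral
```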

assafelovic commented 3 weeks ago

Yup @slavonnet, this is because not all LLMs produce the same output quality, and since we expect a JSON list in the response, parsing fails with some LLMs. This is why we recommend GPT models with this project.
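A defensive fallback on the parsing side could also tolerate this kind of malformed output. This is only a sketch, not the project's actual code; `parse_sub_queries` is a hypothetical helper:

```python
import json
import re


def parse_sub_queries(response: str) -> list[str]:
    """Hypothetical fallback parser: try strict JSON first,
    then repair doubled quotes, then salvage quoted substrings."""
    # 1. Strict parse, the happy path
    try:
        result = json.loads(response)
        if isinstance(result, list):
            return result
    except json.JSONDecodeError:
        pass
    # 2. Collapse doubled quotes like [""query""] and retry
    try:
        result = json.loads(response.replace('""', '"'))
        if isinstance(result, list):
            return result
    except json.JSONDecodeError:
        pass
    # 3. Last resort: extract any double-quoted substrings
    return re.findall(r'"([^"]+)"', response)
```

With the doubled-quote output from the log, step 2 succeeds and returns a plain list of query strings instead of raising.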