InternLM / MindSearch

🔍 An LLM-based Multi-agent Framework of Web Search Engine (like Perplexity.ai Pro and SearchGPT)
https://mindsearch.netlify.app/
Apache License 2.0
4.55k stars · 452 forks

No response from the chatbot - but no info to debug #163

Open vanetreg opened 1 month ago

vanetreg commented 1 month ago

Setting up both the MindSearch API and the frontend completed correctly, using the provided python -m mindsearch.app --lang en --model_format internlm_server --search_engine DuckDuckGoSearch command. The frontend looks as expected, but after entering an instruction (in English), nothing happens for minutes (5+) besides the processing indicator. Neither the frontend nor either of the two terminals (BE / FE) shows any info; everything looks OK according to the terminals.

INFO:     Started server process [15880]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8002 (Press CTRL+C to quit)

and

  VITE v4.5.3  ready in 5172 ms

  ➜  Local:   http://localhost:8080/
  ➜  Network: http://172.26.224.1:8080/
  ➜  Network: http://192.168.0.15:8080/
  ➜  press h to show help

So I think some info / warning / error should be shown. Any solution is appreciated! :)

zxq9133 commented 4 weeks ago

You can try whether it works with "streamlit run frontend/mindsearch_streamlit.py".

I had a problem with the React-launched web UI not being able to call the API interface, but it works using Streamlit.

vanetreg commented 4 weeks ago

> you can try if it's working with "streamlit run frontend/mindsearch_streamlit.py"

Trying with Streamlit, I at least got an error:

D:\Projects\AI_testing\MindSearch\mindsearch-venv\Lib\site-packages\streamlit\watcher\local_sources_watcher.py:210: DeprecationWarning: Importing from `griffe.enumerations` is deprecated. Import from `griffe` directly instead.
  lambda m: list(m.__path__._path),
2024-08-17 09:57:23.907 Uncaught app exception
Traceback (most recent call last):
  File "D:\Projects\AI_testing\MindSearch\mindsearch-venv\Lib\site-packages\streamlit\runtime\scriptrunner\exec_code.py", line 85, in exec_func_with_error_handling
    result = func()
             ^^^^^^
  File "D:\Projects\AI_testing\MindSearch\mindsearch-venv\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 576, in code_to_exec
    exec(code, module.__dict__)
  File "D:\Projects\AI_testing\MindSearch\MindSearch\frontend\mindsearch_streamlit.py", line 319, in <module>
    main()
  File "D:\Projects\AI_testing\MindSearch\MindSearch\frontend\mindsearch_streamlit.py", line 314, in main
    update_chat(user_input)
  File "D:\Projects\AI_testing\MindSearch\MindSearch\frontend\mindsearch_streamlit.py", line 94, in update_chat
    for resp in streaming(raw_response):
  File "D:\Projects\AI_testing\MindSearch\MindSearch\frontend\mindsearch_streamlit.py", line 53, in streaming
    response = json.loads(decoded)
               ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Atti\AppData\Local\Programs\Python\Python311\Lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Atti\AppData\Local\Programs\Python\Python311\Lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Atti\AppData\Local\Programs\Python\Python311\Lib\json\decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
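For context, "Expecting value: line 1 column 1 (char 0)" means json.loads received an empty string or non-JSON text, which is typical when a streamed response interleaves blank keep-alive lines or error text. A defensive sketch of the parsing step (a hypothetical helper, not the project's actual code) that skips such lines instead of crashing:

```python
import json

def parse_sse_line(raw: str):
    """Parse one line of a server-sent-events stream defensively.

    Returns the decoded JSON payload, or None for blank keep-alive lines
    and non-JSON noise -- the bare json.loads in mindsearch_streamlit.py
    raises JSONDecodeError on exactly those lines.
    """
    line = raw.strip()
    if not line:
        return None
    if line.startswith("data:"):  # strip the SSE field name if present
        line = line[len("data:"):].strip()
    try:
        return json.loads(line)
    except json.JSONDecodeError:
        return None
```

With this, an empty first chunk (which the traceback suggests the backend sent) would be skipped rather than aborting the whole stream; an empty stream would still indicate the backend never produced a real answer.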

liujiangning30 commented 3 weeks ago

> Both setting up MindSearch API and Frontend completed correctly [...] So I think some info / warning / error should be shown. Any solution is appreciated! :)

Sorry for the late response! On the first request, the MindSearch service starts the model service in the background, which may require downloading the model file from the Hugging Face repository. If you see the log HINT: Please open http://0.0.0.0:23333 in a browser for detailed api usage!!! in the background process, the model service was started successfully.
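If the silent stall is indeed the model download, one way to make it visible is to fetch the checkpoint up front with the huggingface_hub CLI, which shows a progress bar (assuming the default internlm/internlm2_5-7b-chat checkpoint is what the server loads):

```shell
# Pre-download the checkpoint so the backend's first request doesn't block on a
# silent multi-gigabyte download. Requires `pip install -U huggingface_hub`.
huggingface-cli download internlm/internlm2_5-7b-chat
```

Once cached locally, the background model service should start without re-downloading.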

vanetreg commented 3 weeks ago

> On the first request, the MindSearch service starts the model service in the background. This means that it may be necessary to download the model file from the hf repository. If you see the log HINT: Please open http://0.0.0.0:23333 in a browser for detailed api usage!!! in the background., it means that the model service was started successfully.

  • I haven't found any info here or in the files about how to "download the model file from the hf repository". Please add some to README.md, or wherever makes sense (with a link from README.md).
  • As I wrote, I don't see any hints like the one you quoted.
  • I can't reach localhost:23333.
  • I'm on a Windows 10 PC.

liujiangning30 commented 3 weeks ago

Starting the model service requires the lmdeploy toolkit. Please refer to https://github.com/InternLM/lmdeploy/blob/main/docs/en/installation.md to ensure all requirements are met.
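Per that guide, a quick sanity check that lmdeploy is installed and importable before wiring it into MindSearch (note the default pip wheel assumes a CUDA 12 environment; the linked page covers other setups):

```shell
pip install lmdeploy
# verify the install is importable before starting the MindSearch backend
python -c "import lmdeploy; print(lmdeploy.__version__)"
```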

vanetreg commented 3 weeks ago

> Starting the modeling service requires the lmdeploy toolkit. Please refer to https://github.com/InternLM/lmdeploy/blob/main/docs/en/installation.md to ensure all requirements are met.

Please include this info in README.md.

vanetreg commented 3 weeks ago

I modified models.py as follows:

gpt4 = dict(type=GPTAPI,
            model_type='gpt-4o-mini',
            key=os.environ.get('OPENAI_API_KEY', 'YOUR OPENAI API KEY'),
            openai_api_base=os.environ.get('OPENAI_API_BASE', 'https://api.openai.com/v1/chat/completions'),
            )

importing the API key with dotenv from .env (using gpt-4o-mini so as not to burn too much money),

and using the command python -m mindsearch.app --lang en --model_format gpt4 --search_engine DuckDuckGoSearch. But in the browser (Chrome), after asking the chatbot and waiting 5+ minutes, still nothing happens other than the progress indicator. Neither the backend nor the frontend terminal shows any useful info; everything is green and OK.
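For reference, the dotenv wiring mentioned above only works if load_dotenv() runs before models.py reads os.environ; otherwise the 'YOUR OPENAI API KEY' placeholder is used silently, which would also produce no visible error. A minimal stdlib sketch of what that loading does (the real python-dotenv handles more edge cases):

```python
import os

def load_env_file(path=".env"):
    """Tiny stdlib stand-in for python-dotenv's load_dotenv: parse KEY=VALUE
    lines into os.environ without overwriting variables that are already set.
    In the real setup you would `pip install python-dotenv` and call
    load_dotenv() before models.py is imported."""
    try:
        with open(path) as fh:
            for raw in fh:
                line = raw.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue  # skip comments and malformed lines
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip().strip('"'))
    except FileNotFoundError:
        pass  # no .env file: fall back to whatever the shell exported
```

A quick way to rule this failure mode out is to print os.environ.get('OPENAI_API_KEY') right before the GPTAPI dict is built.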

Running: python -m mindsearch.terminal gives:

PS D:\Projects\AI_testing\MindSearch\MindSearch> python -m mindsearch.terminal
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "D:\Projects\AI_testing\MindSearch\MindSearch\mindsearch\terminal.py", line 15, in <module>
    llm = LMDeployServer(path='internlm/internlm2_5-7b-chat',
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Projects\AI_testing\MindSearch\mindsearch-venv\Lib\site-packages\lagent\llms\lmdeploy_wrapper.py", line 311, in __init__
    self.client = lmdeploy.serve(
                  ^^^^^^^^^^^^^^^
  File "D:\Projects\AI_testing\MindSearch\mindsearch-venv\Lib\site-packages\lmdeploy\api.py", line 159, in serve
    backend_config = autoget_backend_config(model_path, backend_config)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Projects\AI_testing\MindSearch\mindsearch-venv\Lib\site-packages\lmdeploy\archs.py", line 91, in autoget_backend_config
    backend = autoget_backend(model_path)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Projects\AI_testing\MindSearch\mindsearch-venv\Lib\site-packages\lmdeploy\archs.py", line 42, in autoget_backend
    from lmdeploy.turbomind.supported_models import \
  File "D:\Projects\AI_testing\MindSearch\mindsearch-venv\Lib\site-packages\lmdeploy\turbomind\__init__.py", line 22, in <module>
    bootstrap()
  File "D:\Projects\AI_testing\MindSearch\mindsearch-venv\Lib\site-packages\lmdeploy\turbomind\__init__.py", line 15, in bootstrap
    assert CUDA_PATH is not None, 'Can not find $env:CUDA_PATH'
           ^^^^^^^^^^^^^^^^^^^^^
AssertionError: Can not find $env:CUDA_PATH
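The assertion is raised by lmdeploy's turbomind bootstrap, which on Windows needs the CUDA_PATH environment variable to locate the CUDA toolkit. A PowerShell sketch of setting it for the current session, assuming CUDA 12.1 in the default install location (the version directory is an assumption; check what is actually installed):

```shell
# PowerShell: point CUDA_PATH at the CUDA toolkit root for this session.
# The v12.1 directory is an assumption -- look under
# "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA" for your version.
$env:CUDA_PATH = "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1"
python -m mindsearch.terminal
```

To make it permanent, set CUDA_PATH as a system environment variable instead of a per-session one.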

liujiangning30 commented 3 weeks ago

Ensure that you have filled in the complete path here.