InternLM / MindSearch

🔍 An LLM-based Multi-agent Framework of Web Search Engine (like Perplexity.ai Pro and SearchGPT)
Apache License 2.0
5.13k stars · 518 forks

How to configure and use other online LLM services? #21

Closed · mijq closed this 3 months ago

mijq commented 3 months ago

Many providers now offer large-model services compatible with the OpenAI API, giving free access to various open-source LLMs, including Qwen2, GLM4, and InternLM2.5. How should the models file be modified to connect to these open-source LLM services?

Harold-lkk commented 3 months ago

https://github.com/InternLM/MindSearch/blob/a0559d4d6402fd93a0b779aa822a1bb1398eb59a/mindsearch/agent/models.py#L27 If the service follows the GPTAPI conventions, you can add `xxx_model = dict(type=GPTAPI, **kwargs)` to models (for the available parameters, see https://github.com/InternLM/lagent/blob/main/lagent/llms/openai.py) and pass `xxx_model` as `model_format` at startup.
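Concretely, the suggestion above amounts to an entry like the following in `mindsearch/agent/models.py`. This is only a sketch: the model name, key, and endpoint URL are placeholders, and the accepted keyword arguments are whatever lagent's `openai.py` actually supports.

```python
# Hypothetical entry in mindsearch/agent/models.py; every value below
# is a placeholder to be replaced with your provider's details.
from lagent.llms import GPTAPI

xxx_model = dict(
    type=GPTAPI,
    model_type='provider/model-name',          # model id your service expects
    key='sk-xxxxxxxx',                         # your API key
    openai_api_base='https://example.com/v1',  # OpenAI-compatible endpoint
)
```

The server would then be started with `--model_format xxx_model`.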

mijq commented 3 months ago

I modified the models file to add a qwen2 model, as follows:

qwen2 = dict(type=GPTAPI,
             model_type='Qwen/Qwen2-7B-Instruct',
             key='sk-xxxxxxxx',
             openai_api_base='https://api.siliconflow.cn/v1')

Startup command:

python -m mindsearch.app --lang zh --model_format qwen2 

Then, when I submit a query through the frontend, it errors out:

INFO:     127.0.0.1:42752 - "POST /solve HTTP/1.1" 200 OK
ERROR:asyncio:Future exception was never retrieved
future: <Future finished exception=JSONDecodeError('Extra data: line 1 column 5 (char 4)')>
Traceback (most recent call last):
  File "/home/uam/anaconda3/envs/demo/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/uam/MindSearch/mindsearch/app.py", line 68, in sync_generator_wrapper
    for response in agent.stream_chat(inputs):
  File "/home/uam/MindSearch/mindsearch/agent/mindsearch_agent.py", line 211, in stream_chat
    for model_state, response, _ in self.llm.stream_chat(
  File "/home/uam/anaconda3/envs/demo/lib/python3.11/site-packages/lagent/llms/openai.py", line 157, in stream_chat
    for text in self._stream_chat(messages, **gen_params):
  File "/home/uam/anaconda3/envs/demo/lib/python3.11/site-packages/lagent/llms/openai.py", line 288, in streaming
    response = json.loads(decoded)
               ^^^^^^^^^^^^^^^^^^^
  File "/home/uam/anaconda3/envs/demo/lib/python3.11/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/uam/anaconda3/envs/demo/lib/python3.11/json/decoder.py", line 340, in decode
    raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 1 column 5 (char 4)

What is going on here?
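For context (an illustration, not from the thread itself): the `Extra data` message comes from `json.loads` whenever a complete JSON value is followed by trailing characters, which is what happens when a streamed chunk is not a single well-formed JSON document. A made-up chunk with the same shape reproduces the exact message seen in the traceback:

```python
import json

# "Extra data: line 1 column 5 (char 4)" means the decoder parsed a
# complete JSON value in the first 4 characters and then hit leftover
# text it could not attach to it.
try:
    json.loads('true{"choices": []}')
except json.JSONDecodeError as err:
    print(err)  # Extra data: line 1 column 5 (char 4)
```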

ZwwWayne commented 3 months ago

It looks like the output does not fully conform to the expected format; the model probably cannot fully follow this set of instructions. InternLM2.5-Chat has been specifically optimized for this capability.

Harold-lkk commented 3 months ago

https://github.com/InternLM/lagent/pull/217

mijq commented 3 months ago

InternLM/lagent#217

I updated the lagent code and re-ran the backend and frontend as two separate commands:

python -m mindsearch.app --lang zh --model_format qwen2
streamlit run frontend/mindsearch_streamlit.py

Asked "Who is the current President of the United States?" in the UI; FastAPI still reports the error:

INFO:     127.0.0.1:37200 - "POST /solve HTTP/1.1" 200 OK
ERROR:asyncio:Future exception was never retrieved
future: <Future finished exception=JSONDecodeError('Extra data: line 1 column 5 (char 4)')>
Traceback (most recent call last):
  File "/home/uam/anaconda3/envs/demo/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/uam/MindSearch/mindsearch/app.py", line 68, in sync_generator_wrapper
    for response in agent.stream_chat(inputs):
  File "/home/uam/MindSearch/mindsearch/agent/mindsearch_agent.py", line 211, in stream_chat
    for model_state, response, _ in self.llm.stream_chat(
  File "/home/uam/anaconda3/envs/demo/lib/python3.11/site-packages/lagent/llms/openai.py", line 159, in stream_chat
    for text in self._stream_chat(messages, **gen_params):
  File "/home/uam/anaconda3/envs/demo/lib/python3.11/site-packages/lagent/llms/openai.py", line 290, in streaming
    response = json.loads(decoded)
               ^^^^^^^^^^^^^^^^^^^
  File "/home/uam/anaconda3/envs/demo/lib/python3.11/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/uam/anaconda3/envs/demo/lib/python3.11/json/decoder.py", line 340, in decode
    raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 1 column 5 (char 4)

Harold-lkk commented 3 months ago

Sorry, we are still investigating this.

liujiangning30 commented 3 months ago

https://github.com/InternLM/MindSearch/pull/60 https://github.com/InternLM/lagent/pull/218

casuallyone commented 3 months ago

This project repeatedly uses `self.model_type.lower().startswith('qwen')` to decide whether a model is a Qwen API model. This causes a series of problems when a Qwen model is deployed locally via vllm's OpenAI-compatible interface. You could patch every such occurrence in the project (not recommended), or simply name your locally deployed model something like gpt-xxxxx. I would suggest the authors adopt litellm to unify the interfaces of the various LLMs; litellm has good compatibility across model APIs and would also bring benefits such as better async performance and more reliable structured output.
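The renaming workaround can be illustrated with a minimal sketch (the check below mirrors the quoted pattern; the actual lagent code paths may differ):

```python
# A model whose name starts with 'qwen' is routed down the Qwen-specific
# API path, while a 'gpt-' style alias keeps it on the plain
# OpenAI-compatible path.
def is_qwen_branch(model_type: str) -> bool:
    return model_type.lower().startswith('qwen')

print(is_qwen_branch('Qwen2-7B-Instruct'))  # True:  Qwen-specific handling
print(is_qwen_branch('gpt-qwen2-local'))    # False: generic OpenAI handling
```

This is why naming the locally served model `gpt-xxxxx` sidesteps the issue without touching the project code.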

Azure-Tang commented 4 days ago

> This project repeatedly uses `self.model_type.lower().startswith('qwen')` to decide whether a model is a Qwen API model. This causes a series of problems when a Qwen model is deployed locally via vllm's OpenAI-compatible interface. You could patch every such occurrence in the project (not recommended), or simply name your locally deployed model something like gpt-xxxxx. I would suggest the authors adopt litellm to unify the interfaces of the various LLMs; litellm has good compatibility across model APIs and would also bring benefits such as better async performance and more reliable structured output.

Hi, could you explain how you modified things to access the locally launched vllm endpoint? A bit more detail would be appreciated, thanks! 🙏