InternLM / MindSearch

🔍 An LLM-based multi-agent framework for web search (like Perplexity.ai Pro and SearchGPT)
https://mindsearch.netlify.app/
Apache License 2.0

Using internlm_client, error: KeyError: 'choices' #118

Closed (njzfw1024 closed this 2 months ago)

njzfw1024 commented 2 months ago

app.py: [screenshot]

streamlit: [screenshot of the KeyError: 'choices' traceback]

techflag commented 2 months ago

Connecting to qwen gives the same error.

liujiangning30 commented 2 months ago

app.py: [screenshot]

streamlit: [screenshot]

Before using the model service, you need to start it with this command: `lmdeploy serve api_server internlm/internlm2_5-7b-chat --server-port 23333`.
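
For context, a minimal sketch of the client config that pairs with that server (the field values are assumptions, mirroring the qwen_client example later in this thread). The KeyError: 'choices' typically means the client received an error payload instead of a completion because nothing was listening at the configured URL:

internlm_client = dict(type=LMDeployClient,
                       # must point at the lmdeploy server started above
                       url='http://127.0.0.1:23333',
                       # assumed model name; see the note at the end of
                       # this thread about matching the server's name
                       model_name='internlm2_5-7b-chat',
                       ...)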

liujiangning30 commented 2 months ago

Connecting to qwen gives the same error.

qwen can only go through GPTAPI.

chironliu commented 2 months ago

Connecting to qwen gives the same error.

qwen can only go through GPTAPI.

What are the steps to use qwen?

usgDHJAKJD commented 2 months ago

Same here; calling qwen also hits a similar problem.

liujiangning30 commented 2 months ago

Connecting to qwen gives the same error.

qwen can only go through GPTAPI.

What are the steps to use qwen?

Option 1:

qwen_server = dict(type=LMDeployServer,
                       path='Qwen/Qwen2-7B-Instruct',
                       model_name='Qwen2-7B-Instruct',
                       ...)

Then: `python -m mindsearch.app --lang cn --model_format qwen_server`

Option 2: First, launch the backend with the lmdeploy CLI: `lmdeploy serve api_server Qwen/Qwen2-7B-Instruct --server-port 23333`. Then connect to the backend model service as follows:

qwen_client = dict(type=LMDeployClient,
                       model_name='Qwen2-7B-Instruct',
                       url='http://127.0.0.1:23333',
                       ...)
Finally, `python -m mindsearch.app --lang cn --model_format qwen_client`

Option 3:

url = 'https://dashscope.aliyuncs.com/api/v1/services/aigc/text-generation/generation'
qwen = dict(type=GPTAPI,
            model_type='qwen-max-longcontext',
            key=os.environ.get('QWEN_API_KEY', 'YOUR QWEN API KEY'),
            openai_api_base=url,
            ...)

Prerequisite: obtain an API key. Then: `python -m mindsearch.app --lang cn --model_format qwen`
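
One quick way to rule out the most common cause of the KeyError: 'choices', an unreachable backend, is to probe the server before launching MindSearch. A minimal sketch, assuming the Option 2 server is running on port 23333 and exposes the usual OpenAI-compatible endpoints of `lmdeploy serve api_server`:

import requests

# A connection error or a non-200 status here means the client would
# later receive an error payload with no 'choices' key.
resp = requests.get('http://127.0.0.1:23333/v1/models', timeout=5)
resp.raise_for_status()
print([m['id'] for m in resp.json()['data']])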

meigami0 commented 2 months ago

(quoting liujiangning30's full reply above, Options 1 through 3)

Loading the internlm2_5-7b-chat model locally with both Option 1 and Option 2 hit the error shown in the screenshots above. Digging in, I found that LMDeployClient matches on model_name: if you see this error, add the --model_name argument when starting LMDeployServer and make sure it is identical to the model_name set in LMDeployClient.
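
A minimal sketch of that fix (the values are assumptions, and the flag spelling follows the comment above): give the server an explicit name and reuse it verbatim on the client side:

# Server side (CLI), naming the model explicitly:
#   lmdeploy serve api_server internlm/internlm2_5-7b-chat \
#       --server-port 23333 --model_name internlm2_5-7b-chat
#
# Client side: model_name must match what the server registered.
internlm_client = dict(type=LMDeployClient,
                       model_name='internlm2_5-7b-chat',  # same as --model_name
                       url='http://127.0.0.1:23333',
                       ...)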