Connecting to qwen gives the same error.
app.py: [screenshot]
streamlit: [screenshot]
Before using the model service, you need to start it with the command: `lmdeploy serve api_server internlm/internlm2_5-7b-chat --server-port 23333`.
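A quick way to confirm the server is actually up before pointing MindSearch at it: a minimal sketch, assuming the OpenAI-compatible `/v1/models` route that `lmdeploy serve api_server` exposes.

```python
import requests

# Sanity check that the api_server is reachable; the response should
# list the model(s) the server is currently serving.
resp = requests.get('http://127.0.0.1:23333/v1/models')
resp.raise_for_status()
print(resp.json())
```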
> Connecting to qwen gives the same error.

qwen can only go through GPTAPI.

> qwen can only go through GPTAPI.

What steps are needed to use qwen?

Same here, calling qwen runs into a similar problem.

> What steps are needed to use qwen?
Option 1:

```python
qwen_server = dict(type=LMDeployServer,
                   path='Qwen/Qwen2-7B-Instruct',
                   model_name='Qwen2-7B-Instruct',
                   ...)
```

Then: `python -m mindsearch.app --lang cn --model_format qwen_server`
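For reference, a fleshed-out sketch of what such a server config can look like; everything past `model_name` below is an illustrative assumption, not a value from this thread:

```python
from lagent.llms import LMDeployServer  # MindSearch builds on lagent

# Hypothetical expansion of the config above; the extra fields are
# assumed sampling/generation settings, shown only for illustration.
qwen_server = dict(
    type=LMDeployServer,
    path='Qwen/Qwen2-7B-Instruct',
    model_name='Qwen2-7B-Instruct',
    max_new_tokens=4096,  # assumed generation budget
    top_p=0.8,            # assumed nucleus-sampling cutoff
    temperature=0.7,      # assumed sampling temperature
)
```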
Option 2:

First, launch the server with the lmdeploy CLI:

`lmdeploy serve api_server Qwen/Qwen2-7B-Instruct --server-port 23333`

Then point the client at the backend model service as follows:

```python
qwen_client = dict(type=LMDeployClient,
                   model_name='Qwen2-7B-Instruct',
                   url='http://127.0.0.1:23333',
                   ...)
```

Finally: `python -m mindsearch.app --lang cn --model_format qwen_client`
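Because LMDeployClient matches on `model_name` (see the note at the end of this thread), it is worth confirming the backend answers under that name. A hedged sketch, assuming lagent's LMDeployClient exposes a `chat(messages)` call like its other LLM wrappers:

```python
from lagent.llms import LMDeployClient

# Instantiate the client directly and run one chat turn; a reply back
# means the url and model_name both line up with the running server.
llm = LMDeployClient(model_name='Qwen2-7B-Instruct',
                     url='http://127.0.0.1:23333')
print(llm.chat([dict(role='user', content='Hello')]))
```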
Option 3:

```python
url = 'https://dashscope.aliyuncs.com/api/v1/services/aigc/text-generation/generation'
qwen = dict(type=GPTAPI,
            model_type='qwen-max-longcontext',
            key=os.environ.get('QWEN_API_KEY', 'YOUR QWEN API KEY'),
            openai_api_base=url,
            ...)
```

Prerequisite: obtain an API key. Then: `python -m mindsearch.app --lang cn --model_format qwen`
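A minimal smoke test of that config, assuming `lagent` is installed, `QWEN_API_KEY` is exported, and that `GPTAPI` exposes a `chat(messages)` call like lagent's other LLM wrappers:

```python
import os
from lagent.llms import GPTAPI

# One round-trip through the DashScope endpoint; getting a reply back
# confirms the key and endpoint are wired up correctly.
llm = GPTAPI(
    model_type='qwen-max-longcontext',
    key=os.environ['QWEN_API_KEY'],  # export QWEN_API_KEY first
    openai_api_base='https://dashscope.aliyuncs.com/api/v1/services/aigc/text-generation/generation',
)
print(llm.chat([dict(role='user', content='Hello')]))
```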
Loading the internlm2_5-7b-chat model locally with both Option 1 and Option 2 hit the error shown in the screenshots. Digging in, I found that LMDeployClient matches on model_name. If you run into this error, pass the model name explicitly when starting LMDeployServer, e.g. `lmdeploy serve api_server internlm/internlm2_5-7b-chat --server-port 23333 --model-name internlm2_5-7b-chat`, and make sure it is identical to the model_name set in LMDeployClient.
app.py: [screenshot]
streamlit: [screenshot]