hiyouga / LLaMA-Factory

Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
https://arxiv.org/abs/2403.13372
Apache License 2.0
34.29k stars · 4.23k forks

How do I call the OpenAI-style API from Python? #5877

Closed · GasolSun36 closed this issue 2 weeks ago

GasolSun36 commented 2 weeks ago

Reminder

System Info

How do I call the OpenAI-style API server from Python?

Reproduction

llamafactory-cli api examples/inference/qwen2_vl.yaml

Expected behavior

Visit http://localhost:8000/docs for API document.
INFO:     Started server process [48005]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)

It looks like the server has started, but how do I call it from Python? I couldn't find where to set the URL in the provided documentation. For example, the docs give:

from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
)

print(completion.choices[0].message)

So how do I set the model? How do I set the corresponding URL? Could you give a concrete usage example?

Others

No response

hiyouga commented 2 weeks ago

https://github.com/hiyouga/LLaMA-Factory/blob/main/scripts/test_toolcall.py
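For a concrete example, here is a minimal sketch using the official openai Python client against the local server. It assumes the API is reachable at http://localhost:8000/v1 (port 8000 from the Uvicorn log above, /v1 being the usual path prefix for OpenAI-compatible servers) and that the server exposes an OpenAI-style model list endpoint; adjust if your setup differs.

from openai import OpenAI

# Point the client at the local LLaMA-Factory API server instead of api.openai.com.
# The base_url is an assumption based on the log above; api_key is a placeholder,
# since a local server typically does not check it unless a key was configured.
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="0",
)

# If the server exposes the OpenAI-style /v1/models endpoint, the registered model
# name can be read back instead of being hard-coded.
model_name = client.models.list().data[0].id

completion = client.chat.completions.create(
    model=model_name,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
)

print(completion.choices[0].message)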