Reminder
[X] I have read the README and searched the existing issues.
System Info
How do I call the OpenAI-style API from Python?
Reproduction
llamafactory-cli api examples/inference/qwen2_vl.yaml
Expected behavior
Visit http://localhost:8000/docs for API document.
INFO: Started server process [48005]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
It looks like the server is running, but how do I call it from Python? I couldn't find where to set the URL in the documentation. For example, the docs give:
from openai import OpenAI
client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)
print(completion.choices[0].message)
So how do I set the model? How do I set the corresponding URL? Could you give a concrete usage example?
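For context, here is the kind of call I am guessing at. This is only a sketch: I am assuming the server exposes the standard OpenAI-compatible /v1 path on port 8000, that api_key can be any placeholder when no key is configured, and that the model name must match whatever the server lists; the name "qwen2_vl" below is my guess, not taken from the docs.

from openai import OpenAI

# Assumption: local server at the /v1 path, placeholder API key accepted.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")

# Check which model names the server actually exposes.
print([m.id for m in client.models.list().data])

completion = client.chat.completions.create(
    model="qwen2_vl",  # hypothetical name; replace with one printed above
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)
print(completion.choices[0].message)

If this is roughly how it is supposed to work, it would be great to have it confirmed and added to the documentation.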
Others
No response