vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

api_server.py: error: unrecognized arguments: --tool-use-prompt-template --enable-api-tools --enable-auto-tool-choice #5730

Open lk1983823 opened 4 months ago

lk1983823 commented 4 months ago

My vLLM version is 0.5.0.post1. I want to enable function calling for the Qwen2-7B-Instruct model, so following the pull request https://github.com/vllm-project/vllm/pull/5649 I ran the command

python -m vllm.entrypoints.openai.api_server --model /home/asus/autodl-tmp/qwen/Qwen2-7B-Instruct --tool-use-prompt-template /home/asus/autodl-tmp/examples/chatml.jinja --enable-api-tools --enable-auto-tool-choice

But it fails with: `api_server.py: error: unrecognized arguments: --tool-use-prompt-template --enable-api-tools --enable-auto-tool-choice`
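For context, this is standard argparse behavior: flags that exist only in an unmerged branch are simply not defined in the released parser, so the server exits with "unrecognized arguments". A minimal sketch (using a hypothetical toy parser, not vLLM's actual CLI) shows how you can detect which flags your installed version does not support:

```python
import argparse

# Toy parser standing in for the release's argument parser; it only knows
# the flags defined in the installed version (here, just --model).
parser = argparse.ArgumentParser(prog="api_server.py")
parser.add_argument("--model")

# parse_known_args() separates recognized flags from unknown ones instead
# of exiting, so you can see exactly which arguments your version lacks.
args, unknown = parser.parse_known_args(
    ["--model", "Qwen2-7B-Instruct", "--enable-auto-tool-choice"]
)
print(unknown)  # → ['--enable-auto-tool-choice']
```

In practice, running the server module with `--help` lists every flag your installed release actually accepts.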


K-Mistele commented 4 months ago

#5649 is a draft pull request. It is a work in progress and has not been merged into vLLM's codebase. Any capabilities or features present in that branch, whether in progress or completed, will not be available in a vLLM release unless/until the pull request is approved and merged from the Constellate AI fork into the vLLM project's codebase.

K-Mistele commented 4 months ago

@lk1983823 please feel free to close this issue :)

github-actions[bot] commented 2 weeks ago

This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!