vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: Tool calling on Llama 3.1/3.2 fails with KeyError: '<tool_call>' #8912

Closed Xaenalt closed 1 day ago

Xaenalt commented 1 day ago

Your current environment

The output of `python collect_env.py`:

```text
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Fedora release 40 (Forty) (x86_64)
GCC version: (GCC) 14.2.1 20240912 (Red Hat 14.2.1-3)
Clang version: 18.1.8 (Fedora 18.1.8-1.fc40)
CMake version: version 3.28.2
Libc version: glibc-2.39
Python version: 3.12.6 (main, Sep 9 2024, 00:00:00) [GCC 14.2.1 20240801 (Red Hat 14.2.1-1)] (64-bit runtime)
Python platform: Linux-6.10.10-200.fc40.x86_64-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900KF
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU(s) scaling MHz: 33%
CPU max MHz: 5800.0000
CPU min MHz: 800.0000
BogoMIPS: 5990.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-ml-py==12.560.30
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.6.68
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pyzmq==26.2.0
[pip3] torch==2.4.0
[pip3] torchvision==0.19.0
[pip3] transformers==4.45.1
[pip3] triton==3.0.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.6.1.dev238+ge2c6e0a82
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
        GPU0  CPU Affinity  NUMA Affinity  GPU NUMA ID
GPU0     X    0-31          0              N/A

Legend:
  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks
```

Model Input Dumps

Received request chat-d7b6885bb0a14fa3b3da22dbbd580234: prompt: '<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nEnvironment: ipython\nCutting Knowledge Date: December 2023\nToday Date: 27 Sep 2024\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nGiven the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt.\n\nRespond in the format {"name": function name, "parameters": dictionary of argument name and its value}.Do not use variables.\n\n{\n    "type": "function",\n    "function": {\n        "name": "tavily_search_results_json",\n        "description": "A search engine optimized for comprehensive, accurate, and trusted results. Useful for when you need to answer questions about current events. Input should be a search query.",\n        "parameters": {\n            "properties": {\n                "query": {\n                    "description": "search query to look up",\n                    "type": "string"\n                }\n            },\n            "required": [\n                "query"\n            ],\n            "type": "object"\n        }\n    }\n}\n\nTell me about langgraph<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n', params: SamplingParams(n=1, best_of=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.7, top_p=1.0, top_k=-1, min_p=0.0, seed=None, use_beam_search=False, length_penalty=1.0, early_stopping=False, stop=[], stop_token_ids=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=130854, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None), prompt_token_ids: [128000, 128006, 9125, 128007, 271, 13013, 25, 6125, 27993, 198, 38766, 1303, 33025, 2696, 25, 6790, 220, 2366, 18, 198, 15724, 2696, 25, 220, 1544, 17907, 220, 2366, 19, 271, 128009, 128006, 882, 128007, 271, 22818, 279, 2768, 5865, 11, 4587, 6013, 449, 264, 4823, 369, 264, 734, 1650, 449, 1202, 6300, 6105, 430, 1888, 11503, 279, 2728, 10137, 382, 66454, 304, 279, 3645, 5324, 609, 794, 734, 836, 11, 330, 14105, 794, 11240, 315, 5811, 836, 323, 1202, 907, 7966, 5519, 539, 1005, 7482, 382, 517, 262, 330, 1337, 794, 330, 1723, 761, 262, 330, 1723, 794, 341, 286, 330, 609, 794, 330, 83, 402, 1570, 10947, 13888, 9643, 761, 286, 330, 4789, 794, 330, 32, 2778, 4817, 34440, 369, 16195, 11, 13687, 11, 323, 22542, 3135, 13, 51612, 369, 994, 499, 1205, 311, 4320, 4860, 922, 1510, 4455, 13, 5688, 1288, 387, 264, 2778, 3319, 10560, 286, 330, 14105, 794, 341, 310, 330, 13495, 794, 341, 394, 330, 1663, 794, 341, 504, 330, 4789, 794, 330, 1874, 3319, 311, 1427, 709, 761, 504, 330, 1337, 794, 330, 928, 702, 394, 457, 310, 1173, 310, 330, 6413, 794, 2330, 394, 330, 1663, 702, 310, 3291, 310, 330, 1337, 794, 330, 1735, 702, 286, 457, 262, 457, 633, 41551, 757, 922, 8859, 4539, 128009, 128006, 78191, 128007, 271], lora_request: None, prompt_adapter_request: None.

🐛 Describe the bug

Attempting to query a Llama-3.2 model with tools via LangGraph causes an internal server error in vLLM:

```python
import os
from typing import Annotated

from langchain_openai import ChatOpenAI
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict

from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition

os.environ["TAVILY_API_KEY"] = "KEY"

tool = TavilySearchResults(max_results=2)
tools = [tool]

class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)

llm = ChatOpenAI(model="Llama-3.2", base_url="http://localhost:8000/v1", api_key="NONE")
llm_with_tools = llm.bind_tools(tools=tools)

def chatbot(state: State):
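    # This invoke is the /v1/chat/completions call that comes back as the 500 shown in the trace below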
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)

graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.set_entry_point("chatbot")
graph = graph_builder.compile()

from langchain_core.messages import BaseMessage

while True:
    user_input = input("User: ")
    if user_input.lower() in ["quit", "exit", "q"]:
        print("Goodbye!")
        break
    for event in graph.stream({"messages": [("user", user_input)]}):
        for value in event.values():
            if isinstance(value["messages"][-1], BaseMessage):
                print("Assistant:", value["messages"][-1].content)

Input:

User: Tell me about langgraph

Trace from vLLM:

```
VLLM_LOGGING_LEVEL=DEBUG VLLM_TRACE_FUNCTION=1 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True vllm serve ../Llama-3.2-3B-Instruct-quantized.w8a8 --served-model-name Llama-3.2 --enable-auto-tool-choice --tool-call-parser hermes --gpu-memory-utilization 0.8
INFO 09-27 14:05:08 api_server.py:526] vLLM API server version 0.6.1.dev238+ge2c6e0a82
INFO 09-27 14:05:08 api_server.py:527] args: Namespace(model_tag='../Llama-3.2-3B-Instruct-quantized.w8a8', config='', host=None, port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_auto_tool_choice=True, tool_call_parser='hermes', model='../Llama-3.2-3B-Instruct-quantized.w8a8', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, download_dir=None, load_format='auto', config_format='auto', dtype='auto', kv_cache_dtype='auto', quantization_param_path=None, max_model_len=None, guided_decoding_backend='outlines', distributed_executor_backend=None, worker_use_ray=False, pipeline_parallel_size=1, tensor_parallel_size=1, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=16, enable_prefix_caching=False, disable_sliding_window=False, use_v2_block_manager=False, num_lookahead_slots=0, seed=0, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.8, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=256, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, enforce_eager=False, max_context_len_to_capture=None, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, enable_lora=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, multi_step_stream_outputs=False, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=['Llama-3.2'], qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, override_neuron_config=None, disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False, dispatch_function=<function serve at 0x7fad5c996480>)
INFO 09-27 14:05:08 api_server.py:164] Multiprocessing frontend to use ipc:///tmp/ca3ef9ef-dd56-451c-b403-b86e02bda5cd for IPC Path.
INFO 09-27 14:05:08 api_server.py:177] Started engine process with PID 187806
WARNING 09-27 14:05:08 arg_utils.py:930] Chunked prefill is enabled by default for models with max_model_len > 32K. Currently, chunked prefill might not work with some features or models. If you encounter any issues, please disable chunked prefill by setting --enable-chunked-prefill=False.
INFO 09-27 14:05:08 config.py:1010] Chunked prefill is enabled with max_num_batched_tokens=512.
WARNING 09-27 14:05:10 arg_utils.py:930] Chunked prefill is enabled by default for models with max_model_len > 32K. Currently, chunked prefill might not work with some features or models. If you encounter any issues, please disable chunked prefill by setting --enable-chunked-prefill=False.
INFO 09-27 14:05:10 config.py:1010] Chunked prefill is enabled with max_num_batched_tokens=512.
INFO 09-27 14:05:10 llm_engine.py:226] Initializing an LLM engine (v0.6.1.dev238+ge2c6e0a82) with config: model='../Llama-3.2-3B-Instruct-quantized.w8a8', speculative_config=None, tokenizer='../Llama-3.2-3B-Instruct-quantized.w8a8', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=131072, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=compressed-tensors, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=Llama-3.2, use_v2_block_manager=False, num_scheduler_steps=1, multi_step_stream_outputs=False, enable_prefix_caching=False, use_async_output_proc=True, use_cached_outputs=True, mm_processor_kwargs=None)
WARNING 09-27 14:05:10 logger.py:147] VLLM_TRACE_FUNCTION is enabled. It will record every function executed by Python. This will slow down the code. It is suggested to be used for debugging hang or crashes only.
INFO 09-27 14:05:10 logger.py:151] Trace frame log is saved to /tmp/vllm/vllm-instance-7547df44dc6c46289dd9c235d2f12051/VLLM_TRACE_FUNCTION_for_process_187806_thread_140205046585152_at_2024-09-27_14:05:10.615323.log
DEBUG 09-27 14:05:11 parallel_state.py:937] world_size=1 rank=0 local_rank=0 distributed_init_method=tcp://192.168.2.117:53225 backend=nccl
[W927 14:05:11.375200449 CUDAAllocatorConfig.h:28] Warning: expandable_segments not supported on this platform (function operator())
INFO 09-27 14:05:11 model_runner.py:1014] Starting to load model ../Llama-3.2-3B-Instruct-quantized.w8a8...
Loading safetensors checkpoint shards:   0% Completed | 0/1 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00,  2.11it/s]
Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00,  2.11it/s]

INFO 09-27 14:05:12 model_runner.py:1025] Loading model weights took 3.3933 GB
INFO 09-27 14:05:13 gpu_executor.py:122] # GPU blocks: 8319, # CPU blocks: 2340
INFO 09-27 14:05:14 model_runner.py:1329] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
INFO 09-27 14:05:14 model_runner.py:1333] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
DEBUG 09-27 14:05:18 client.py:164] Waiting for output from MQLLMEngine.
DEBUG 09-27 14:05:28 client.py:164] Waiting for output from MQLLMEngine.
INFO 09-27 14:05:29 model_runner.py:1456] Graph capturing finished in 15 secs.
DEBUG 09-27 14:05:30 engine.py:150] Starting Startup Loop.
DEBUG 09-27 14:05:30 engine.py:152] Starting heartbeat thread
DEBUG 09-27 14:05:30 engine.py:154] Starting Engine Loop.
INFO 09-27 14:05:30 api_server.py:230] vLLM to use /tmp/tmp3vo4h881 as PROMETHEUS_MULTIPROC_DIR
INFO 09-27 14:05:30 serving_chat.py:77] "auto" tool choice has been enabled please note that while the parallel_tool_calls client option is preset for compatibility reasons, it will be ignored.
WARNING 09-27 14:05:30 serving_embedding.py:189] embedding_mode is False. Embedding API will not work.
INFO 09-27 14:05:30 launcher.py:19] Available routes are:
INFO 09-27 14:05:30 launcher.py:27] Route: /openapi.json, Methods: HEAD, GET
INFO 09-27 14:05:30 launcher.py:27] Route: /docs, Methods: HEAD, GET
INFO 09-27 14:05:30 launcher.py:27] Route: /docs/oauth2-redirect, Methods: HEAD, GET
INFO 09-27 14:05:30 launcher.py:27] Route: /redoc, Methods: HEAD, GET
INFO 09-27 14:05:30 launcher.py:27] Route: /health, Methods: GET
INFO 09-27 14:05:30 launcher.py:27] Route: /tokenize, Methods: POST
INFO 09-27 14:05:30 launcher.py:27] Route: /detokenize, Methods: POST
INFO 09-27 14:05:30 launcher.py:27] Route: /v1/models, Methods: GET
INFO 09-27 14:05:30 launcher.py:27] Route: /version, Methods: GET
INFO 09-27 14:05:30 launcher.py:27] Route: /v1/chat/completions, Methods: POST
INFO 09-27 14:05:30 launcher.py:27] Route: /v1/completions, Methods: POST
INFO 09-27 14:05:30 launcher.py:27] Route: /v1/embeddings, Methods: POST
INFO:     Started server process [187762]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
DEBUG 09-27 14:05:32 client.py:148] Heartbeat successful.
DEBUG 09-27 14:05:34 client.py:148] Heartbeat successful.
DEBUG 09-27 14:05:36 client.py:148] Heartbeat successful.
DEBUG 09-27 14:05:38 client.py:148] Heartbeat successful.
DEBUG 09-27 14:05:38 client.py:164] Waiting for output from MQLLMEngine.
DEBUG 09-27 14:05:40 client.py:148] Heartbeat successful.
INFO 09-27 14:05:40 metrics.py:351] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%.
DEBUG 09-27 14:05:40 engine.py:212] Waiting for new requests in engine loop.
DEBUG 09-27 14:05:42 client.py:148] Heartbeat successful.
INFO 09-27 14:05:42 logger.py:36] Received request chat-d007392b100e4442a2e278c8918b2b0e: prompt: '<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nEnvironment: ipython\nCutting Knowledge Date: December 2023\nToday Date: 27 Sep 2024\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nGiven the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt.\n\nRespond in the format {"name": function name, "parameters": dictionary of argument name and its value}.Do not use variables.\n\n{\n    "type": "function",\n    "function": {\n        "name": "tavily_search_results_json",\n        "description": "A search engine optimized for comprehensive, accurate, and trusted results. Useful for when you need to answer questions about current events. Input should be a search query.",\n        "parameters": {\n            "properties": {\n                "query": {\n                    "description": "search query to look up",\n                    "type": "string"\n                }\n            },\n            "required": [\n                "query"\n            ],\n            "type": "object"\n        }\n    }\n}\n\nTell me about langgraph<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n', params: SamplingParams(n=1, best_of=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.7, top_p=1.0, top_k=-1, min_p=0.0, seed=None, use_beam_search=False, length_penalty=1.0, early_stopping=False, stop=[], stop_token_ids=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=130854, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None), prompt_token_ids: [128000, 128006, 9125, 128007, 271, 13013, 25, 6125, 27993, 198, 38766, 1303, 33025, 2696, 25, 6790, 220, 2366, 18, 198, 15724, 2696, 25, 220, 1544, 17907, 220, 2366, 19, 271, 128009, 128006, 882, 128007, 271, 22818, 279, 2768, 5865, 11, 4587, 6013, 449, 264, 4823, 369, 264, 734, 1650, 449, 1202, 6300, 6105, 430, 1888, 11503, 279, 2728, 10137, 382, 66454, 304, 279, 3645, 5324, 609, 794, 734, 836, 11, 330, 14105, 794, 11240, 315, 5811, 836, 323, 1202, 907, 7966, 5519, 539, 1005, 7482, 382, 517, 262, 330, 1337, 794, 330, 1723, 761, 262, 330, 1723, 794, 341, 286, 330, 609, 794, 330, 83, 402, 1570, 10947, 13888, 9643, 761, 286, 330, 4789, 794, 330, 32, 2778, 4817, 34440, 369, 16195, 11, 13687, 11, 323, 22542, 3135, 13, 51612, 369, 994, 499, 1205, 311, 4320, 4860, 922, 1510, 4455, 13, 5688, 1288, 387, 264, 2778, 3319, 10560, 286, 330, 14105, 794, 341, 310, 330, 13495, 794, 341, 394, 330, 1663, 794, 341, 504, 330, 4789, 794, 330, 1874, 3319, 311, 1427, 709, 761, 504, 330, 1337, 794, 330, 928, 702, 394, 457, 310, 1173, 310, 330, 6413, 794, 2330, 394, 330, 1663, 702, 310, 3291, 310, 330, 1337, 794, 330, 1735, 702, 286, 457, 262, 457, 633, 41551, 757, 922, 8859, 4539, 128009, 128006, 78191, 128007, 271], lora_request: None, prompt_adapter_request: None.
INFO 09-27 14:05:42 engine.py:288] Added request chat-d007392b100e4442a2e278c8918b2b0e.
DEBUG 09-27 14:05:43 llm_engine.py:1328] Stopping remote worker execution loop.
INFO:     ::1:60296 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/home/user/git/test/venv/lib/python3.12/site-packages/uvicorn/protocols/http/httptools_impl.py", line 401, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/git/test/venv/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 70, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/git/test/venv/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/applications.py", line 113, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 187, in __call__
    raise exc
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 165, in __call__
    await self.app(scope, receive, _send)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/middleware/cors.py", line 85, in __call__
    await self.app(scope, receive, send)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 62, in wrapped_app
    raise exc
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 51, in wrapped_app
    await app(scope, receive, sender)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/routing.py", line 715, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/routing.py", line 735, in app
    await route.handle(scope, receive, send)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/routing.py", line 288, in handle
    await self.app(scope, receive, send)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/routing.py", line 76, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 62, in wrapped_app
    raise exc
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 51, in wrapped_app
    await app(scope, receive, sender)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/routing.py", line 73, in app
    response = await f(request)
               ^^^^^^^^^^^^^^^^
  File "/home/user/git/test/venv/lib/python3.12/site-packages/fastapi/routing.py", line 301, in app
    raw_response = await run_endpoint_function(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/git/test/venv/lib/python3.12/site-packages/fastapi/routing.py", line 212, in run_endpoint_function
    return await dependant.call(**values)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/git/test/venv/lib64/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 313, in create_chat_completion
    generator = await chat(raw_request).create_chat_completion(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/git/test/venv/lib64/python3.12/site-packages/vllm/entrypoints/openai/serving_chat.py", line 254, in create_chat_completion
    return await self.chat_completion_full_generator(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/git/test/venv/lib64/python3.12/site-packages/vllm/entrypoints/openai/serving_chat.py", line 686, in chat_completion_full_generator
    tool_parser = self.tool_parser(tokenizer)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/git/test/venv/lib64/python3.12/site-packages/vllm/entrypoints/openai/tool_parsers/hermes_tool_parser.py", line 51, in __init__
    self.tool_call_start_token_id: int = self.model_tokenizer.vocab[
                                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
KeyError: '<tool_call>'
INFO 09-27 14:05:43 logger.py:36] Received request chat-4e694450def2491abf17d93a02b0f83e: prompt: '<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nEnvironment: ipython\nCutting Knowledge Date: December 2023\nToday Date: 27 Sep 2024\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nGiven the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt.\n\nRespond in the format {"name": function name, "parameters": dictionary of argument name and its value}.Do not use variables.\n\n{\n    "type": "function",\n    "function": {\n        "name": "tavily_search_results_json",\n        "description": "A search engine optimized for comprehensive, accurate, and trusted results. Useful for when you need to answer questions about current events. Input should be a search query.",\n        "parameters": {\n            "properties": {\n                "query": {\n                    "description": "search query to look up",\n                    "type": "string"\n                }\n            },\n            "required": [\n                "query"\n            ],\n            "type": "object"\n        }\n    }\n}\n\nTell me about langgraph<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n', params: SamplingParams(n=1, best_of=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.7, top_p=1.0, top_k=-1, min_p=0.0, seed=None, use_beam_search=False, length_penalty=1.0, early_stopping=False, stop=[], stop_token_ids=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=130854, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None), prompt_token_ids: [128000, 128006, 9125, 128007, 271, 13013, 25, 6125, 27993, 198, 38766, 1303, 33025, 2696, 25, 6790, 220, 2366, 18, 198, 15724, 2696, 25, 220, 1544, 17907, 220, 2366, 19, 271, 128009, 128006, 882, 128007, 271, 22818, 279, 2768, 5865, 11, 4587, 6013, 449, 264, 4823, 369, 264, 734, 1650, 449, 1202, 6300, 6105, 430, 1888, 11503, 279, 2728, 10137, 382, 66454, 304, 279, 3645, 5324, 609, 794, 734, 836, 11, 330, 14105, 794, 11240, 315, 5811, 836, 323, 1202, 907, 7966, 5519, 539, 1005, 7482, 382, 517, 262, 330, 1337, 794, 330, 1723, 761, 262, 330, 1723, 794, 341, 286, 330, 609, 794, 330, 83, 402, 1570, 10947, 13888, 9643, 761, 286, 330, 4789, 794, 330, 32, 2778, 4817, 34440, 369, 16195, 11, 13687, 11, 323, 22542, 3135, 13, 51612, 369, 994, 499, 1205, 311, 4320, 4860, 922, 1510, 4455, 13, 5688, 1288, 387, 264, 2778, 3319, 10560, 286, 330, 14105, 794, 341, 310, 330, 13495, 794, 341, 394, 330, 1663, 794, 341, 504, 330, 4789, 794, 330, 1874, 3319, 311, 1427, 709, 761, 504, 330, 1337, 794, 330, 928, 702, 394, 457, 310, 1173, 310, 330, 6413, 794, 2330, 394, 330, 1663, 702, 310, 3291, 310, 330, 1337, 794, 330, 1735, 702, 286, 457, 262, 457, 633, 41551, 757, 922, 8859, 4539, 128009, 128006, 78191, 128007, 271], lora_request: None, prompt_adapter_request: None.
INFO 09-27 14:05:43 engine.py:288] Added request chat-4e694450def2491abf17d93a02b0f83e.
DEBUG 09-27 14:05:44 llm_engine.py:1328] Stopping remote worker execution loop.
INFO:     ::1:60304 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/home/user/git/test/venv/lib/python3.12/site-packages/uvicorn/protocols/http/httptools_impl.py", line 401, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/git/test/venv/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 70, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/git/test/venv/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/applications.py", line 113, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 187, in __call__
    raise exc
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 165, in __call__
    await self.app(scope, receive, _send)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/middleware/cors.py", line 85, in __call__
    await self.app(scope, receive, send)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 62, in wrapped_app
    raise exc
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 51, in wrapped_app
    await app(scope, receive, sender)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/routing.py", line 715, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/routing.py", line 735, in app
    await route.handle(scope, receive, send)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/routing.py", line 288, in handle
    await self.app(scope, receive, send)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/routing.py", line 76, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 62, in wrapped_app
    raise exc
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 51, in wrapped_app
    await app(scope, receive, sender)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/routing.py", line 73, in app
    response = await f(request)
               ^^^^^^^^^^^^^^^^
  File "/home/user/git/test/venv/lib/python3.12/site-packages/fastapi/routing.py", line 301, in app
    raw_response = await run_endpoint_function(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/git/test/venv/lib/python3.12/site-packages/fastapi/routing.py", line 212, in run_endpoint_function
    return await dependant.call(**values)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/git/test/venv/lib64/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 313, in create_chat_completion
    generator = await chat(raw_request).create_chat_completion(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/git/test/venv/lib64/python3.12/site-packages/vllm/entrypoints/openai/serving_chat.py", line 254, in create_chat_completion
    return await self.chat_completion_full_generator(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/git/test/venv/lib64/python3.12/site-packages/vllm/entrypoints/openai/serving_chat.py", line 686, in chat_completion_full_generator
    tool_parser = self.tool_parser(tokenizer)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/git/test/venv/lib64/python3.12/site-packages/vllm/entrypoints/openai/tool_parsers/hermes_tool_parser.py", line 51, in __init__
    self.tool_call_start_token_id: int = self.model_tokenizer.vocab[
                                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
KeyError: '<tool_call>'
DEBUG 09-27 14:05:44 client.py:148] Heartbeat successful.
INFO 09-27 14:05:45 logger.py:36] Received request chat-d7b6885bb0a14fa3b3da22dbbd580234: prompt: '<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nEnvironment: ipython\nCutting Knowledge Date: December 2023\nToday Date: 27 Sep 2024\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nGiven the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt.\n\nRespond in the format {"name": function name, "parameters": dictionary of argument name and its value}.Do not use variables.\n\n{\n    "type": "function",\n    "function": {\n        "name": "tavily_search_results_json",\n        "description": "A search engine optimized for comprehensive, accurate, and trusted results. Useful for when you need to answer questions about current events. Input should be a search query.",\n        "parameters": {\n            "properties": {\n                "query": {\n                    "description": "search query to look up",\n                    "type": "string"\n                }\n            },\n            "required": [\n                "query"\n            ],\n            "type": "object"\n        }\n    }\n}\n\nTell me about langgraph<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n', params: SamplingParams(n=1, best_of=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.7, top_p=1.0, top_k=-1, min_p=0.0, seed=None, use_beam_search=False, length_penalty=1.0, early_stopping=False, stop=[], stop_token_ids=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=130854, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None), prompt_token_ids: [128000, 128006, 9125, 128007, 271, 13013, 25, 6125, 27993, 198, 38766, 1303, 33025, 2696, 25, 6790, 220, 2366, 18, 198, 15724, 2696, 25, 220, 1544, 17907, 220, 2366, 19, 271, 128009, 128006, 882, 128007, 271, 22818, 279, 2768, 5865, 11, 4587, 6013, 449, 264, 4823, 369, 264, 734, 1650, 449, 1202, 6300, 6105, 430, 1888, 11503, 279, 2728, 10137, 382, 66454, 304, 279, 3645, 5324, 609, 794, 734, 836, 11, 330, 14105, 794, 11240, 315, 5811, 836, 323, 1202, 907, 7966, 5519, 539, 1005, 7482, 382, 517, 262, 330, 1337, 794, 330, 1723, 761, 262, 330, 1723, 794, 341, 286, 330, 609, 794, 330, 83, 402, 1570, 10947, 13888, 9643, 761, 286, 330, 4789, 794, 330, 32, 2778, 4817, 34440, 369, 16195, 11, 13687, 11, 323, 22542, 3135, 13, 51612, 369, 994, 499, 1205, 311, 4320, 4860, 922, 1510, 4455, 13, 5688, 1288, 387, 264, 2778, 3319, 10560, 286, 330, 14105, 794, 341, 310, 330, 13495, 794, 341, 394, 330, 1663, 794, 341, 504, 330, 4789, 794, 330, 1874, 3319, 311, 1427, 709, 761, 504, 330, 1337, 794, 330, 928, 702, 394, 457, 310, 1173, 310, 330, 6413, 794, 2330, 394, 330, 1663, 702, 310, 3291, 310, 330, 1337, 794, 330, 1735, 702, 286, 457, 262, 457, 633, 41551, 757, 922, 8859, 4539, 128009, 128006, 78191, 128007, 271], lora_request: None, prompt_adapter_request: None.
INFO 09-27 14:05:45 engine.py:288] Added request chat-d7b6885bb0a14fa3b3da22dbbd580234.
INFO 09-27 14:05:45 metrics.py:351] Avg prompt throughput: 130.2 tokens/s, Avg generation throughput: 9.0 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.2%, CPU KV cache usage: 0.0%.
DEBUG 09-27 14:05:45 llm_engine.py:1328] Stopping remote worker execution loop.
INFO:     ::1:60316 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/home/user/git/test/venv/lib/python3.12/site-packages/uvicorn/protocols/http/httptools_impl.py", line 401, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/git/test/venv/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 70, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/git/test/venv/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/applications.py", line 113, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 187, in __call__
    raise exc
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 165, in __call__
    await self.app(scope, receive, _send)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/middleware/cors.py", line 85, in __call__
    await self.app(scope, receive, send)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 62, in wrapped_app
    raise exc
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 51, in wrapped_app
    await app(scope, receive, sender)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/routing.py", line 715, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/routing.py", line 735, in app
    await route.handle(scope, receive, send)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/routing.py", line 288, in handle
    await self.app(scope, receive, send)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/routing.py", line 76, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 62, in wrapped_app
    raise exc
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 51, in wrapped_app
    await app(scope, receive, sender)
  File "/home/user/git/test/venv/lib/python3.12/site-packages/starlette/routing.py", line 73, in app
    response = await f(request)
               ^^^^^^^^^^^^^^^^
  File "/home/user/git/test/venv/lib/python3.12/site-packages/fastapi/routing.py", line 301, in app
    raw_response = await run_endpoint_function(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/git/test/venv/lib/python3.12/site-packages/fastapi/routing.py", line 212, in run_endpoint_function
    return await dependant.call(**values)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/git/test/venv/lib64/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 313, in create_chat_completion
    generator = await chat(raw_request).create_chat_completion(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/git/test/venv/lib64/python3.12/site-packages/vllm/entrypoints/openai/serving_chat.py", line 254, in create_chat_completion
    return await self.chat_completion_full_generator(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/git/test/venv/lib64/python3.12/site-packages/vllm/entrypoints/openai/serving_chat.py", line 686, in chat_completion_full_generator
    tool_parser = self.tool_parser(tokenizer)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/git/test/venv/lib64/python3.12/site-packages/vllm/entrypoints/openai/tool_parsers/hermes_tool_parser.py", line 51, in __init__
    self.tool_call_start_token_id: int = self.model_tokenizer.vocab[
                                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
KeyError: '<tool_call>'
DEBUG 09-27 14:05:46 client.py:148] Heartbeat successful.
DEBUG 09-27 14:05:48 client.py:148] Heartbeat successful.
DEBUG 09-27 14:05:50 client.py:148] Heartbeat successful.
DEBUG 09-27 14:05:52 client.py:148] Heartbeat successful.
^CDEBUG 09-27 14:05:53 engine.py:159] Shutting down MQLLMEngine.
DEBUG 09-27 14:05:53 engine.py:161] MQLLMEngine is shut down.
DEBUG 09-27 14:05:53 engine.py:314] Exiting MQLLMEngine heartbeat thread
DEBUG 09-27 14:05:53 launcher.py:54] port 8000 is used by process psutil.Process(pid=187762, name='pt_main_thread', status='running', started='14:05:05') launched with command:
DEBUG 09-27 14:05:53 launcher.py:54] /home/user/git/test/venv/bin/python /home/user/git/test/venv/bin/vllm serve ../Llama-3.2-3B-Instruct-quantized.w8a8 --served-model-name Llama-3.2 --enable-auto-tool-choice --tool-call-parser hermes --gpu-memory-utilization 0.8
INFO 09-27 14:05:53 launcher.py:57] Shutting down FastAPI HTTP server.
INFO:     Shutting down
DEBUG 09-27 14:05:53 client.py:151] Shutting down MQLLMEngineClient check health loop.
DEBUG 09-27 14:05:53 client.py:218] Shutting down MQLLMEngineClient output handler.
INFO:     Waiting for application shutdown.
INFO:     Application shutdown complete.
```

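Note that LangGraph shouldn't actually be required to trigger this: any request to the OpenAI-compatible endpoint that attaches tools should go through the same tool-parser construction. A minimal sketch of a direct repro (not part of the original run; the explicit `tool_choice="auto"` is an assumption to make sure the auto-tool-choice path is taken):

```python
# Hypothetical minimal repro against the vLLM server started above, without LangGraph.
# Any tools-bearing chat request should hit the same hermes_tool_parser KeyError path.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="NONE")

client.chat.completions.create(
    model="Llama-3.2",
    messages=[{"role": "user", "content": "Tell me about langgraph"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "tavily_search_results_json",
            "description": "A search engine optimized for comprehensive, accurate, and trusted results.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string", "description": "search query to look up"}},
                "required": ["query"],
            },
        },
    }],
    tool_choice="auto",  # assumption: force the auto tool-choice path in vLLM
)
# Expected against the affected build: openai.InternalServerError (HTTP 500)
```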

Xaenalt commented 1 day ago

Same error occurs with --tool-call-parser mistral
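
That matches where it fails: both parsers resolve a tool-call start token from the tokenizer vocab, and the Llama 3.x tokenizer doesn't define one, so the `vocab[...]` lookup raises `KeyError`. A quick check along these lines shows it (sketch only; the `[TOOL_CALLS]` token is my assumption of what the mistral parser looks up):

```python
# Sketch: confirm the special tokens the hermes/mistral tool parsers expect
# are absent from the Llama 3.2 tokenizer used in this report.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("../Llama-3.2-3B-Instruct-quantized.w8a8")
vocab = tok.get_vocab()

for token in ("<tool_call>", "[TOOL_CALLS]"):  # hermes / mistral start tokens
    print(f"{token!r} in vocab: {token in vocab}")  # both print False here
```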

Xaenalt commented 1 day ago

Ah, looks like they merged the --tool-call-parser llama3_json option 18h ago; that's probably the fix.

For anyone finding this issue: https://github.com/vllm-project/vllm/pull/8343
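
Once that parser is in your build, a serve invocation along these lines should avoid the Hermes parser entirely (the chat template path is an assumption based on the templates shipped in the vLLM repo; adjust to your checkout):

```
vllm serve ../Llama-3.2-3B-Instruct-quantized.w8a8 \
    --served-model-name Llama-3.2 \
    --enable-auto-tool-choice \
    --tool-call-parser llama3_json \
    --chat-template examples/tool_chat_template_llama3.1_json.jinja \
    --gpu-memory-utilization 0.8
```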