vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Speculative decoding]: The content generated by speculative decoding is inconsistent with the content generated by the target model #5313

Closed YuCheng-Qi closed 3 weeks ago

YuCheng-Qi commented 4 weeks ago

Your current environment

The output of `python collect_env.py`

Collecting environment information...
PyTorch version: 2.1.2+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Enterprise Linux Server 7.2 (Paladin) (x86_64)
GCC version: (GCC) 9.2.1 20200522 (Alibaba 9.2.1-3 2.17)
Clang version: Could not collect
CMake version: version 3.29.3
Libc version: glibc-2.30

Python version: 3.8.18 (default, Sep 11 2023, 13:40:15) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.10.134-13.al8.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-80GB
GPU 1: NVIDIA A800-SXM4-80GB

Nvidia driver version: 470.161.03
cuDNN version: Probably one of the following:
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn.so.8.9.3
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.3
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.3
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.3
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.3
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.3
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.3
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8369B CPU @ 2.90GHz
Stepping: 6
CPU MHz: 3500.000
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 48K
L1i cache: 32K
L2 cache: 1280K
L3 cache: 49152K
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid fsrm md_clear pconfig flush_l1d arch_capabilities

Versions of relevant libraries:
[pip3] atorch==1.2.1
[pip3] flake8==6.1.0
[pip3] numpy==1.23.5
[pip3] nvidia-nccl-cu12==2.18.1
[pip3] torch==2.1.2
[pip3] torchaudio==2.1.0+cu121
[pip3] torchpippy==0.1.1+cecc4fc
[pip3] torchvision==0.16.0+cu121
[pip3] transformers==4.31.0
[pip3] triton==2.1.0
[conda] atorch 1.2.1 pypi_0 pypi
[conda] numpy 1.23.5 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.18.1 pypi_0 pypi
[conda] torch 2.1.2 pypi_0 pypi
[conda] torchaudio 2.1.0+cu121 pypi_0 pypi
[conda] torchpippy 0.1.1+cecc4fc pypi_0 pypi
[conda] torchvision 0.16.0+cu121 pypi_0 pypi
[conda] transformers 4.31.0 pypi_0 pypi
[conda] triton 2.1.0 pypi_0 pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.4.1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
Could not collect

๐Ÿ› Describe the bug

from vllm import LLM, SamplingParams
from tokenization_glm import GLMChineseTokenizer
from typing import Dict, List, Optional, Union
import numpy as np
import torch

sampling_params = SamplingParams(temperature=0.8, top_p=0.95, logprobs=1, ignore_eos=True)

prompts = ["What is Machine Learning？"]

llm = LLM(
    model="/mnt/nas/faibei/faibing-10B-Chat",
    enable_lora=True,
    use_v2_block_manager=True,
    speculative_model="/mnt/nas/faibei/faibing-1B-Chat",
    num_speculative_tokens=5,
    enforce_eager=True,
    # load_format='safetensors',
    # enable_prefix_caching=True,
    gpu_memory_utilization=0.4,
    num_gpu_blocks_override=1000,
    swap_space=4,
    max_context_len_to_capture=512,
)

import time

start = time.time()
outputs = llm.generate(prompts=prompts, sampling_params=sampling_params)
end = time.time()

print("tokens/s:", sum([len(o.outputs[0].token_ids) for o in outputs]) / (end - start))
print(f"outputs: {outputs}")
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

The output is:

outputs: [RequestOutput(request_id=0, prompt='What is Machine Learning？', prompt_token_ids=[50002, 26888, 2476, 59109, 60303, 43389, 50007], prompt_logprobs=None, outputs=[CompletionOutput(index=0, text='10 Milano12 212主要专业 [apps国家 这管理 我如何an', token_ids=[50006, 30, 95195, 64, 50, 64, 79, 75, 40, 62594, 90, 51, 46, 14, 70, 54], cumulative_logprob=0.0, logprobs=[{50006: Logprob(logprob=0.0, rank=None, decoded_token='')}, {30: Logprob(logprob=0.0, rank=None, decoded_token='10')}, {95195: Logprob(logprob=0.0, rank=None, decoded_token=' Milano')}, {64: Logprob(logprob=0.0, rank=None, decoded_token='12')}, {50: Logprob(logprob=0.0, rank=None, decoded_token=' 2')}, {64: Logprob(logprob=0.0, rank=None, decoded_token='12')}, {79: Logprob(logprob=0.0, rank=None, decoded_token='主要')}, {75: Logprob(logprob=0.0, rank=None, decoded_token='专业')}, {40: Logprob(logprob=0.0, rank=None, decoded_token=' [')}, {62594: Logprob(logprob=0.0, rank=None, decoded_token='apps')}, {90: Logprob(logprob=0.0, rank=None, decoded_token='国家')}, {51: Logprob(logprob=0.0, rank=None, decoded_token=' 这')}, {46: Logprob(logprob=0.0, rank=None, decoded_token='管理')}, {14: Logprob(logprob=0.0, rank=None, decoded_token=' 我')}, {70: Logprob(logprob=0.0, rank=None, decoded_token='如何')}, {54: Logprob(logprob=0.0, rank=None, decoded_token='an')}], finish_reason=length, stop_reason=None)], finished=True, metrics=RequestMetrics(arrival_time=1717665609.5043766, first_scheduled_time=+4.05ms, first_token_time=+42.83ms, last_token_time=+0.00ms, time_in_queue=4.05ms, finished_time=1717665610.0992045), lora_request=None)]
Prompt: 'What is Machine Learning？', Generated text: '10 Milano12 212主要专业 [apps国家 这管理 我如何an'

But when using the target model alone:

llm = LLM(
    model="/mnt/nas/faibei/faibing-10B-Chat",
    enable_lora=True,
    # use_v2_block_manager=True,
    # speculative_model="/mnt/nas/faibei/faibing-1B-Chat",
    # num_speculative_tokens=5,
    enforce_eager=True,
    # load_format='safetensors',
    # enable_prefix_caching=True,
    gpu_memory_utilization=0.4,
    num_gpu_blocks_override=1000,
    swap_space=4,
    max_context_len_to_capture=512,
)

the output is:

tokens/s: 42.0836783927287
outputs: [RequestOutput(request_id=0, prompt='What is Machine Learning？', prompt_token_ids=[50002, 26888, 2476, 59109, 60303, 43389, 50007], prompt_logprobs=None, outputs=[CompletionOutput(index=0, text='Machine Learning is a subfield of Artificial Intelligence that involves training algorithms to', token_ids=[50006, 59109, 60303, 2476, 778, 19767, 12338, 17023, 885, 87032, 58795, 5246, 52843, 50599, 55832, 1293], cumulative_logprob=-7.396305903792381, logprobs=[{50006: Logprob(logprob=0.0, rank=1, decoded_token='')}, {59109: Logprob(logprob=0.0, rank=1, decoded_token='Machine')}, {60303: Logprob(logprob=-0.6931471824645996, rank=1, decoded_token=' Learning'), 51081: Logprob(logprob=-0.6931471824645996, rank=1, decoded_token=' learning')}, {2476: Logprob(logprob=0.0, rank=1, decoded_token=' is')}, {778: Logprob(logprob=0.0, rank=1, decoded_token=' a')}, {19767: Logprob(logprob=-1.7465672492980957, rank=3, decoded_token=' su'), 50162: Logprob(logprob=-0.7465673089027405, rank=1, decoded_token=' field')}, {12338: Logprob(logprob=0.0, rank=1, decoded_token='bf')}, {17023: Logprob(logprob=0.0, rank=1, decoded_token='ield')}, {885: Logprob(logprob=0.0, rank=1, decoded_token=' of')}, {87032: Logprob(logprob=-0.3484445810317993, rank=1, decoded_token=' Artificial')}, {58795: Logprob(logprob=0.0, rank=1, decoded_token=' Intelligence')}, {5246: Logprob(logprob=-0.825939416885376, rank=2, decoded_token=' that'), 35: Logprob(logprob=-0.575939416885376, rank=1, decoded_token=' (')}, {52843: Logprob(logprob=-2.449254035949707, rank=3, decoded_token=' involves'), 56318: Logprob(logprob=-0.44925403594970703, rank=1, decoded_token=' focuses')}, {50599: Logprob(logprob=-1.1480169296264648, rank=2, decoded_token=' training'), 631: Logprob(logprob=-0.6480168700218201, rank=1, decoded_token=' the')}, {55832: Logprob(logprob=-0.1849365085363388, rank=1, decoded_token=' algorithms')}, {1293: Logprob(logprob=0.0, rank=1, decoded_token=' to')}], finish_reason=length, stop_reason=None)], finished=True, metrics=RequestMetrics(arrival_time=1717666149.8962908, first_scheduled_time=+3.81ms, first_token_time=+94.62ms, last_token_time=+0.00ms, time_in_queue=3.81ms, finished_time=1717666150.2760644), lora_request=None)]
Prompt: 'What is Machine Learning？', Generated text: 'Machine Learning is a subfield of Artificial Intelligence that involves training algorithms to'

Can anyone help me? How can I solve this problem?

sroy745 commented 4 weeks ago

Are you expecting them to be exactly the same? I see you are using a temperature of 0.8 in your experiment. At higher temperatures you will see differences between the speculative decoding output and the output you would get from running the target model directly. This is the acceptance logic for draft tokens: https://sourcegraph.com/github.com/vllm-project/vllm/-/blob/vllm/model_executor/layers/rejection_sampler.py?L160. It does not guarantee that the outputs will be identical, especially at higher temperatures.
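For intuition, that rejection sampler implements the standard speculative-sampling acceptance rule (Leviathan et al., 2023): a draft token is accepted with probability min(1, p_target / p_draft), and on rejection a replacement is drawn from the normalized residual distribution. A minimal NumPy sketch of the rule, as an illustration only, not vLLM's actual code:

```python
import numpy as np

def accept_or_resample(p_target, p_draft, draft_token, rng=None):
    """Sketch of the standard speculative-sampling acceptance rule.

    p_target, p_draft: probability vectors over the vocabulary at one position.
    draft_token: the token id proposed by the draft model.
    """
    rng = rng or np.random.default_rng()
    # Accept the draft token with probability min(1, p_target / p_draft).
    accept_prob = min(1.0, p_target[draft_token] / p_draft[draft_token])
    if rng.random() < accept_prob:
        return draft_token
    # On rejection, resample from the normalized residual max(0, p_target - p_draft).
    # This correction makes the overall *distribution* of outputs match the target
    # model's, even though any individual sampled sequence can differ.
    residual = np.maximum(p_target - p_draft, 0.0)
    return rng.choice(len(residual), p=residual / residual.sum())
```

Because acceptance is itself a random event at temperature > 0, two runs will agree in distribution but not token for token.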

You can expect the two outputs to match, token for token, only at temperature 0.
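One way to verify this is to run the same prompt greedily with and without the speculative config and compare token ids. A sketch using the private model paths from this issue (if GPU memory is tight, running each engine in a separate process is safer than loading both in one script):

```python
from vllm import LLM, SamplingParams

greedy = SamplingParams(temperature=0, max_tokens=16)
prompts = ["What is Machine Learning？"]

# Run 1: target model alone (baseline).
baseline = LLM(model="/mnt/nas/faibei/faibing-10B-Chat",
               enforce_eager=True, gpu_memory_utilization=0.4)
base_ids = baseline.generate(prompts, greedy)[0].outputs[0].token_ids

# Run 2: target model with speculative decoding (ideally in a fresh process).
spec = LLM(model="/mnt/nas/faibei/faibing-10B-Chat",
           speculative_model="/mnt/nas/faibei/faibing-1B-Chat",
           num_speculative_tokens=5, use_v2_block_manager=True,
           enforce_eager=True, gpu_memory_utilization=0.4)
spec_ids = spec.generate(prompts, greedy)[0].outputs[0].token_ids

# At temperature 0 the two token-id sequences should be identical;
# a mismatch points at a real bug rather than sampling noise.
print("match:", list(base_ids) == list(spec_ids))
```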

cc: @cadedaniel for his input.

cadedaniel commented 4 weeks ago

@sroy745 is right, but it also looks like it's generating gibberish, which is unexpected unless the target model itself produces gibberish.

Prompt: 'What is Machine Learning？', Generated text: '10 Milano12 212主要专业 [apps国家 这管理 我如何an'

@YuCheng-Qi can you share a reproducible example, e.g. a model I can reproduce it with?
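If the checkpoints can't be shared, checking whether it reproduces with open weights would also help, e.g. the OPT pair used in vLLM's speculative decoding examples (a sketch with the same engine flags as your report, but with public models):

```python
from vllm import LLM, SamplingParams

# facebook/opt-6.7b as target, facebook/opt-125m as draft.
llm = LLM(
    model="facebook/opt-6.7b",
    speculative_model="facebook/opt-125m",
    num_speculative_tokens=5,
    use_v2_block_manager=True,
    enforce_eager=True,
)
outputs = llm.generate(["What is Machine Learning?"],
                       SamplingParams(temperature=0, max_tokens=32))
print(outputs[0].outputs[0].text)
```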

YuCheng-Qi commented 4 weeks ago

@cadedaniel @sroy745 Thank you for your insightful and helpful answers. Below I describe in detail how this error occurs, for your reference.

When I use:

sampling_params = SamplingParams(temperature=0, top_p=0.95, logprobs=1, stop_token_ids=stop_token_ids)

1. The result generated by using the target model (/mnt/nas/faibei/faibing-10B-Chat) alone is:

Processed prompts: 100%|██████████| 1/1 [00:00<00:00, 3.20it/s]
tokens/s: 50.583147408917135
outputs: [RequestOutput(request_id=0, prompt='What is Machine Learning', prompt_token_ids=[50002, 26888, 2476, 59109, 60303, 50007], prompt_logprobs=None, outputs=[CompletionOutput(index=0, text='Machine learning is a field of artificial intelligence (AI) that involves training algorithms', token_ids=[50006, 59109, 51081, 2476, 778, 50162, 885, 55501, 53336, 35, 6272, 43396, 5246, 52843, 50599, 55832], cumulative_logprob=-2.7939926966739677, logprobs=[{50006: Logprob(logprob=0.0, rank=1, decoded_token='')}, {59109: Logprob(logprob=-0.011991554871201515, rank=1, decoded_token='Machine')}, {51081: Logprob(logprob=-0.4860461354255676, rank=1, decoded_token=' learning')}, {2476: Logprob(logprob=-0.011147127486765385, rank=1, decoded_token=' is')}, {778: Logprob(logprob=-0.014388969168066978, rank=1, decoded_token=' a')}, {50162: Logprob(logprob=-0.5061734318733215, rank=1, decoded_token=' field')}, {885: Logprob(logprob=-0.0033579650335013866, rank=1, decoded_token=' of')}, {55501: Logprob(logprob=-0.5290303826332092, rank=1, decoded_token=' artificial')}, {53336: Logprob(logprob=-3.099436753473128e-06, rank=1, decoded_token=' intelligence')}, {35: Logprob(logprob=-0.445012629032135, rank=1, decoded_token=' (')}, {6272: Logprob(logprob=-1.3589766240329482e-05, rank=1, decoded_token='AI')}, {43396: Logprob(logprob=-6.9141146923357155e-06, rank=1, decoded_token=')')}, {5246: Logprob(logprob=-0.021114686504006386, rank=1, decoded_token=' that')}, {52843: Logprob(logprob=-0.1500907987356186, rank=1, decoded_token=' involves')}, {50599: Logprob(logprob=-0.2279604822397232, rank=1, decoded_token=' training')}, {55832: Logprob(logprob=-0.3876549303531647, rank=1, decoded_token=' algorithms')}], finish_reason=length, stop_reason=None)], finished=True, metrics=RequestMetrics(arrival_time=1717727044.651589, first_scheduled_time=+4.50ms, first_token_time=+31.69ms, last_token_time=+0.00ms, time_in_queue=4.50ms, finished_time=1717727044.9674897), lora_request=None)]

Prompt: 'What is Machine Learning', Generated text: 'Machine learning is a field of artificial intelligence (AI) that involves training algorithms'

2. The result generated by using the spec model (the small model, /mnt/nas/faibei/faibing-1B-Chat) from speculative decoding alone is:

$python sp_runner_example_api.py
ldd: ./libnccl.so.2: No such file or directory
Processed prompts: 100%|██████████| 1/1 [00:00<00:00, 5.29it/s]
tokens/s: 83.13196912507416
outputs: [RequestOutput(request_id=0, prompt='What is Machine Learning', prompt_token_ids=[50002, 26888, 2476, 59109, 60303, 50007], prompt_logprobs=None, outputs=[CompletionOutput(index=0, text='Machine learning is a subset of artificial intelligence (AI) that involves training algorithms', token_ids=[50006, 59109, 51081, 2476, 778, 55977, 885, 55501, 53336, 35, 6272, 43396, 5246, 52843, 50599, 55832], cumulative_logprob=-2.963461573823224, logprobs=[{50006: Logprob(logprob=0.0, rank=1, decoded_token='')}, {59109: Logprob(logprob=-0.019566968083381653, rank=1, decoded_token='Machine')}, {51081: Logprob(logprob=-0.5231714248657227, rank=1, decoded_token=' learning')}, {2476: Logprob(logprob=-0.06658891588449478, rank=1, decoded_token=' is')}, {778: Logprob(logprob=-0.011581214144825935, rank=1, decoded_token=' a')}, {55977: Logprob(logprob=-0.5235638618469238, rank=1, decoded_token=' subset')}, {885: Logprob(logprob=-0.00020358874462544918, rank=1, decoded_token=' of')}, {55501: Logprob(logprob=-0.1554061472415924, rank=1, decoded_token=' artificial')}, {53336: Logprob(logprob=-1.9192511899746023e-05, rank=1, decoded_token=' intelligence')}, {35: Logprob(logprob=-0.6489261984825134, rank=1, decoded_token=' (')}, {6272: Logprob(logprob=-9.572047565598041e-05, rank=1, decoded_token='AI')}, {43396: Logprob(logprob=-0.008335325866937637, rank=1, decoded_token=')')}, {5246: Logprob(logprob=-0.01461420301347971, rank=1, decoded_token=' that')}, {52843: Logprob(logprob=-0.3702547252178192, rank=1, decoded_token=' involves')}, {50599: Logprob(logprob=-0.4765051603317261, rank=1, decoded_token=' training')}, {55832: Logprob(logprob=-0.14462892711162567, rank=1, decoded_token=' algorithms')}], finish_reason=length, stop_reason=None)], finished=True, metrics=RequestMetrics(arrival_time=1717727261.8583694, first_scheduled_time=+3.60ms, first_token_time=+97.93ms, last_token_time=+0.00ms, time_in_queue=3.60ms, finished_time=1717727262.050356), lora_request=None)]
Prompt: 'What is Machine Learning', Generated text: 'Machine learning is a subset of artificial intelligence (AI) that involves training algorithms'

3. However, when I use the target model (/mnt/nas/faibei/faibing-10B-Chat) as the model and the small model (/mnt/nas/faibei/faibing-1B-Chat) as the speculative_model, garbled characters appear in the generated content. The results are as follows:

$python sp_runner_example_api.py
ldd: ./libnccl.so.2: No such file or directory
Processed prompts: 100%|██████████| 1/1 [00:00<00:00, 1.62it/s]
tokens/s: 25.66761954520385
outputs: [RequestOutput(request_id=0, prompt='What is Machine Learning', prompt_token_ids=[50002, 26888, 2476, 59109, 60303, 50007], prompt_logprobs=None, outputs=[CompletionOutput(index=0, text='——），ilha生活 2比较），还有没有公司olina很多我的一下的一', token_ids=[50006, 44, 65, 102517, 76, 50, 93, 65, 95, 10, 26, 95078, 63, 69, 94, 97], cumulative_logprob=0.0, logprobs=[{50006: Logprob(logprob=0.0, rank=None, decoded_token='')}, {44: Logprob(logprob=0.0, rank=None, decoded_token='——')}, {65: Logprob(logprob=0.0, rank=None, decoded_token='),')}, {102517: Logprob(logprob=0.0, rank=None, decoded_token='ilha')}, {76: Logprob(logprob=0.0, rank=None, decoded_token='生活')}, {50: Logprob(logprob=0.0, rank=None, decoded_token=' 2')}, {93: Logprob(logprob=0.0, rank=None, decoded_token='比较')}, {65: Logprob(logprob=0.0, rank=None, decoded_token='),')}, {95: Logprob(logprob=0.0, rank=None, decoded_token='还有')}, {10: Logprob(logprob=0.0, rank=None, decoded_token='没有')}, {26: Logprob(logprob=0.0, rank=None, decoded_token='公司')}, {95078: Logprob(logprob=0.0, rank=None, decoded_token='olina')}, {63: Logprob(logprob=0.0, rank=None, decoded_token='很多')}, {69: Logprob(logprob=0.0, rank=None, decoded_token='我的')}, {94: Logprob(logprob=0.0, rank=None, decoded_token='一下')}, {97: Logprob(logprob=0.0, rank=None, decoded_token='的一')}], finish_reason=length, stop_reason=None)], finished=True, metrics=RequestMetrics(arrival_time=1717727600.3066957, first_scheduled_time=+4.82ms, first_token_time=+112.33ms, last_token_time=+0.00ms, time_in_queue=4.82ms, finished_time=1717727600.929564), lora_request=None)]
Prompt: 'What is Machine Learning', Generated text: '——），ilha生活 2比较），还有没有公司olina很多我的一下的一'

@cadedaniel Sorry, you may not be able to get my model files (they are not open source yet, so you can't download them). My guess is that the token_ids are not obtained correctly during speculative sampling, or that the logprobs are not computed correctly, resulting in garbled sampling, or that there is some other cause. Please help me analyze how to solve this.
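As a first check on the token_ids theory (this is only a guess at a diagnostic, not a confirmed cause): speculative decoding assumes the draft and target models share a vocabulary, so if my two checkpoints ship different tokenizers, the draft's proposed ids would decode to unrelated text, which would look exactly like this garbling. A quick sketch using the two private paths from above:

```python
from transformers import AutoTokenizer

# Sketch: verify the draft and target checkpoints agree on the vocabulary.
target_tok = AutoTokenizer.from_pretrained(
    "/mnt/nas/faibei/faibing-10B-Chat", trust_remote_code=True)
draft_tok = AutoTokenizer.from_pretrained(
    "/mnt/nas/faibei/faibing-1B-Chat", trust_remote_code=True)

text = "What is Machine Learning？"
# If either check fails, the garbled output likely comes from a
# vocabulary mismatch rather than from the rejection sampler itself.
print("same vocab size:", len(target_tok) == len(draft_tok))
print("same encoding:", target_tok.encode(text) == draft_tok.encode(text))
```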