mlc-ai / mlc-llm

Universal LLM Deployment Engine with ML Compilation
https://llm.mlc.ai/
Apache License 2.0

[Bug] The output of speculative decoding is inconsistent with the output of a single model #2167

Closed DearFishi closed 6 months ago

DearFishi commented 7 months ago

🐛 Bug

The output of speculative decoding is inconsistent with the output of a single model

Speculative decoding with Llama-2-7b-chat-hf-q0f32 as the target model and Llama-2-7b-chat-hf-q4f16_1 as the SSM (small draft model):

Prompt 0: What is the meaning of life?
Output 0: What is the purpose of life? What is the meaning of existence? These are some of the most fundamental questions that have puzzled philosophers, the

Single-model Llama-2-7b-chat-hf-q0f32:

Prompt 0: What is the meaning of life?
Output 0: What is the purpose of life? What is the meaning of existence? These are questions that have puzzled philosophers, theologians, scientists

To Reproduce

Steps to reproduce the behavior: use the script from https://github.com/mlc-ai/mlc-llm/blob/main/tests/python/serve/test_serve_engine_spec.py with the following configuration:

# Imports and the `prompts` list are taken from the referenced test script
# (tests/python/serve/test_serve_engine_spec.py).

# Create the speculative-decoding engine:
# q0f32 is the target model, q4f16_1 is the small draft model (SSM).
model = "ckpt/mlc-llm-weight/Llama-2-7b-chat-hf-q0f32-MLC"
model_lib_path = "ckpt/mlc-llm-libs/Llama-2-7b-chat-hf-q0f32-cuda.so"
small_model = "ckpt/mlc-llm-weight/Llama-2-7b-chat-hf-q4f16_1-MLC"
small_model_lib_path = (
    "ckpt/mlc-llm-libs/Llama-2-7b-chat-hf-q4f16_1-cuda.so"
)
engine = SyncLLMEngine(
    model=model,
    model_lib_path=model_lib_path,
    mode="server",
    max_total_sequence_length=4096,
    additional_models=[small_model + ":" + small_model_lib_path],
    engine_config=EngineConfig(speculative_mode=SpeculativeMode.SMALL_DRAFT, spec_draft_length=4),
)

# model = "ckpt/mlc-llm-weight/Llama-2-7b-chat-hf-q0f32-MLC"
# model_lib_path = "ckpt/mlc-llm-libs/Llama-2-7b-chat-hf-q0f32-cuda.so"
# engine = SyncLLMEngine(
#     model=model,
#     model_lib_path=model_lib_path,
#     mode="server",
#     max_total_sequence_length=4096,
# )

num_requests = 1

# Generate output.
output_texts, _ = engine.generate(
    prompts[:num_requests], GenerationConfig(temperature=0.0, top_p=0, seed=42, max_tokens=30, stop_token_ids=[2], n=1)
)
for req_id, outputs in enumerate(output_texts):
    print(f"Prompt {req_id}: {prompts[req_id]}")
    if len(outputs) == 1:
        print(f"Output {req_id}:{outputs[0]}\n")
    else:
        for i, output in enumerate(outputs):
            print(f"Output {req_id}({i}):{output}\n")

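A quick way to see where the two runs diverge, once both output strings have been captured (a minimal sketch; `spec_output` and `baseline_output` are placeholder names for the strings printed above, not identifiers from the script):

def first_divergence(spec_output: str, baseline_output: str) -> int:
    # Return the index of the first differing character, or -1 if the strings are identical.
    for i, (a, b) in enumerate(zip(spec_output, baseline_output)):
        if a != b:
            return i
    if len(spec_output) != len(baseline_output):
        return min(len(spec_output), len(baseline_output))
    return -1

spec_output = "What is the purpose of life? What is the meaning of existence? These are some of the most fundamental questions that have puzzled philosophers, the"
baseline_output = "What is the purpose of life? What is the meaning of existence? These are questions that have puzzled philosophers, theologians, scientists"
idx = first_divergence(spec_output, baseline_output)
print(f"First divergence at character {idx}: {spec_output[idx:idx+20]!r} vs {baseline_output[idx:idx+20]!r}")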
Expected behavior

The output of speculative decoding should be consistent with the output of a single model: with temperature=0.0 and a fixed seed, both runs should produce identical text.

Environment

Additional context

If I change the code on lines 133-134 of https://github.com/mlc-ai/mlc-llm/blob/main/cpp/serve/engine_actions/batch_verify.cc as shown in the attached screenshot (20240419-124130), the result is correct, e.g.:

Prompt 0: What is the meaning of life?
Output 0: What is the purpose of life? What is the meaning of existence? These are questions that have puzzled philosophers, theologians, scientists, and every
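For context, with temperature=0.0 the verification step should reduce to a greedy check: a drafted token is accepted only if it matches the argmax of the target model's output at that position, and on the first mismatch the target model's own token is taken instead. A minimal Python sketch of that rule (illustrative only, not the actual C++ implementation in batch_verify.cc):

import numpy as np

def greedy_verify(draft_tokens, target_logits):
    # draft_tokens:  token ids proposed by the small draft model
    # target_logits: array of shape (len(draft_tokens), vocab_size) from the target model
    accepted = []
    for token, logits in zip(draft_tokens, target_logits):
        target_token = int(np.argmax(logits))
        if token == target_token:
            accepted.append(token)
        else:
            # Reject the rest of the draft and fall back to the target model's choice,
            # which keeps the output identical to single-model greedy decoding.
            accepted.append(target_token)
            break
    return accepted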

MasterJH5574 commented 7 months ago

Thank you @DearFishi for reporting! It looks like a bug. Would you mind sending a fix for this after confirming the fix can work for longer output?

DearFishi commented 7 months ago

> Thank you @DearFishi for reporting! It looks like a bug. Would you mind sending a fix for this after confirming the fix can work for longer output?

Thanks for your review, I'll do it.

jpf888 commented 7 months ago

@MasterJH5574
Hello, are there any plans to support Medusa speculative decoding in the serve engine in the future?

MasterJH5574 commented 6 months ago

Thank you @jpf888 for bringing this up. Medusa-mode speculative decoding is on the longer-term roadmap, but as of now we do not have plans to work on it very soon. You are more than welcome to contribute to the project :-)

Given that the original issue has been resolved, I am going to close it. You can create a new feature-request issue for Medusa-mode speculative decoding.