mlc-ai / mlc-llm

Universal LLM Deployment Engine with ML Compilation
https://llm.mlc.ai/
Apache License 2.0

Phi-2 q4f16_1 runs faster when compiled without `tvm.relax.transform.FuseOps()` and `tvm.relax.transform.FuseTIR()` transformations #2405

Open MMuzzammil1 opened 4 months ago

MMuzzammil1 commented 4 months ago

🐛 Bug

When I compile Phi-2 (https://huggingface.co/microsoft/phi-2) with the `tvm.relax.transform.FuseOps()` and `tvm.relax.transform.FuseTIR()` passes commented out (https://github.com/mlc-ai/mlc-llm/blob/main/python/mlc_llm/compiler_pass/pipeline.py#L128), I get better prefill and decode throughput on CUDA.
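For context, here is a minimal, self-contained sketch of what skipping the two passes amounts to. This is not the mlc_llm pipeline itself (pipeline.py contains many additional passes around these two); the toy model and pass ordering are assumptions for illustration only:

```python
# Sketch only: illustrates FuseOps/FuseTIR being applied vs. skipped on a toy
# Relax module. The pass ordering is an assumption based on common Relax
# pipelines, not a copy of mlc_llm's compiler_pass/pipeline.py.
import tvm
from tvm import relax
from tvm.script import relax as R


@tvm.script.ir_module
class ToyModel:
    @R.function
    def main(
        x: R.Tensor((1, 16), "float16"), w: R.Tensor((16, 16), "float16")
    ) -> R.Tensor((1, 16), "float16"):
        with R.dataflow():
            y = R.matmul(x, w)
            z = R.nn.gelu(y)
            R.output(z)
        return z


with_fusion = tvm.transform.Sequential(
    [
        relax.transform.LegalizeOps(),           # lower relax ops to TIR PrimFuncs
        relax.transform.AnnotateTIROpPattern(),  # tag PrimFuncs with fusion patterns
        relax.transform.FuseOps(),               # group compatible ops (e.g. matmul + gelu)
        relax.transform.FuseTIR(),               # merge each group into a single TIR kernel
    ]
)

without_fusion = tvm.transform.Sequential(
    [
        relax.transform.LegalizeOps(),
        # FuseOps / FuseTIR omitted, mirroring the modified pipeline in this report
    ]
)

with tvm.transform.PassContext(opt_level=3):
    fused_mod = with_fusion(ToyModel)
    unfused_mod = without_fusion(ToyModel)

print(fused_mod)    # matmul and gelu merged into one kernel
print(unfused_mod)  # matmul and gelu remain separate kernels
```

With fusion, the matmul and the elementwise gelu end up in a single TIR kernel; without it, each op stays a separate kernel launch, which is why fusion is normally expected to help.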

To Reproduce

With `tvm.relax.transform.FuseOps()` and `tvm.relax.transform.FuseTIR()`:

```
Statistics: ----------- prefill -----------
throughput: 283.674 tok/s
total tokens: 12 tok
total time: 0.042 s
------------ decode ------------
throughput: 101.508 tok/s
total tokens: 31 tok
total time: 0.305 s
```

Without `tvm.relax.transform.FuseOps()` and `tvm.relax.transform.FuseTIR()`:

```
Statistics: ----------- prefill -----------
throughput: 291.720 tok/s
total tokens: 12 tok
total time: 0.041 s
------------ decode ------------
throughput: 129.715 tok/s
total tokens: 31 tok
total time: 0.239 s
```

Expected behavior

I believe the expected behavior should be faster performance when the FuseOps transformation is performed on the IR.

Environment

0xDEADFED5 commented 4 months ago

I can't reproduce this on Intel hardware, but I was hopeful =) I had to write my own benchmark because the `mlc_llm bench` command was removed. I averaged the results of 5 sequential benchmarks and set `max_tokens=1000` to try to smooth out the results a bit.

Hardware: Intel Arc A770 16GB
Model tested: https://huggingface.co/RLHFlow/LLaMA3-iterative-DPO-final
Quantization: q4f16_1
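A benchmark along these lines can be put together with the OpenAI-compatible `MLCEngine` Python API. This is only a sketch: the model path is a placeholder, and it assumes `usage.completion_tokens` is populated on non-streaming responses.

```python
# Sketch of a completion-throughput benchmark using mlc_llm's MLCEngine.
# MODEL is a placeholder; point it at the locally compiled model/library.
import time

from mlc_llm import MLCEngine

MODEL = "HF://mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC"  # placeholder model id
RUNS = 5

engine = MLCEngine(MODEL)
speeds, token_counts = [], []

for _ in range(RUNS):
    start = time.perf_counter()
    response = engine.chat.completions.create(
        messages=[{"role": "user", "content": "Write a short story about a robot."}],
        model=MODEL,
        max_tokens=1000,  # long completions smooth out run-to-run noise
        stream=False,
    )
    elapsed = time.perf_counter() - start
    # Assumes token usage is reported on non-streaming responses.
    tokens = response.usage.completion_tokens
    speeds.append(tokens / elapsed)
    token_counts.append(tokens)
    print(f"(completion tokens: {tokens}) Completion tokens/sec = {tokens / elapsed:6.2f}")

print(f"mean tokens = {sum(token_counts) / RUNS}, mean speed = {sum(speeds) / RUNS:.3f}")
engine.terminate()
```

Note that the measured rate includes prefill time for the prompt, so with a short prompt and long completions it approximates decode throughput.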

Results with the standard compile from the nightly wheel (0.1.dev1287):

```
(completion tokens: 694) Completion tokens/sec =  29.19
(completion tokens: 558) Completion tokens/sec =  28.68
(completion tokens: 462) Completion tokens/sec =  30.96
(completion tokens: 576) Completion tokens/sec =  30.12
(completion tokens: 717) Completion tokens/sec =  29.36
mean tokens = 601.4, mean speed = 29.662
```

Results without `tvm.relax.transform.FuseOps()` and `tvm.relax.transform.FuseTIR()`:

```
(completion tokens: 475) Completion tokens/sec =  26.87
(completion tokens: 691) Completion tokens/sec =  27.98
(completion tokens: 568) Completion tokens/sec =  29.80
(completion tokens: 504) Completion tokens/sec =  30.13
(completion tokens: 434) Completion tokens/sec =  30.68
mean tokens = 534.4, mean speed = 29.092
```

MMuzzammil1 commented 4 months ago

Hi @0xDEADFED5. I created this issue for the Phi-2 model (https://huggingface.co/microsoft/phi-2); I'm not sure about the behaviour of Llama-3.

0xDEADFED5 commented 4 months ago

Yes, I know, just adding more data. I'm on cellular as my only internet, so I can't test that model.

I bet that if you ran more benchmarks with more tokens, your numbers would stabilize.

MMuzzammil1 commented 4 months ago

I'll run the benchmarks to check that. But @0xDEADFED5, isn't the decode speed at least independent of the prompt input?