MMuzzammil1 opened this issue 6 months ago
i can't reproduce this on Intel hardware, but i was hopeful =)
i had to write my own benchmark because the mlc_llm bench command was removed.
i averaged the results of 5 sequential benchmarks.
i set max_tokens=1000 to try to smooth out results a bit.
hardware: Intel Arc A770 16GB
model tested: https://huggingface.co/RLHFlow/LLaMA3-iterative-DPO-final
quantization: q4f16_1
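a rough sketch of what such a benchmark can look like (not the exact script behind the numbers below; it assumes a model is already running behind `mlc_llm serve`, which exposes an OpenAI-compatible /v1/chat/completions endpoint on localhost, and that the non-streaming response carries a usage field; the URL, model id, and prompt below are placeholders):

```python
# rough sketch, not the script that produced the numbers below.
# assumes `mlc_llm serve <model>` is already running locally and that the
# non-streaming response includes an OpenAI-style "usage" field.
import time
import requests

URL = "http://127.0.0.1:8000/v1/chat/completions"  # placeholder endpoint
PROMPT = "Write a short story about a robot learning to paint."  # placeholder prompt

def bench_once():
    start = time.time()
    resp = requests.post(URL, json={
        "model": "LLaMA3-iterative-DPO-final-q4f16_1",  # placeholder model id
        "messages": [{"role": "user", "content": PROMPT}],
        "max_tokens": 1000,  # large cap to smooth out run-to-run variance
    }, timeout=600).json()
    elapsed = time.time() - start
    tokens = resp["usage"]["completion_tokens"]
    speed = tokens / elapsed
    print(f"(completion tokens: {tokens}) Completion tokens/sec = {speed:.2f}")
    return tokens, speed

results = [bench_once() for _ in range(5)]
print("mean tokens =", sum(t for t, _ in results) / 5,
      ", mean speed =", sum(s for _, s in results) / 5)
```

note that timing the whole request folds prefill into the denominator, so this slightly understates pure decode tokens/sec.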
results with standard compile from nightly wheel (0.1.dev1287):
(completion tokens: 694) Completion tokens/sec = 29.19
(completion tokens: 558) Completion tokens/sec = 28.68
(completion tokens: 462) Completion tokens/sec = 30.96
(completion tokens: 576) Completion tokens/sec = 30.12
(completion tokens: 717) Completion tokens/sec = 29.36
mean tokens = 601.4, mean speed = 29.662
results without tvm.relax.transform.FuseOps() and tvm.relax.transform.FuseTIR():
(completion tokens: 475) Completion tokens/sec = 26.87
(completion tokens: 691) Completion tokens/sec = 27.98
(completion tokens: 568) Completion tokens/sec = 29.80
(completion tokens: 504) Completion tokens/sec = 30.13
(completion tokens: 434) Completion tokens/sec = 30.68
mean tokens = 534.4, mean speed = 29.092
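as a quick sanity check on the means quoted above (values copied straight from the two runs):

```python
# values copied from the two benchmark runs above
fused   = [29.19, 28.68, 30.96, 30.12, 29.36]  # with FuseOps/FuseTIR
unfused = [26.87, 27.98, 29.80, 30.13, 30.68]  # without FuseOps/FuseTIR

mean = lambda xs: sum(xs) / len(xs)
print(mean(fused), mean(unfused))                     # 29.662 29.092
print((mean(fused) - mean(unfused)) / mean(unfused))  # ~0.0196, fused build ~2% faster here
```

so on this A770 run the fused build comes out roughly 2% faster, i.e. the reported slowdown doesn't show up here.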
Hi @0xDEADFED5. I created this issue for the "Phi-2" model (https://huggingface.co/microsoft/phi-2). Not sure about the behaviour of Llama-3.
yes, i know, just adding more data. i'm on cellular as my only internet so I can't test that model.
i bet if you did more benchmarks with more tokens your numbers would stabilize
I'll run the benchmarks to check that. But @0xDEADFED5, isn't the decode speed at least independent of the prompt input?
🐛 Bug
When I compile Phi-2 (https://huggingface.co/microsoft/phi-2) with the tvm.relax.transform.FuseOps() and tvm.relax.transform.FuseTIR() transformations commented out (https://github.com/mlc-ai/mlc-llm/blob/main/python/mlc_llm/compiler_pass/pipeline.py#L128), I get better prefill and decode speeds on CUDA.

To Reproduce
mlc_llm compile Phi2/phi-2-q4f16_1-MLC/mlc-chat-config.json --device cuda -o Phi2/phi-2-q4f16_1-MLC/phi-2-q4f16_1-cuda.so
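For context, a minimal sketch of the change being benchmarked: the real pass list in python/mlc_llm/compiler_pass/pipeline.py is much longer, and the experiment simply comments out the two fusion passes. The surrounding passes below are representative relax passes, not the exact mlc_llm pipeline:

```python
import tvm
from tvm import relax

# Illustrative sketch only: builds a relax pass pipeline with or without
# operator fusion. The actual mlc_llm pipeline contains many more passes.
def build_pipeline(enable_fusion: bool = True) -> tvm.ir.transform.Sequential:
    passes = [
        relax.transform.LegalizeOps(),           # lower relax ops to TIR
        relax.transform.AnnotateTIROpPattern(),  # tag kernels with fusion pattern kinds
        relax.transform.FoldConstant(),
    ]
    if enable_fusion:
        passes.append(relax.transform.FuseOps())  # group fusable ops into composite functions
        passes.append(relax.transform.FuseTIR())  # merge each group into a single PrimFunc
    return tvm.ir.transform.Sequential(passes)
```

FuseOps groups adjacent fusable operators into composite functions and FuseTIR merges each group into a single PrimFunc, so disabling both would normally be expected to add kernel launches and memory traffic rather than remove them.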
With tvm.relax.transform.FuseOps() and tvm.relax.transform.FuseTIR():

[results from the original issue not captured in this extract]

Without tvm.relax.transform.FuseOps() and tvm.relax.transform.FuseTIR():

[results from the original issue not captured in this extract]

Expected behavior
I believe that the expected behavior should be faster performance when the FuseOps transformation is performed on the IR.

Environment
python -c "import tvm; print('\n'.join(f'{k}: {v}' for k, v in tvm.support.libinfo().items()))"
, applicable if you compile models):