mlc-ai / mlc-llm

Universal LLM Deployment Engine with ML Compilation
https://llm.mlc.ai/
Apache License 2.0

[Bug] Compiling Llama 2 with WebGPU `q4f32_0` I get `assert not (is_reduction ^ is_inner_reduction)` #839

Closed: jparismorgan closed this issue 1 year ago

jparismorgan commented 1 year ago

πŸ› Bug

When compiling with `python3 -m mlc_llm.build --hf-path meta-llama/Llama-2-7b-chat-hf --target webgpu --quantization q4f32_0` I get `assert not (is_reduction ^ is_inner_reduction)`.

To Reproduce

Steps to reproduce the behavior:

  1. First I have to modify `fuse_split_rotary_embedding.py` as described in https://github.com/mlc-ai/mlc-llm/issues/816#issuecomment-1694558023: I just replace all instances of `float16` with `float32` in that file (a sketch of this replacement is included after the traceback below).

  2. I then try to compile llama2:

    (mlc-llm) ~/repo/mlc-llm python3 -m mlc_llm.build --hf-path meta-llama/Llama-2-7b-chat-hf --target webgpu --quantization q4f32_0
    Weights exist at dist/models/Llama-2-7b-chat-hf, skipping download.
    Using path "dist/models/Llama-2-7b-chat-hf" for model "Llama-2-7b-chat-hf"
    Target configured: webgpu -keys=webgpu,gpu -max_num_threads=256
    Load cached module from dist/Llama-2-7b-chat-hf-q4f32_0/mod_cache_before_build.pkl and skip tracing. You can use --use-cache=0 to retrace
    Traceback (most recent call last):
    File "<frozen runpy>", line 198, in _run_module_as_main
    File "<frozen runpy>", line 88, in _run_code
    File "/Users/parismorgan/repo/mlc-llm/mlc_llm/build.py", line 13, in <module>
    main()
    File "/Users/parismorgan/repo/mlc-llm/mlc_llm/build.py", line 10, in main
    core.build_model_from_args(parsed_args)
    File "/Users/parismorgan/repo/mlc-llm/mlc_llm/core.py", line 596, in build_model_from_args
    build(mod, args)
    File "/Users/parismorgan/repo/mlc-llm/mlc_llm/core.py", line 496, in build
    mod_deploy = dl.ApplyDefaultSchedule(  # pylint: disable=not-callable
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/Users/parismorgan/virtualenvs/mlc-llm/lib/python3.11/site-packages/tvm/ir/transform.py", line 238, in __call__
    return _ffi_transform_api.RunPass(self, mod)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "tvm/_ffi/_cython/./packed_func.pxi", line 331, in tvm._ffi._cy3.core.PackedFuncBase.__call__
    File "tvm/_ffi/_cython/./packed_func.pxi", line 262, in tvm._ffi._cy3.core.FuncCall
    File "tvm/_ffi/_cython/./packed_func.pxi", line 251, in tvm._ffi._cy3.core.FuncCall3
    File "tvm/_ffi/_cython/./base.pxi", line 181, in tvm._ffi._cy3.core.CHECK_CALL
    tvm._ffi.base.TVMError: Traceback (most recent call last):
    File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
    File "/Users/parismorgan/virtualenvs/mlc-llm/lib/python3.11/site-packages/tvm/ir/transform.py", line 307, in _pass_func
    return inst.transform_module(mod, ctx)
    File "/Users/parismorgan/virtualenvs/mlc-llm/lib/python3.11/site-packages/tvm/dlight/base/transform.py", line 64, in transform_module
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    sch = _apply_rules(func, target, self.rules, tunable=False)
    File "/Users/parismorgan/virtualenvs/mlc-llm/lib/python3.11/site-packages/tvm/dlight/base/transform.py", line 80, in _apply_rules
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    space = rule.apply(func, target, tunable)
    File "/Users/parismorgan/virtualenvs/mlc-llm/lib/python3.11/site-packages/tvm/dlight/gpu/gemv.py", line 185, in apply
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    is_inner_reduction = normalize(sch, block_info)
    File "/Users/parismorgan/virtualenvs/mlc-llm/lib/python3.11/site-packages/tvm/dlight/gpu/gemv.py", line 129, in normalize
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^
    assert not (is_reduction ^ is_inner_reduction)
    TVMError: AssertionError
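
For reference, the replacement from step 1 can be scripted instead of done by hand. This is only a sketch: the path below is an assumption about where `fuse_split_rotary_embedding.py` lives in the mlc-llm checkout, so adjust it as needed.

```python
# Sketch of the manual edit from step 1: replace every float16 with float32 in
# fuse_split_rotary_embedding.py. The path is an assumption about the layout of
# the mlc-llm checkout; adjust it if the file lives elsewhere.
from pathlib import Path

path = Path("mlc_llm/transform/fuse_split_rotary_embedding.py")
path.write_text(path.read_text().replace("float16", "float32"))
```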

Expected behavior

I can compile the model and then run it.

Environment

Additional context

Thank you!

MasterJH5574 commented 1 year ago

Could you try updating the mlc-ai pip package? The assertion has since been fixed upstream (https://github.com/mlc-ai/relax/blob/mlc/python/tvm/dlight/gpu/gemv.py#L129-L130), so I expect that updating the pip package will resolve this issue.
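
For anyone still on an older wheel, the linked lines suggest the scheduling rule now opts out gracefully instead of asserting. Below is a minimal sketch of that pattern, as an illustration only, not the actual upstream code:

```python
# Minimal sketch (my assumption of the general shape, not the actual upstream
# code) of turning the failing assertion into a graceful "rule does not apply"
# result, which is what the linked gemv.py lines appear to do.
from typing import Optional

def reduction_kind(is_reduction: bool, is_inner_reduction: bool) -> Optional[bool]:
    # Old behavior: `assert not (is_reduction ^ is_inner_reduction)` raised
    # TVMError: AssertionError whenever exactly one of the flags was True.
    if is_reduction ^ is_inner_reduction:
        return None  # signal "skip this schedule rule" instead of crashing
    return is_inner_reduction

# The combination that used to trip the assert now just opts out:
print(reduction_kind(True, False))  # None
print(reduction_kind(True, True))   # True
```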

MasterJH5574 commented 1 year ago

> First I have to modify fuse_split_rotary_embedding.py as specified here: https://github.com/mlc-ai/mlc-llm/issues/816#issuecomment-1694558023 - I just replace all instances of float16 with float32 in fuse_split_rotary_embedding.py.

We will also fix this.

jparismorgan commented 1 year ago

Thank you! I had:

(mlc-llm) ~/repo/mlc-llm pip freeze > requirements.txt

annotated-types==0.5.0
anyio==4.0.0rc1
attrs==23.1.0
click==8.1.6
cloudpickle==2.2.1
decorator==5.1.1
fastapi==0.101.0
filelock==3.12.2
h11==0.14.0
idna==3.4
iniconfig==2.0.0
Jinja2==3.1.2
MarkupSafe==2.1.3
ml-dtypes==0.2.0
mlc-ai-nightly==0.12.dev1395
mlc-chat-nightly==0.1.dev347
mpmath==1.3.0
networkx==3.1
numpy==1.25.2
packaging==23.1
pluggy==1.2.0
psutil==5.9.5
pydantic==2.1.1
pydantic_core==2.4.0
pytest==7.4.0
scipy==1.11.1
shortuuid==1.0.11
sniffio==1.3.0
starlette==0.27.0
sympy==1.12
torch==2.0.1
tornado==6.3.2
typing_extensions==4.7.1
uvicorn==0.23.2

Then I upgraded:

pip install --pre --force-reinstall mlc-ai-nightly mlc-chat-nightly -f https://mlc.ai/wheels

annotated-types==0.5.0
anyio==4.0.0rc1
attrs==23.1.0
click==8.1.7
cloudpickle==2.2.1
decorator==5.1.1
fastapi==0.103.0
filelock==3.12.2
h11==0.14.0
idna==3.4
iniconfig==2.0.0
Jinja2==3.1.2
MarkupSafe==2.1.3
ml-dtypes==0.2.0
mlc-ai-nightly==0.12.dev1398
mlc-chat-nightly==0.1.dev389
mpmath==1.3.0
networkx==3.1
numpy==1.26.0b1
packaging==23.1
pluggy==1.2.0
psutil==5.9.5
pydantic==2.3.0
pydantic_core==2.6.3
pytest==7.4.0
scipy==1.11.2
shortuuid==1.0.11
sniffio==1.3.0
starlette==0.27.0
sympy==1.12
torch==2.0.1
tornado==6.3.3
typing_extensions==4.7.1
uvicorn==0.23.2
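
A quick way to double-check which nightly versions are actually being picked up (package names taken from the list above):

```python
# Confirm which nightly builds are actually installed (package names taken
# from the pip freeze output above).
from importlib.metadata import version

for pkg in ("mlc-ai-nightly", "mlc-chat-nightly"):
    print(pkg, version(pkg))
```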

And I can build okay:

(mlc-llm) ~/repo/mlc-llm python3 -m mlc_llm.build --hf-path meta-llama/Llama-2-7b-chat-hf --target webgpu --quantization q4f32_0 
Weights exist at dist/models/Llama-2-7b-chat-hf, skipping download.
Using path "dist/models/Llama-2-7b-chat-hf" for model "Llama-2-7b-chat-hf"
Target configured: webgpu -keys=webgpu,gpu -max_num_threads=256
Load cached module from dist/Llama-2-7b-chat-hf-q4f32_0/mod_cache_before_build.pkl and skip tracing. You can use --use-cache=0 to retrace
[16:33:47] /Users/catalyst/Workspace/mlc-ai-package-self-runner/_work/package/package/tvm/src/target/llvm/codegen_llvm.cc:185: Warning: Set native vector bits to be 128 for wasm32
Finish exporting to dist/Llama-2-7b-chat-hf-q4f32_0/Llama-2-7b-chat-hf-q4f32_0-webgpu.wasm
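
One note in case it helps others: both runs above load the cached module from dist/Llama-2-7b-chat-hf-q4f32_0/mod_cache_before_build.pkl. If a build ever needs to pick up source-level changes (for example the `fuse_split_rotary_embedding.py` edit from step 1) instead of the cache, re-running the same `python3 -m mlc_llm.build ...` command with `--use-cache=0` should retrace from scratch, as the log suggests.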

I'll make sure to check that repo for fixes before filing future bug reports, sorry for the spam!

MasterJH5574 commented 1 year ago

Glad that upgrading works. No worries since it is pretty minor :-)