Closed: jparismorgan closed this issue 1 year ago
Could you try updating the mlc-ai pip package? The assertion should have been fixed by now: https://github.com/mlc-ai/relax/blob/mlc/python/tvm/dlight/gpu/gemv.py#L129-L130. So I suppose updating the pip package should resolve this issue.
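For anyone hitting the same assert before upgrading, the rough idea behind that kind of fix (this is an illustrative sketch only, not the actual tvm.dlight code) is to skip the GEMV scheduling rule instead of asserting when the reduction layout is not what the rule expects:

def try_schedule_gemv(is_reduction: bool, is_inner_reduction: bool):
    # Illustrative only: instead of `assert not (is_reduction ^ is_inner_reduction)`,
    # return None so the scheduler can fall back to another rule.
    if is_reduction ^ is_inner_reduction:
        return None
    return "gemv schedule"  # placeholder for the real schedule object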
First I have to modify fuse_split_rotary_embedding.py as specified here: https://github.com/mlc-ai/mlc-llm/issues/816#issuecomment-1694558023 - I just replace all instances of float16 with float32 in fuse_split_rotary_embedding.py.
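If it helps, the same edit can be applied with a small script. The path below is an assumption about where the file lives in an mlc-llm checkout, so adjust it to your tree:

from pathlib import Path

# Assumed location inside an mlc-llm checkout; adjust to your tree.
path = Path("mlc_llm/transform/fuse_split_rotary_embedding.py")
# Workaround from issue #816: use float32 wherever the pass hard-codes float16.
path.write_text(path.read_text().replace("float16", "float32"))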
We will also fix this.
Thank you! I had:
(mlc-llm) ~/repo/mlc-llm pip freeze > requirements.txt
annotated-types==0.5.0
anyio==4.0.0rc1
attrs==23.1.0
click==8.1.6
cloudpickle==2.2.1
decorator==5.1.1
fastapi==0.101.0
filelock==3.12.2
h11==0.14.0
idna==3.4
iniconfig==2.0.0
Jinja2==3.1.2
MarkupSafe==2.1.3
ml-dtypes==0.2.0
mlc-ai-nightly==0.12.dev1395
mlc-chat-nightly==0.1.dev347
mpmath==1.3.0
networkx==3.1
numpy==1.25.2
packaging==23.1
pluggy==1.2.0
psutil==5.9.5
pydantic==2.1.1
pydantic_core==2.4.0
pytest==7.4.0
scipy==1.11.1
shortuuid==1.0.11
sniffio==1.3.0
starlette==0.27.0
sympy==1.12
torch==2.0.1
tornado==6.3.2
typing_extensions==4.7.1
uvicorn==0.23.2
Then I upgraded:
pip install --pre --force-reinstall mlc-ai-nightly mlc-chat-nightly -f https://mlc.ai/wheels
annotated-types==0.5.0
anyio==4.0.0rc1
attrs==23.1.0
click==8.1.7
cloudpickle==2.2.1
decorator==5.1.1
fastapi==0.103.0
filelock==3.12.2
h11==0.14.0
idna==3.4
iniconfig==2.0.0
Jinja2==3.1.2
MarkupSafe==2.1.3
ml-dtypes==0.2.0
mlc-ai-nightly==0.12.dev1398
mlc-chat-nightly==0.1.dev389
mpmath==1.3.0
networkx==3.1
numpy==1.26.0b1
packaging==23.1
pluggy==1.2.0
psutil==5.9.5
pydantic==2.3.0
pydantic_core==2.6.3
pytest==7.4.0
scipy==1.11.2
shortuuid==1.0.11
sniffio==1.3.0
starlette==0.27.0
sympy==1.12
torch==2.0.1
tornado==6.3.3
typing_extensions==4.7.1
uvicorn==0.23.2
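To double-check that the upgraded nightly is the one actually being imported, a quick sanity check like this can help (it simply reuses the tvm.support.libinfo() call from the Environment section of the report):

import tvm

# Show which TVM-Unity installation is on the import path,
# then dump its build info (same output as in the Environment section).
print("tvm imported from:", tvm.__file__)
for key, value in tvm.support.libinfo().items():
    print(f"{key}: {value}")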
And I can build okay:
(mlc-llm) ~/repo/mlc-llm python3 -m mlc_llm.build --hf-path meta-llama/Llama-2-7b-chat-hf --target webgpu --quantization q4f32_0
Weights exist at dist/models/Llama-2-7b-chat-hf, skipping download.
Using path "dist/models/Llama-2-7b-chat-hf" for model "Llama-2-7b-chat-hf"
Target configured: webgpu -keys=webgpu,gpu -max_num_threads=256
Load cached module from dist/Llama-2-7b-chat-hf-q4f32_0/mod_cache_before_build.pkl and skip tracing. You can use --use-cache=0 to retrace
[16:33:47] /Users/catalyst/Workspace/mlc-ai-package-self-runner/_work/package/package/tvm/src/target/llvm/codegen_llvm.cc:185: Warning: Set native vector bits to be 128 for wasm32
Finish exporting to dist/Llama-2-7b-chat-hf-q4f32_0/Llama-2-7b-chat-hf-q4f32_0-webgpu.wasm
I will make sure to look at that repo for fixes before future bug reports, sorry for the spam!
Glad that upgrading works. No worries since it is pretty minor :-)
🐛 Bug
When compiling with
python3 -m mlc_llm.build --hf-path meta-llama/Llama-2-7b-chat-hf --target webgpu --quantization q4f32_0
I get assert not (is_reduction ^ is_inner_reduction).
To Reproduce
Steps to reproduce the behavior:
First I have to modify fuse_split_rotary_embedding.py as specified here: https://github.com/mlc-ai/mlc-llm/issues/816#issuecomment-1694558023 - I just replace all instances of float16 with float32 in fuse_split_rotary_embedding.py.
I then try to compile llama2:
python3 -m mlc_llm.build --hf-path meta-llama/Llama-2-7b-chat-hf --target webgpu --quantization q4f32_0
Expected behavior
I can compile and then run.
Environment
How you installed MLC-LLM (conda, source): source
How you installed TVM-Unity (pip, source): pip
TVM Unity Hash Tag (python -c "import tvm; print('\n'.join(f'{k}: {v}' for k, v in tvm.support.libinfo().items()))", applicable if you compile models):
USE_GTEST: AUTO
SUMMARIZE: OFF
USE_IOS_RPC: OFF
USE_ETHOSU:
CUDA_VERSION: NOT-FOUND
USE_LIBBACKTRACE: AUTO
DLPACK_PATH: 3rdparty/dlpack/include
USE_TENSORRT_CODEGEN: OFF
USE_THRUST: OFF
USE_TARGET_ONNX: OFF
USE_AOT_EXECUTOR: ON
BUILD_DUMMY_LIBTVM: OFF
USE_CUDNN: OFF
USE_TENSORRT_RUNTIME: OFF
USE_ARM_COMPUTE_LIB_GRAPH_EXECUTOR: OFF
USE_CCACHE: AUTO
USE_ARM_COMPUTE_LIB: OFF
USE_CPP_RTVM:
USE_OPENCL_GTEST: /path/to/opencl/gtest
USE_MKL: OFF
USE_PT_TVMDSOOP: OFF
USE_CLML: OFF
USE_STACKVM_RUNTIME: OFF
USE_GRAPH_EXECUTOR_CUDA_GRAPH: OFF
ROCM_PATH: /opt/rocm
USE_DNNL: OFF
USE_VITIS_AI: OFF
USE_LLVM: llvm-config --link-static
USE_VERILATOR: OFF
USE_TF_TVMDSOOP: OFF
USE_THREADS: ON
USE_MSVC_MT: OFF
BACKTRACE_ON_SEGFAULT: OFF
USE_GRAPH_EXECUTOR: ON
USE_ROCBLAS: OFF
GIT_COMMIT_HASH: 2b204c39b53912814edc3f07e88919a5c76d00cf
USE_VULKAN: OFF
USE_RUST_EXT: OFF
USE_CUTLASS: OFF
USE_CPP_RPC: OFF
USE_HEXAGON: OFF
USE_CUSTOM_LOGGING: OFF
USE_UMA: OFF
USE_FALLBACK_STL_MAP: OFF
USE_SORT: ON
USE_RTTI: ON
GIT_COMMIT_TIME: 2023-08-08 17:21:25 -0400
USE_HEXAGON_SDK: /path/to/sdk
USE_BLAS: none
USE_ETHOSN: OFF
USE_LIBTORCH: OFF
USE_RANDOM: ON
USE_CUDA: OFF
USE_COREML: OFF
USE_AMX: OFF
BUILD_STATIC_RUNTIME: OFF
USE_CMSISNN: OFF
USE_KHRONOS_SPIRV: OFF
USE_CLML_GRAPH_EXECUTOR: OFF
USE_TFLITE: OFF
USE_HEXAGON_GTEST: /path/to/hexagon/gtest
PICOJSON_PATH: 3rdparty/picojson
USE_OPENCL_ENABLE_HOST_PTR: OFF
INSTALL_DEV: OFF
USE_PROFILER: ON
USE_NNPACK: OFF
LLVM_VERSION: 15.0.7
USE_OPENCL: OFF
COMPILER_RT_PATH: 3rdparty/compiler-rt
RANG_PATH: 3rdparty/rang/include
USE_SPIRV_KHR_INTEGER_DOT_PRODUCT: OFF
USE_OPENMP: OFF
USE_BNNS: OFF
USE_CUBLAS: OFF
USE_METAL: ON
USE_MICRO_STANDALONE_RUNTIME: OFF
USE_HEXAGON_EXTERNAL_LIBS: OFF
USE_ALTERNATIVE_LINKER: AUTO
USE_BYODT_POSIT: OFF
USE_HEXAGON_RPC: OFF
USE_MICRO: OFF
DMLC_PATH: 3rdparty/dmlc-core/include
INDEX_DEFAULT_I64: ON
USE_RELAY_DEBUG: OFF
USE_RPC: ON
USE_TENSORFLOW_PATH: none
TVM_CLML_VERSION:
USE_MIOPEN: OFF
USE_ROCM: OFF
USE_PAPI: OFF
USE_CURAND: OFF
TVM_CXX_COMPILER_PATH: /Library/Developer/CommandLineTools/usr/bin/c++
HIDE_PRIVATE_SYMBOLS: ON
Additional context
Thank you!