mlc-ai / mlc-llm

Universal LLM Deployment Engine with ML Compilation
https://llm.mlc.ai/
Apache License 2.0

[Bug] AttributeError: 'Quantization' object has no attribute 'embedding_table' #508

Closed · wangxu569 closed this 1 year ago

wangxu569 commented 1 year ago

🐛 Bug

When I use 8-bit quantization, this error occurs; other quantization methods do not have this problem.

To Reproduce

Steps to reproduce the behavior:

1. Download the vicuna-7b-delta-v1.1 model files from https://huggingface.co/lmsys/vicuna-7b-delta-v1.1 and save them to dist/models/vicuna-7b-delta-v1.1
2. Run python3 build.py --model vicuna-7b-delta-v1.1 --target cuda --quantization q8f16_0

Output:

```
Using path "dist/models/vicuna-7b-delta-v1.1" for model "vicuna-7b-delta-v1.1"
Database paths: ['log_db/redpajama-3b-q4f16', 'log_db/rwkv-raven-3b', 'log_db/dolly-v2-3b', 'log_db/redpajama-3b-q4f32', 'log_db/rwkv-raven-1b5', 'log_db/vicuna-v1-7b', 'log_db/rwkv-raven-7b']
Target configured: cuda -keys=cuda,gpu -arch=sm_86 -max_num_threads=1024 -thread_warp_size=32
Traceback (most recent call last):
  File "/home/xs11/wangxu/mlc-llm/build.py", line 457, in <module>
    main()
  File "/home/xs11/wangxu/mlc-llm/build.py", line 424, in main
    mod, params = llama.get_model(ARGS, config)
  File "/home/xs11/wangxu/mlc-llm/mlc_llm/relax_model/llama.py", line 789, in get_model
    create_encoding_func(bb, param_manager, config, args.quantization, sep_embed)
  File "/home/xs11/wangxu/mlc-llm/mlc_llm/relax_model/llama.py", line 645, in create_encoding_func
    param_manager.register_params(
  File "/home/xs11/wangxu/mlc-llm/mlc_llm/relax_model/param_manager.py", line 138, in register_params
    getattr(quantization_scheme, quant_kind.name)
AttributeError: 'Quantization' object has no attribute 'embedding_table'
[11:47:11] /home/xs11/wangxu/tvm-unity/src/relax/ir/block_builder.cc:64: Warning: BlockBuilder destroyed with remaining blocks!
```

Expected behavior

Environment

  • Platform: CUDA
  • Operating system: Ubuntu
  • Device: PC + RTX 3080 Ti
  • How you installed MLC-LLM: source
  • How you installed TVM-Unity: source
  • Python version: 3.10
  • GPU driver version: 525.116.04
  • CUDA version: 11.8
  • TVM Unity Hash Tag: USE_GTEST: AUTO SUMMARIZE: OFF USE_IOS_RPC: OFF USE_ETHOSU: OFF CUDA_VERSION: 11.8 USE_LIBBACKTRACE: AUTO DLPACK_PATH: 3rdparty/dlpack/include USE_TENSORRT_CODEGEN: OFF USE_THRUST: OFF USE_TARGET_ONNX: OFF USE_AOT_EXECUTOR: ON BUILD_DUMMY_LIBTVM: OFF USE_CUDNN: OFF USE_TENSORRT_RUNTIME: OFF USE_ARM_COMPUTE_LIB_GRAPH_EXECUTOR: OFF USE_CCACHE: AUTO USE_ARM_COMPUTE_LIB: OFF USE_CPP_RTVM: OFF USE_OPENCL_GTEST: /path/to/opencl/gtest USE_MKL: OFF USE_PT_TVMDSOOP: OFF USE_CLML: OFF USE_STACKVM_RUNTIME: OFF USE_GRAPH_EXECUTOR_CUDA_GRAPH: OFF ROCM_PATH: /opt/rocm USE_DNNL: OFF USE_VITIS_AI: OFF USE_LLVM: llvm-config --ignore-libllvm --link-static USE_VERILATOR: OFF USE_TF_TVMDSOOP: OFF USE_THREADS: ON USE_MSVC_MT: OFF BACKTRACE_ON_SEGFAULT: OFF USE_GRAPH_EXECUTOR: ON USE_ROCBLAS: OFF GIT_COMMIT_HASH: NOT-FOUND USE_VULKAN: OFF USE_RUST_EXT: OFF USE_CUTLASS: OFF USE_CPP_RPC: OFF USE_HEXAGON: OFF USE_CUSTOM_LOGGING: OFF USE_UMA: OFF USE_FALLBACK_STL_MAP: OFF USE_SORT: ON USE_RTTI: ON GIT_COMMIT_TIME: NOT-FOUND USE_HEXAGON_SDK: /path/to/sdk USE_BLAS: none USE_ETHOSN: OFF USE_LIBTORCH: OFF USE_RANDOM: ON USE_CUDA: ON USE_COREML: OFF USE_AMX: OFF BUILD_STATIC_RUNTIME: OFF USE_CMSISNN: OFF USE_KHRONOS_SPIRV: OFF USE_CLML_GRAPH_EXECUTOR: OFF USE_TFLITE: OFF USE_HEXAGON_GTEST: /path/to/hexagon/gtest PICOJSON_PATH: 3rdparty/picojson USE_OPENCL_ENABLE_HOST_PTR: OFF INSTALL_DEV: OFF USE_PROFILER: ON USE_NNPACK: OFF LLVM_VERSION: 16.0.6 USE_OPENCL: OFF COMPILER_RT_PATH: 3rdparty/compiler-rt RANG_PATH: 3rdparty/rang/include USE_SPIRV_KHR_INTEGER_DOT_PRODUCT: OFF USE_OPENMP: none USE_BNNS: OFF USE_CUBLAS: OFF USE_METAL: OFF USE_MICRO_STANDALONE_RUNTIME: OFF USE_HEXAGON_EXTERNAL_LIBS: OFF USE_ALTERNATIVE_LINKER: AUTO USE_BYODT_POSIT: OFF USE_HEXAGON_RPC: OFF USE_MICRO: OFF DMLC_PATH: 3rdparty/dmlc-core/include INDEX_DEFAULT_I64: ON USE_RELAY_DEBUG: OFF USE_RPC: ON USE_TENSORFLOW_PATH: none TVM_CLML_VERSION: USE_MIOPEN: OFF USE_ROCM: OFF USE_PAPI: OFF USE_CURAND: OFF TVM_CXX_COMPILER_PATH: /usr/bin/c++ HIDE_PRIVATE_SYMBOLS: ON
  • Any other relevant information:

Additional context

yzh119 commented 1 year ago

We have refactored the parameter manager, and q8f16 is not supported in the new quantization framework yet (https://github.com/mlc-ai/mlc-llm/blob/d800c783337dc10870da3a3fe0b0517d50ba3ab5/mlc_llm/quantization/__init__.py#L84). cc @MasterJH5574
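
For context, here is a minimal self-contained sketch of the failure mode. The types below are hypothetical simplifications, not the actual mlc_llm classes: the parameter manager looks up a per-parameter-kind quantization spec by attribute name, and a scheme that defines no `embedding_table` spec makes the `getattr` call raise exactly the error in the report.

```python
# Hypothetical, simplified stand-ins for the mlc_llm types involved.
from dataclasses import dataclass
from enum import Enum, auto


class ParamQuantKind(Enum):
    embedding_table = auto()
    linear_weight = auto()
    others = auto()


@dataclass
class Quantization:
    # This sketch scheme only defines specs for these two kinds;
    # note there is no `embedding_table` field.
    linear_weight: str = "8-bit group quantization"
    others: str = "no quantization"


scheme = Quantization()
kind = ParamQuantKind.embedding_table
# Mirrors the lookup in param_manager.register_params and raises:
# AttributeError: 'Quantization' object has no attribute 'embedding_table'
spec = getattr(scheme, kind.name)
```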

wangxu569 commented 1 year ago

> We have refactored the parameter manager, and q8f16 is not supported in the new quantization framework yet (https://github.com/mlc-ai/mlc-llm/blob/d800c783337dc10870da3a3fe0b0517d50ba3ab5/mlc_llm/quantization/__init__.py#L84). cc @MasterJH5574

Okay, I see it.

MasterJH5574 commented 1 year ago

Hi @wangxu569, after some recent refactoring you can now use q8f16_0 for Vicuna: https://github.com/mlc-ai/mlc-llm/blob/f121844287a4ba232e8c76e52e8b30aa24f8e08a/mlc_llm/quantization/__init__.py#L85-L89
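
If you want to confirm which quantization modes your checkout registers before building, a hedged sketch follows; the registry name `quantization_schemes` is an assumption based on the linked file, so verify it against your revision.

```python
# Hypothetical pre-flight check. `quantization_schemes` is assumed to be
# the registry dict defined in mlc_llm/quantization/__init__.py at this
# revision; adjust the name if your checkout differs.
from mlc_llm import quantization

requested = "q8f16_0"
schemes = getattr(quantization, "quantization_schemes", {})
if requested in schemes:
    print(f"{requested} is registered")
else:
    print(f"{requested} not found; registered modes: {sorted(schemes)}")
```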

Nevertheless, we don't recommend it: right now q8f16_0 is designed for RWKV, and applying it to Vicuna may give suboptimal performance. For now we recommend q3f16_0 for Vicuna and other LLaMA-family models. Likely next week we will enable a new q4f16_1 quantization mode with even better performance.
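
Concretely, the recommended build is the same command as in the reproduction above, with only the quantization flag changed:

```
python3 build.py --model vicuna-7b-delta-v1.1 --target cuda --quantization q3f16_0
```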