Maybe something similar to this issue? https://github.com/mlc-ai/mlc-llm/issues/2447
Hi @pjyi2147, we can support this model, and the issue is not from the tokenizer. May I ask what the tokenizers version is on your end? We need it to be at least 0.19.1 to work well:
> pip list | grep "tokenizers"
tokenizers 0.19.1
So please update your tokenizers package to the latest if you find it older than 0.19.1.
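If it is older, a plain pip upgrade should suffice (a minimal sketch; pin a specific version instead if your setup requires it):
> pip install --upgrade tokenizers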
Besides, we did recently fix a bug (not related to this issue) for the sliding window in #3026. So please update to the latest nightly package tomorrow, or check out the latest codebase, to pick up that fix. Thanks.
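For reference, upgrading to the latest nightly wheels typically looks like this (a sketch based on the mlc.ai wheel index; swap in the package variant that matches your CUDA version and platform):
> python -m pip install --pre -U -f https://mlc.ai/wheels mlc-ai-nightly-cu122 mlc-llm-nightly-cu122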
Hello @MasterJH5574. I just checked: my tokenizers package version is 0.20.1. I will try the whole process again this weekend to see whether the error changes.
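(For anyone double-checking the same thing, one way to confirm the Python-side version is the one-liner below; it only assumes the tokenizers package is importable.)
> python -c "import tokenizers; print(tokenizers.__version__)"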
@pjyi2147 I see. Then I guess it's not the Python package but the Rust package. Do you build mlc-llm from source? (I assume so.) If so, we may need to check the Rust tokenizers package version by:
> cd 3rdparty/tokenizers-cpp/rust
> cargo check --package tokenizers
...
Checking tokenizers v0.19.1
Finished dev [unoptimized + debuginfo] target(s) in 6.17s
This is because the version requirement comes from the Rust side: https://github.com/mlc-ai/tokenizers-cpp/blob/main/rust/Cargo.toml#L11
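If you are building from source, you can also inspect that pin directly; the output line below is paraphrased, not quoted verbatim from the file:
> grep "tokenizers" 3rdparty/tokenizers-cpp/rust/Cargo.toml
tokenizers = { version = "0.19.1", ... }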
@MasterJH5574 I did not install mlc-llm from source, but installed it via pip. My current versions from pip are the following:
mlc-ai-nightly-cu122 0.18.dev226
mlc-llm-nightly-cu122 0.18.dev61
Is there any more information you would need to investigate further?
@pjyi2147 Thank you for sharing this information. It's very helpful. We'll dig deeper to see what's going on.
Hi @pjyi2147, we have fixed the issue. It turns out that 0.19.3 is also too old to run the rank_zephyr model, so we've bumped the requirement to 0.20.3. Please update the mlc Python package and try again, thanks!
@MasterJH5574 Do you mean 0.20.3 for the version of the tokenizers package?
@pjyi2147 Yes. We've done that in https://github.com/mlc-ai/tokenizers-cpp/commit/4bb753377680e249345b54c6b10e6d0674c8af03. No action is needed on your side other than upgrading the mlc Python package.
Great! I will run the process again over the weekend and update.
I updated and it works!
My current versions are:
mlc-ai-nightly-cu122 0.18.dev246 pypi_0 pypi
mlc-llm-nightly-cu122 0.18.dev69 pypi_0 pypi
🐛 Bug
Hi, I am trying to integrate mlc_llm into my research project and am having issues running the model with mlc_llm.
Are finetuned models not supported yet?
To Reproduce
Steps to reproduce the behavior:
mlc_llm chat <converted_model_dir>
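(For context, <converted_model_dir> is assumed here to be the output of the standard conversion flow sketched below; the model path, the q4f16_1 quantization, and the conv template are hypothetical placeholders, not details taken from this report.)
> mlc_llm convert_weight <hf_model_dir> --quantization q4f16_1 -o <converted_model_dir>
> mlc_llm gen_config <hf_model_dir> --quantization q4f16_1 --conv-template <template> -o <converted_model_dir>
> mlc_llm chat <converted_model_dir>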
Expected behavior
The mlc_llm chat session starts and waits for prompts.
Environment
- How you installed MLC-LLM (conda, source): conda
- How you installed TVM-Unity (pip, source): pip
- TVM Unity Hash Tag (python -c "import tvm; print('\n'.join(f'{k}: {v}' for k, v in tvm.support.libinfo().items()))", applicable if you compile models):
USE_NVTX: OFF
USE_GTEST: AUTO
SUMMARIZE: OFF
TVM_DEBUG_WITH_ABI_CHANGE: OFF
USE_IOS_RPC: OFF
USE_MSC: OFF
USE_ETHOSU:
CUDA_VERSION: 12.2
USE_LIBBACKTRACE: AUTO
DLPACK_PATH: 3rdparty/dlpack/include
USE_TENSORRT_CODEGEN: OFF
USE_THRUST: ON
USE_TARGET_ONNX: OFF
USE_AOT_EXECUTOR: ON
BUILD_DUMMY_LIBTVM: OFF
USE_CUDNN: OFF
USE_TENSORRT_RUNTIME: OFF
USE_ARM_COMPUTE_LIB_GRAPH_EXECUTOR: OFF
USE_CCACHE: AUTO
USE_ARM_COMPUTE_LIB: OFF
USE_CPP_RTVM:
USE_OPENCL_GTEST: /path/to/opencl/gtest
TVM_LOG_BEFORE_THROW: OFF
USE_MKL: OFF
USE_PT_TVMDSOOP: OFF
MLIR_VERSION: NOT-FOUND
USE_CLML: OFF
USE_STACKVM_RUNTIME: OFF
USE_GRAPH_EXECUTOR_CUDA_GRAPH: OFF
ROCM_PATH: /opt/rocm
USE_DNNL: OFF
USE_MSCCL: OFF
USE_NNAPI_RUNTIME: OFF
USE_VITIS_AI: OFF
USE_MLIR: OFF
USE_RCCL: OFF
USE_LLVM: llvm-config --ignore-libllvm --link-static
USE_VERILATOR: OFF
USE_TF_TVMDSOOP: OFF
USE_THREADS: ON
USE_MSVC_MT: OFF
BACKTRACE_ON_SEGFAULT: OFF
USE_GRAPH_EXECUTOR: ON
USE_NCCL: ON
USE_ROCBLAS: OFF
GIT_COMMIT_HASH: 79a69ae4a92c9d4f23e62f93ce5b0d90ed29e5ed
USE_VULKAN: ON
USE_RUST_EXT: OFF
USE_CUTLASS: ON
USE_CPP_RPC: OFF
USE_HEXAGON: OFF
USE_CUSTOM_LOGGING: OFF
USE_UMA: OFF
USE_FALLBACK_STL_MAP: OFF
USE_SORT: ON
USE_RTTI: ON
GIT_COMMIT_TIME: 2024-11-11 00:56:50 -0500
USE_HIPBLAS: OFF
USE_HEXAGON_SDK: /path/to/sdk
USE_BLAS: none
USE_ETHOSN: OFF
USE_LIBTORCH: OFF
USE_RANDOM: ON
USE_CUDA: ON
USE_COREML: OFF
USE_AMX: OFF
BUILD_STATIC_RUNTIME: OFF
USE_CMSISNN: OFF
USE_KHRONOS_SPIRV: OFF
USE_CLML_GRAPH_EXECUTOR: OFF
USE_TFLITE: OFF
USE_HEXAGON_GTEST: /path/to/hexagon/gtest
PICOJSON_PATH: 3rdparty/picojson
USE_OPENCL_ENABLE_HOST_PTR: OFF
INSTALL_DEV: OFF
USE_PROFILER: ON
USE_NNPACK: OFF
LLVM_VERSION: 17.0.6
USE_MRVL: OFF
USE_OPENCL: OFF
COMPILER_RT_PATH: 3rdparty/compiler-rt
USE_NNAPI_CODEGEN: OFF
RANG_PATH: 3rdparty/rang/include
USE_SPIRV_KHR_INTEGER_DOT_PRODUCT: OFF
USE_OPENMP: OFF
USE_BNNS: OFF
USE_FLASHINFER: ON
USE_CUBLAS: ON
USE_METAL: OFF
USE_MICRO_STANDALONE_RUNTIME: OFF
USE_HEXAGON_EXTERNAL_LIBS: OFF
USE_ALTERNATIVE_LINKER: AUTO
USE_BYODT_POSIT: OFF
USE_NVSHMEM: OFF
USE_HEXAGON_RPC: OFF
USE_MICRO: OFF
DMLC_PATH: 3rdparty/dmlc-core/include
INDEX_DEFAULT_I64: ON
USE_RELAY_DEBUG: OFF
USE_RPC: ON
USE_TENSORFLOW_PATH: none
TVM_CLML_VERSION:
USE_MIOPEN: OFF
USE_ROCM: OFF
USE_PAPI: OFF
USE_CURAND: OFF
TVM_CXX_COMPILER_PATH: /opt/rh/gcc-toolset-11/root/usr/bin/c++
HIDE_PRIVATE_SYMBOLS: ON
Additional context