OpenBMB / MiniCPM-V

MiniCPM-Llama3-V 2.5: A GPT-4V Level Multimodal LLM on Your Phone

[BUG] Running the mlc_chat command always throws an error #253

Closed Single430 closed 3 weeks ago

Single430 commented 3 weeks ago

Is there an existing issue / discussion for this?

Is there an existing answer for this in the FAQ?

Current Behavior

Running `mlc_chat` fails with:

```
Traceback (most recent call last):
  File "/opt/conda/bin/mlc_chat", line 33, in <module>
    sys.exit(load_entry_point('mlc-chat', 'console_scripts', 'mlc_chat')())
  File "/opt/conda/bin/mlc_chat", line 25, in importlib_load_entry_point
    return next(matches).load()
  File "/opt/conda/lib/python3.10/importlib/metadata/__init__.py", line 171, in load
    module = import_module(match.group('module'))
  File "/opt/conda/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 992, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/zbl/mlc-MiniCPM/python/mlc_chat/__init__.py", line 5, in <module>
    from .chat_module import ChatConfig, ChatModule, ConvConfig, GenerationConfig
  File "/home/zbl/mlc-MiniCPM/python/mlc_chat/chat_module.py", line 20, in <module>
    from . import base as _
  File "/home/zbl/mlc-MiniCPM/python/mlc_chat/base.py", line 28, in <module>
    _LIB, _LIB_PATH = _load_mlc_llm_lib()
  File "/home/zbl/mlc-MiniCPM/python/mlc_chat/base.py", line 23, in _load_mlc_llm_lib
    return ctypes.CDLL(lib_path[0]), lib_path[0]
  File "/opt/conda/lib/python3.10/ctypes/__init__.py", line 374, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /home/zbl/mlc-MiniCPM/build/libmlc_llm_module.so: undefined symbol: _ZN3tvm7runtime7NDArray10CreateViewENS0_10ShapeTupleE10DLDataType
```
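A quick way to confirm this is a symbol-level mismatch rather than a missing library is to ask the libtvm.so that gets resolved whether it exports the missing symbol. This is a minimal ad-hoc sketch, not part of mlc_chat; the paths are the ones from this traceback and the ldd output further below, so adjust them for your setup:

```python
# Ad-hoc check (not part of mlc_chat): does the resolved libtvm.so export the symbol
# that libmlc_llm_module.so needs? An AttributeError here means this libtvm.so comes
# from a different TVM build than the one mlc_llm was compiled against.
import ctypes

MISSING_SYMBOL = "_ZN3tvm7runtime7NDArray10CreateViewENS0_10ShapeTupleE10DLDataType"
LIBTVM_PATH = "/home/zbl/mlc-MiniCPM/build/tvm/libtvm.so"  # path from the ldd output below

lib = ctypes.CDLL(LIBTVM_PATH)
try:
    getattr(lib, MISSING_SYMBOL)  # attribute lookup performs a dlsym under the hood
    print("symbol present: this libtvm.so matches what mlc_llm expects")
except AttributeError:
    print("symbol missing: TVM version mismatch with the mlc_llm build")
```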

Expected Behavior

No response

Steps To Reproduce

No response

Environment

Platform (e.g. CUDA):
Operating system (e.g. Ubuntu):
Device (e.g. RTX 4090, ...)
How you installed MLC-LLM (conda, source):
How you installed TVM-Unity (pip, source): built from source (https://github.com/apache/tvm)
Python version (e.g. 3.10):
GPU driver version (if applicable):
CUDA/cuDNN version (if applicable): 11.8
TVM Unity Hash Tag (python -c "import tvm; print('\n'.join(f'{k}: {v}' for k, v in tvm.support.libinfo().items()))", applicable if you compile models):
```
root@b1b15d01e7b0:/home/zbl/mlc-MiniCPM# python -c "import tvm; print(tvm.__file__)"
/root/.local/lib/python3.10/site-packages/tvm-0.17.dev141+g418322992-py3.10-linux-x86_64.egg/tvm/__init__.py
root@b1b15d01e7b0:/home/zbl/mlc-MiniCPM# python -c "import tvm; print(tvm._ffi.base._LIB)"
<CDLL '/root/.local/lib/python3.10/site-packages/tvm-0.17.dev141+g418322992-py3.10-linux-x86_64.egg/tvm/libtvm.so', handle 12c3000 at 0x7f20b647cbb0>

root@b1b15d01e7b0:/home/zbl/mlc-MiniCPM# python -c "import tvm; print('\n'.join(f'{k}: {v}' for k, v in tvm.support.libinfo().items()))"
USE_NVTX: OFF
USE_GTEST: AUTO
SUMMARIZE: OFF
TVM_DEBUG_WITH_ABI_CHANGE: OFF
USE_IOS_RPC: OFF
USE_MSC: OFF
USE_ETHOSU:
CUDA_VERSION: NOT-FOUND
USE_LIBBACKTRACE: AUTO
DLPACK_PATH: 3rdparty/dlpack/include
USE_TENSORRT_CODEGEN: OFF
USE_THRUST: OFF
USE_TARGET_ONNX: OFF
USE_AOT_EXECUTOR: ON
BUILD_DUMMY_LIBTVM: OFF
USE_CUDNN: OFF
USE_TENSORRT_RUNTIME: OFF
USE_ARM_COMPUTE_LIB_GRAPH_EXECUTOR: OFF
USE_CCACHE: AUTO
USE_ARM_COMPUTE_LIB: OFF
USE_CPP_RTVM:
USE_OPENCL_GTEST: /path/to/opencl/gtest
TVM_LOG_BEFORE_THROW: OFF
USE_MKL: OFF
USE_PT_TVMDSOOP: OFF
MLIR_VERSION: NOT-FOUND
USE_CLML: OFF
USE_STACKVM_RUNTIME: OFF
USE_GRAPH_EXECUTOR_CUDA_GRAPH: OFF
ROCM_PATH: /opt/rocm
USE_DNNL: OFF
USE_MSCCL: OFF
USE_VITIS_AI: OFF
USE_MLIR: OFF
USE_RCCL: OFF
USE_LLVM: OFF
USE_VERILATOR: OFF
USE_TF_TVMDSOOP: OFF
USE_THREADS: ON
USE_MSVC_MT: OFF
BACKTRACE_ON_SEGFAULT: OFF
USE_GRAPH_EXECUTOR: ON
USE_NCCL: OFF
USE_ROCBLAS: OFF
GIT_COMMIT_HASH: 4183229922ad33c2006954140bc5ef368d40df21
USE_VULKAN: OFF
USE_RUST_EXT: OFF
USE_CUTLASS: OFF
USE_CPP_RPC: OFF
USE_HEXAGON: OFF
USE_CUSTOM_LOGGING: OFF
USE_UMA: OFF
USE_FALLBACK_STL_MAP: OFF
USE_SORT: ON
USE_RTTI: ON
GIT_COMMIT_TIME: 2024-06-09 08:44:58 -0700
USE_HEXAGON_SDK: /path/to/sdk
USE_BLAS: none
USE_ETHOSN: OFF
USE_LIBTORCH: OFF
USE_RANDOM: ON
USE_CUDA: OFF
USE_COREML: OFF
USE_AMX: OFF
BUILD_STATIC_RUNTIME: OFF
USE_CMSISNN: OFF
USE_KHRONOS_SPIRV: OFF
USE_CLML_GRAPH_EXECUTOR: OFF
USE_TFLITE: OFF
USE_HEXAGON_GTEST: /path/to/hexagon/gtest
PICOJSON_PATH: 3rdparty/picojson
USE_OPENCL_ENABLE_HOST_PTR: OFF
INSTALL_DEV: OFF
USE_PROFILER: ON
USE_NNPACK: OFF
LLVM_VERSION: NOT-FOUND
USE_MRVL: OFF
USE_OPENCL: OFF
COMPILER_RT_PATH: 3rdparty/compiler-rt
RANG_PATH: 3rdparty/rang/include
USE_SPIRV_KHR_INTEGER_DOT_PRODUCT: OFF
USE_OPENMP: OFF
USE_BNNS: OFF
USE_FLASHINFER: OFF
USE_CUBLAS: OFF
USE_METAL: OFF
USE_MICRO_STANDALONE_RUNTIME: OFF
USE_HEXAGON_EXTERNAL_LIBS: OFF
USE_ALTERNATIVE_LINKER: AUTO
USE_BYODT_POSIT: OFF
USE_HEXAGON_RPC: OFF
USE_MICRO: OFF
DMLC_PATH: 3rdparty/dmlc-core/include
INDEX_DEFAULT_I64: ON
USE_RELAY_DEBUG: OFF
USE_RPC: ON
USE_TENSORFLOW_PATH: none
TVM_CLML_VERSION:
USE_MIOPEN: OFF
USE_ROCM: OFF
USE_PAPI: OFF
USE_CURAND: OFF
TVM_CXX_COMPILER_PATH: /usr/bin/c++
HIDE_PRIVATE_SYMBOLS: OFF
```


Anything else?

Any other relevant information / additional context:

```
root@b1b15d01e7b0:/home/zbl/mlc-MiniCPM# ldd build/libmlc_llm_module.so
        linux-vdso.so.1 (0x00007ffd3194c000)
        libtvm.so => /home/zbl/mlc-MiniCPM/build/tvm/libtvm.so (0x00007faa72c6d000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007faa72c5c000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007faa72c39000)
        libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007faa72a57000)
        libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007faa72908000)
        libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007faa728eb000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007faa726f9000)
        /lib64/ld-linux-x86-64.so.2 (0x00007faa736ed000)

root@b1b15d01e7b0:/home/zbl/mlc-MiniCPM# ldd build/libmlc_llm.so
        linux-vdso.so.1 (0x00007ffe97384000)
        libtvm_runtime.so => /home/zbl/mlc-MiniCPM/build/tvm/libtvm_runtime.so (0x00007fca4d9a8000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fca4d997000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fca4d974000)
        libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fca4d792000)
        libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fca4d643000)
        libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fca4d626000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fca4d434000)
        /lib64/ld-linux-x86-64.so.2 (0x00007fca4e428000)

root@b1b15d01e7b0:/home/zbl/mlc-MiniCPM# nm build/tvm/libtvm.so | grep _ZN3tvm7runtime9BacktraceB5cxx11Ev
00000000001159f0 T _ZN3tvm7runtime9BacktraceB5cxx11Ev
```
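An undefined symbol on dlopen usually means the libtvm.so that actually gets resolved at load time does not export what libmlc_llm_module.so was compiled against, i.e. two different TVM builds are in play (the source-built apache/tvm egg under /root/.local/... shown in the environment section versus the build mlc-MiniCPM expects). A minimal sketch of that comparison, with the paths copied from this thread and none of it part of mlc_chat, would be:

```python
# Ad-hoc comparison: which libtvm.so does the Python `tvm` package load, and which
# one is libmlc_llm_module.so linked against? Paths are from this thread's output.
import subprocess
import tvm

print("Python `tvm` package loads :", tvm._ffi.base._LIB._name)

ldd_out = subprocess.run(
    ["ldd", "/home/zbl/mlc-MiniCPM/build/libmlc_llm_module.so"],
    capture_output=True, text=True, check=True,
).stdout
for line in ldd_out.splitlines():
    if "libtvm" in line:
        print("libmlc_llm_module.so links  :", line.strip())

# If the two paths point at different TVM builds (or their GIT_COMMIT_HASH values from
# tvm.support.libinfo() disagree), an undefined-symbol OSError at import is the
# expected failure mode.
```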

Achazwl commented 3 weeks ago

We are no longer actively maintaining the mlc-MiniCPM repository, while the upstream mlc-llm repository is still being updated continuously, so there may be some compatibility issues.

We recommend trying out our llama.cpp version:

  1. MiniCPM-Llama3-V 2.5: llama.cpp/examples/minicpmv/README.md at minicpm-v2.5 · OpenBMB/llama.cpp (github.com)
  2. MiniCPM-V 2: llama.cpp/examples/minicpmv at feat-minicpmv · Achazwl/llama.cpp (github.com)
Single430 commented 3 weeks ago

> We are no longer actively maintaining the mlc-MiniCPM repository, while the upstream mlc-llm repository is still being updated continuously, so there may be some compatibility issues.
>
> We recommend trying out our llama.cpp version:
>
> 1. MiniCPM-Llama3-V 2.5: llama.cpp/examples/minicpmv/README.md at minicpm-v2.5 · OpenBMB/llama.cpp (github.com)
> 2. MiniCPM-V 2: llama.cpp/examples/minicpmv at feat-minicpmv · Achazwl/llama.cpp (github.com)

I had assumed it was just a TVM version mismatch, so all I needed was to know which TVM version to build against.

Achazwl commented 3 weeks ago

Previously we used mlc_ai-0.15.1-cp39-cp39-macosx_13_0_arm64.whl from https://mlc.ai/wheels.

Single430 commented 3 weeks ago

> Previously we used mlc_ai-0.15.1-cp39-cp39-macosx_13_0_arm64.whl from https://mlc.ai/wheels.

Thank you, but that's a macOS wheel... Anyway, thank you.

Achazwl commented 3 weeks ago

> Previously we used mlc_ai-0.15.1-cp39-cp39-macosx_13_0_arm64.whl from https://mlc.ai/wheels.
>
> Thank you, but that's a macOS wheel... Anyway, thank you.

There are wheels for other platforms at https://mlc.ai/wheels as well.
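Whichever platform-specific wheel you install from https://mlc.ai/wheels, a quick sanity check afterwards is to confirm which TVM-Unity build the environment actually picks up, since that is what the compiled model library has to match. A minimal sketch, assuming (as with the source build in this thread) that the installed wheel provides the `tvm` Python package:

```python
# Sanity check after installing a prebuilt wheel: which TVM-Unity build is imported?
# The libinfo keys below are the ones shown earlier in this thread.
import tvm

print("tvm module  :", tvm.__file__)
print("commit hash :", tvm.support.libinfo()["GIT_COMMIT_HASH"])
print("USE_CUDA    :", tvm.support.libinfo()["USE_CUDA"])
```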

Single430 commented 3 weeks ago

> Previously we used mlc_ai-0.15.1-cp39-cp39-macosx_13_0_arm64.whl from https://mlc.ai/wheels.
>
> Thank you, but that's a macOS wheel... Anyway, thank you.
>
> There are wheels for other platforms at https://mlc.ai/wheels as well.

Okay, I'll give it a try next.