Closed: NSTiwari closed this issue 5 months ago.
Please check the TVM_HOME env variable, which should point to 3rdparty/tvm, without the include path.
Thanks, @tqchen. That resolved the issue.
glad it works
I'm getting this same bug on my MacBook (Intel, 15.0), even after updating the path from the full path to the relative path 3rdparty/tvm.
But I still hit the same issue.
could you please guide me on it? @tqchen @NSTiwari
@Ammar-Ishfaq I have written a detailed blog about the implementation. Try following it.
It looks like you're following the CUDA/Linux setup. I'm on a MacBook, and updating that path doesn't work for me.
Could you point out what I'm doing wrong?
-- VERSION: 0.2.00
CMake Error at /Users/muhammad.ammar/Desktop/Projects/mlc-llm/CMakeLists.txt:72 (tvm_file_glob):
Unknown CMake command "tvm_file_glob".
That shouldn't make any difference. What's the exact path that you've set?
@NSTiwari Here's the path I'm using
Previously:/Users/muhammad.ammar/Desktop/Projects/mlc-llm/3rdparty/tvm
Currently: 3rdparty/tvm
The previous path is still correct.
What's not expected: there's another tvm sub-folder inside the include folder, and TVM_HOME shouldn't point to that one.
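To make the pitfall concrete, here is a small shell sketch of the mistake being described: TVM_HOME pointing at the nested include/tvm directory instead of the source root. The path below is illustrative, not the reporter's actual checkout.

```shell
# A common mistake: TVM_HOME points at .../3rdparty/tvm/include/tvm
# instead of the source root. Stripping the trailing suffix fixes it.
# (Illustrative path, not a real checkout.)
WRONG_TVM_HOME=/path/to/mlc-llm/3rdparty/tvm/include/tvm
TVM_HOME="${WRONG_TVM_HOME%/include/tvm}"
echo "$TVM_HOME"   # /path/to/mlc-llm/3rdparty/tvm
```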
Looking at the path (previous path) you've mentioned, it's correct and it should ideally work.
Successfully resolved and compiled.
Steps I followed:
1. Set TVM_SOURCE_DIR to /Users/muhammad.ammar/Desktop/Projects/mlc-llm/3rdparty/tvm
2. Removed the TVM_HOME variable (previously TVM_HOME=/Users/muhammad.ammar/Desktop/Projects/mlc-llm/3rdparty/tvm)
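For anyone hitting the same error, the resolution above can be sketched as shell commands; the checkout path is an example, substitute your own.

```shell
# Point TVM_SOURCE_DIR at the TVM source root bundled with mlc-llm
# (example path; use your own checkout location).
export TVM_SOURCE_DIR="$HOME/mlc-llm/3rdparty/tvm"
# Remove the old TVM_HOME variable so it can't shadow TVM_SOURCE_DIR.
unset TVM_HOME
```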
Thank you!
Great. A lot of stuff has changed over the past few months, including the environment variables.
Absolutely! Despite the changes and challenges with the environment variables, it's rewarding to work through these kinds of issues.
🐛 Bug
I've successfully compiled the Llama-3-8B-Instruct model using q4f16_1 quantization and converted it into an Android-compatible file (Llama-3-8B-Instruct-q4f16_1-android.tar).
Now, I'm following the official documentation for on-device deployment of Llama-3-8B-q4f16_1 model on Android.
So far, I've correctly followed the prerequisite step, i.e., setting the environment variables ANDROID_NDK, TVM_NDK_CC, TVM_HOME, and JAVA_HOME, and installed the TVM Unity compiler as well as the MLC LLM Python package using the commands below:
python3 -m pip install --pre -U -f https://mlc.ai/wheels mlc-ai-nightly
python3 -m pip install --pre -U -f https://mlc.ai/wheels mlc-llm-nightly mlc-ai-nightly
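For reference, the prerequisite environment variables listed above are typically exported like this. Every path here is an illustrative example (NDK version, JDK location, repo path), not the reporter's actual values.

```shell
# Illustrative values only; adjust NDK, JDK, and repo paths to your machine.
export ANDROID_NDK="$HOME/Android/Sdk/ndk/26.1.10909125"
export TVM_NDK_CC="$ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android24-clang"
export TVM_HOME="$HOME/mlc-llm/3rdparty/tvm"
export JAVA_HOME="/usr/lib/jvm/java-17-openjdk-amd64"
```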
Now, I'm trying to build the Android app from source, but in Step 2: Build Runtime and Model Libraries, I get the following error:
Unknown CMake command "tvm_file_glob".
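One way to sanity-check the cause: tvm_file_glob is a helper defined in TVM's own CMake utilities, so if the TVM source path the build resolves is wrong, that definition is never loaded and CMake reports the command as unknown. A hedged check, assuming the usual TVM source layout where the helper lives under cmake/utils/:

```shell
# If this file is missing at the path the build uses, tvm_file_glob will be
# an unknown command. The default path below is an assumption; adjust it.
TVM_SOURCE_DIR="${TVM_SOURCE_DIR:-3rdparty/tvm}"
if [ -f "$TVM_SOURCE_DIR/cmake/utils/Utils.cmake" ]; then
  echo "TVM cmake utilities found"
else
  echo "TVM cmake utilities missing; check TVM_SOURCE_DIR / TVM_HOME"
fi
```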
To Reproduce
Steps to reproduce the behavior:
Expected behavior
After running the mlc_llm package command, I'm expecting the Llama-3-8B-Instruct-q4f16_1-android.tar file referenced in mlc-package-config.json to successfully produce the libtvm4j_runtime_packed.so and tvm4j_core.jar files in the dist/lib/mlc4j/output folder.
Here is the mlc-package-config.json file: mlc-package-config.json
Please help with this.
Environment
Platform (e.g. WebGPU/Vulkan/IOS/Android/CUDA): Android
Operating system (e.g. Ubuntu/Windows/MacOS/...): Ubuntu
Device (e.g. iPhone 12 Pro, PC+RTX 3090, ...)
How you installed MLC-LLM (conda, source): python -m pip install --pre -U -f https://mlc.ai/wheels mlc-llm-nightly mlc-ai-nightly
How you installed TVM-Unity (pip, source): python3 -m pip install --pre -U -f https://mlc.ai/wheels mlc-ai-nightly
Python version (e.g. 3.10):
GPU driver version (if applicable):
CUDA/cuDNN version (if applicable):
TVM Unity Hash Tag (python -c "import tvm; print('\n'.join(f'{k}: {v}' for k, v in tvm.support.libinfo().items()))", applicable if you compile models):
USE_NVTX: OFF
USE_GTEST: AUTO
SUMMARIZE: OFF
TVM_DEBUG_WITH_ABI_CHANGE: OFF
USE_IOS_RPC: OFF
USE_MSC: OFF
USE_ETHOSU:
CUDA_VERSION: NOT-FOUND
USE_LIBBACKTRACE: AUTO
DLPACK_PATH: 3rdparty/dlpack/include
USE_TENSORRT_CODEGEN: OFF
USE_THRUST: OFF
USE_TARGET_ONNX: OFF
USE_AOT_EXECUTOR: ON
BUILD_DUMMY_LIBTVM: OFF
USE_CUDNN: OFF
USE_TENSORRT_RUNTIME: OFF
USE_ARM_COMPUTE_LIB_GRAPH_EXECUTOR: OFF
USE_CCACHE: AUTO
USE_ARM_COMPUTE_LIB: OFF
USE_CPP_RTVM:
USE_OPENCL_GTEST: /path/to/opencl/gtest
TVM_LOG_BEFORE_THROW: OFF
USE_MKL: OFF
USE_PT_TVMDSOOP: OFF
MLIR_VERSION: NOT-FOUND
USE_CLML: OFF
USE_STACKVM_RUNTIME: OFF
USE_GRAPH_EXECUTOR_CUDA_GRAPH: OFF
ROCM_PATH: /opt/rocm
USE_DNNL: OFF
USE_MSCCL: OFF
USE_VITIS_AI: OFF
USE_MLIR: OFF
USE_RCCL: OFF
USE_LLVM: llvm-config --ignore-libllvm --link-static
USE_VERILATOR: OFF
USE_TF_TVMDSOOP: OFF
USE_THREADS: ON
USE_MSVC_MT: OFF
BACKTRACE_ON_SEGFAULT: OFF
USE_GRAPH_EXECUTOR: ON
USE_NCCL: OFF
USE_ROCBLAS: OFF
GIT_COMMIT_HASH: c8f7ec8dc0377ad362e1c81b194c6e2322f27a75
USE_VULKAN: ON
USE_RUST_EXT: OFF
USE_CUTLASS: OFF
USE_CPP_RPC: OFF
USE_HEXAGON: OFF
USE_CUSTOM_LOGGING: OFF
USE_UMA: OFF
USE_FALLBACK_STL_MAP: OFF
USE_SORT: ON
USE_RTTI: ON
GIT_COMMIT_TIME: 2024-05-09 21:28:05 -0400
USE_HEXAGON_SDK: /path/to/sdk
USE_BLAS: none
USE_ETHOSN: OFF
USE_LIBTORCH: OFF
USE_RANDOM: ON
USE_CUDA: OFF
USE_COREML: OFF
USE_AMX: OFF
BUILD_STATIC_RUNTIME: OFF
USE_CMSISNN: OFF
USE_KHRONOS_SPIRV: OFF
USE_CLML_GRAPH_EXECUTOR: OFF
USE_TFLITE: OFF
USE_HEXAGON_GTEST: /path/to/hexagon/gtest
PICOJSON_PATH: 3rdparty/picojson
USE_OPENCL_ENABLE_HOST_PTR: OFF
INSTALL_DEV: OFF
USE_PROFILER: ON
USE_NNPACK: OFF
LLVM_VERSION: 15.0.7
USE_MRVL: OFF
USE_OPENCL: OFF
COMPILER_RT_PATH: 3rdparty/compiler-rt
RANG_PATH: 3rdparty/rang/include
USE_SPIRV_KHR_INTEGER_DOT_PRODUCT: OFF
USE_OPENMP: OFF
USE_BNNS: OFF
USE_FLASHINFER:
USE_CUBLAS: OFF
USE_METAL: OFF
USE_MICRO_STANDALONE_RUNTIME: OFF
USE_HEXAGON_EXTERNAL_LIBS: OFF
USE_ALTERNATIVE_LINKER: AUTO
USE_BYODT_POSIT: OFF
USE_HEXAGON_RPC: OFF
USE_MICRO: OFF
DMLC_PATH: 3rdparty/dmlc-core/include
INDEX_DEFAULT_I64: ON
USE_RELAY_DEBUG: OFF
USE_RPC: ON
USE_TENSORFLOW_PATH: none
TVM_CLML_VERSION:
USE_MIOPEN: OFF
USE_ROCM: OFF
USE_PAPI: OFF
USE_CURAND: OFF
TVM_CXX_COMPILER_PATH: /opt/rh/gcc-toolset-11/root/usr/bin/c++
HIDE_PRIVATE_SYMBOLS: ON
Any other relevant information:
Additional context