Big-Boy-420 opened this issue 3 weeks ago
cc @CharlieFRuan Could you take a quick look at your convenience?
Likely TVM is not cloned recursively. Find the TVM repo you are using locally and run `git submodule update --init --recursive`, or use `git clone --recursive url-to-repo` in the first place.
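A quick way to tell whether the submodules were actually fetched is `git submodule status`: lines prefixed with `-` mean that submodule was never initialized. Here is a small helper (hypothetical, not from this thread) that extracts those paths:

```shell
# Print the paths of submodules that were never initialized.
# `git submodule status` marks such entries with a leading '-'.
check_uninitialized() {
  grep '^-' | awk '{print $2}'
}

# Usage inside the repo:
#   git submodule status | check_uninitialized
```

If this prints `3rdparty/tvm` (or anything at all), the recursive clone did not complete.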
Just saw that it is done in the command. Will revisit later today.
Could you check whether mlc-llm/3rdparty/tvm is empty? And if /content/tvm is empty? Thanks!
Hi,
Thanks for getting back to me.
As requested:
contents of `mlc-llm/3rdparty/tvm`:

```
3rdparty CMakeLists.txt CONTRIBUTORS.md golang LICENSE NEWS.md README.md version.py
apps conda docker include licenses NOTICE rust vta
ci configs docs jvm Makefile pyproject.toml src web
cmake conftest.py gallery KEYS mypy.ini python tests
```

contents of `tvm`:

```
3rdparty CMakeLists.txt CONTRIBUTORS.md golang LICENSE NEWS.md README.md version.py
apps conda docker include licenses NOTICE rust vta
ci configs docs jvm Makefile pyproject.toml src web
cmake conftest.py gallery KEYS mypy.ini python tests
```
I see you have `TVM_SOURCE_DIR_SET=/content/tvm/3rdparty/tvm`. It should be `TVM_SOURCE_DIR_SET=/content/tvm`.
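Based on the directory listings above, a quick sanity check like this (a hypothetical helper, not part of mlc_llm) can confirm the variable points at a real TVM source root rather than a nested or empty path:

```shell
# Heuristic check: a TVM source root should contain at least these
# top-level entries (per the `ls` output earlier in this thread).
check_tvm_root() {
  for f in CMakeLists.txt python src web; do
    [ -e "$1/$f" ] || { echo "not a TVM root: missing $1/$f"; return 1; }
  done
  echo "looks like a TVM source root: $1"
}

# Usage:
#   check_tvm_root /content/tvm
```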
That has got it further. I am thinking that was perhaps a misunderstanding on my part based on Step 2 - or perhaps adjust the wording so it's more obvious? :p
It gets to the compile stage now, but that fails with the output below:
```
[2024-11-04 20:38:01] INFO pipeline.py:54: Compiling external modules
[2024-11-04 20:38:01] INFO pipeline.py:54: Compilation complete! Exporting to disk
[20:38:06] /workspace/tvm/src/target/llvm/codegen_llvm.cc:185: Warning: Set native vector bits to be 128 for wasm32
Traceback (most recent call last):
  File "/usr/local/bin/mlc_llm", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.10/dist-packages/mlc_llm/__main__.py", line 33, in main
    cli.main(sys.argv[2:])
  File "/usr/local/lib/python3.10/dist-packages/mlc_llm/cli/compile.py", line 129, in main
    compile(
  File "/usr/local/lib/python3.10/dist-packages/mlc_llm/interface/compile.py", line 243, in compile
    _compile(args, model_config)
  File "/usr/local/lib/python3.10/dist-packages/mlc_llm/interface/compile.py", line 188, in _compile
    args.build_func(
  File "/usr/local/lib/python3.10/dist-packages/mlc_llm/support/auto_target.py", line 258, in build
    relax.build(
  File "/usr/local/lib/python3.10/dist-packages/tvm/relax/vm_build.py", line 146, in export_library
    return self.mod.export_library(
  File "/usr/local/lib/python3.10/dist-packages/tvm/runtime/module.py", line 628, in export_library
    return fcompile(file_name, files, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/tvm/contrib/emcc.py", line 74, in create_tvmjs_wasm
    all_libs += [find_lib_path("wasm_runtime.bc")[0]]
  File "/usr/local/lib/python3.10/dist-packages/tvm/_ffi/libinfo.py", line 166, in find_lib_path
    raise RuntimeError(message)
RuntimeError: Cannot find libraries: wasm_runtime.bc
List of candidates:
/usr/lib64-nvidia/wasm_runtime.bc
/content/emsdk/upstream/emscripten/wasm_runtime.bc
/content/emsdk/wasm_runtime.bc
/opt/bin/wasm_runtime.bc
/usr/local/cuda-12.2/bin/wasm_runtime.bc
/usr/local/sbin/wasm_runtime.bc
/usr/local/bin/wasm_runtime.bc
/usr/sbin/wasm_runtime.bc
/usr/bin/wasm_runtime.bc
/usr/sbin/wasm_runtime.bc
/usr/bin/wasm_runtime.bc
/tools/node/bin/wasm_runtime.bc
/tools/google-cloud-sdk/bin/wasm_runtime.bc
/usr/local/lib/python3.10/dist-packages/tvm/wasm_runtime.bc
/usr/local/lib/wasm_runtime.bc
```
I ran `!ls -l ${TVM_SOURCE_DIR}/web/dist/wasm/*.bc`, which produced:

```
-rw-r--r-- 1 root root  166140 Nov  4 20:35 /content/tvm/web/dist/wasm/tvmjs_support.bc
-rw-r--r-- 1 root root 4533080 Nov  4 20:35 /content/tvm/web/dist/wasm/wasm_runtime.bc
-rw-r--r-- 1 root root  181292 Nov  4 20:35 /content/tvm/web/dist/wasm/webgpu_runtime.bc
```
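Since `wasm_runtime.bc` clearly exists under `${TVM_SOURCE_DIR}/web/dist/wasm` but that directory is absent from the candidate list, one thing worth trying (an assumption, not a confirmed fix): TVM's `find_lib_path` also consults the `TVM_LIBRARY_PATH` environment variable, so pointing it at the directory that actually contains the `.bc` files may let the compile step find them.

```shell
# Assumption: tvm/_ffi/libinfo.py searches TVM_LIBRARY_PATH first, so
# exporting it before `mlc_llm compile` adds web/dist/wasm to the
# candidate list above.
export TVM_LIBRARY_PATH="${TVM_SOURCE_DIR}/web/dist/wasm"
```

This needs to be exported in the same shell/process that runs `mlc_llm compile` (in Colab, `%env TVM_LIBRARY_PATH=...` rather than a `!export` line, since each `!` command runs in its own subshell).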
Thank you so much for all your help so far - it looks like we're very close.
Hi @CharlieFRuan, sorry to keep on - any ideas?
Hi,
I'm trying to compile a Llama-3.2 model. I have followed the setup instructions, but before I can get to running the `mlc_llm compile` command, I run `./web/prep_emcc_deps.sh`, which fails with the following output and error. I am running this on a Google Colab.
Below is a summary of my steps (there may be one or two unnecessary ones here due to my own learning/debugging, but I don't think they are the cause - if they are, please tell me):
1. Install everything
2. Log in to HF
3. Clone target model (this is for test purposes)
4. Convert the weights
5. Generate the config
6. Compile the model (step which breaks)
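For steps 4-6, a sketch of the command sequence I mean (paths, model name, quantization, and conv-template here are illustrative assumptions on my side; please check the mlc-llm docs for the exact flags for your model):

```shell
MODEL=./Llama-3.2-1B-Instruct   # HF checkout from step 3 (name is illustrative)
QUANT=q4f16_1                   # quantization scheme (assumption)
OUT=./dist/llama32-webgpu
mkdir -p "$OUT"

mlc_llm convert_weight "$MODEL" --quantization "$QUANT" -o "$OUT"      # step 4
mlc_llm gen_config "$MODEL" --quantization "$QUANT" \
    --conv-template llama-3 -o "$OUT"                                  # step 5
mlc_llm compile "$OUT/mlc-chat-config.json" --device webgpu \
    -o "$OUT/llama32-webgpu.wasm"                                      # step 6 (breaks)
```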
Please can someone help/advise?
Many thanks,