Mgz-97 closed this issue 1 year ago
Please check your TVM installation. I don't think you are using TVM Unity. https://mlc.ai/mlc-llm/docs/install/tvm.html
Thanks for the reply. I did follow Option 2 (Build from Source) and successfully verified it with the Validate Installation script. So is it okay to use https://huggingface.co/TheBloke/vicuna-7B-1.1-HF?
Hi @junrushao, and https://huggingface.co/mlc-ai/mlc-chat-RedPajama-INCITE-Chat-3B-v1-q4f16_0/tree/main works well in my build; only running Vicuna hits this issue.
To be clear, vm_load_executable is defined on this line in TVM: https://github.com/mlc-ai/relax/blob/1b87dce4aee9f82d657808e9b244e335fd3cb8f0/src/runtime/relax_vm/executable.cc#L63
This means that if you are using a correct TVM distribution, this method will be found properly. Therefore, it's not about any particular model. Please do check whether you have multiple different TVM distros installed on your system.
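One quick way to spot duplicate installs is to scan sys.path for every directory that contains a tvm package. This is a plain-Python sketch (the helper name and approach are my own, not part of TVM or MLC-LLM):

```python
import os
import sys

def find_module_locations(name: str) -> list:
    """Return every sys.path directory that contains `name` as a package."""
    hits = []
    for entry in sys.path:
        candidate = os.path.join(entry, name)
        # A package is a directory with an __init__.py inside it.
        if os.path.isdir(candidate) and os.path.isfile(os.path.join(candidate, "__init__.py")):
            hits.append(candidate)
    return hits

locations = find_module_locations("tvm")
if len(locations) > 1:
    print("Multiple TVM installs found:", locations)
elif locations:
    print("Single TVM install:", locations[0])
else:
    print("No TVM package found on sys.path")
```

If this prints more than one location, Python may be importing a different TVM than the one you built from source.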
commit bd5ee5888f4ee1e9f14ffcbc335186efb21880f (HEAD -> mlc, origin/mlc, origin/HEAD)
Author: tqchen <tianqi.tchen@gmail.com>
Date:   Mon Jun 12 22:23:11 2023 -0400
[CherryPick][ARITH] Improve arith simplify to handle symbolic reshape pattern
This PR enhances arith simplify to handle symbolic reshape patterns.
Lift the CombineIters to callers of TryFuseIters so they can be used
in early return simplifications. Testcases are added.
Also updates a minor spelling issue in the testcase.
<CDLL '/home/owen/anaconda3/envs/tvm-build-venv/lib/python3.9/site-packages/tvm-0.12.dev1094+gbd5ee5888-py3.9-linux-x86_64.egg/tvm/libtvm.so', handle 5609a1d288a0 at 0x7f7ee0b1fbb0>
Traceback (most recent call last):
File "
Thanks for the explanation. I confirm that I use the latest TVM distro in my tvm-build-venv, but I am not sure which TVM version the Android build uses. How can I verify that the Android build uses the right TVM version?
I really want to help, but according to the error message, the only possibility is that you might be using a mismatched TVM runtime (TVM4J), which is supposed to be built on Step 4 here: https://mlc.ai/mlc-llm/docs/tutorials/runtime/android.html#app-build-instructions. Would you mind double checking the instructions you used to build TVM4J?
🐛 Bug
TVMError:
An error occurred during the execution of TVM. For more information, please see: https://tvm.apache.org/docs/errors.html
Check failed: (fload_exec.defined()) is false: TVM runtime cannot find vm_load_executable Stack trace: File "/home/*/MLC_NEW/tvm-unity/jvm/mlc-llm/cpp/llm_chat.cc", line 246
To Reproduce
Steps to reproduce the behavior:
1. We use https://huggingface.co/TheBloke/vicuna-7B-1.1-HF to prepare the lib for Vicuna.
Expected behavior
Environment
python -c "import tvm; print('\n'.join(f'{k}: {v}' for k, v in tvm.support.libinfo().items()))"
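Related to the question above about matching the Android build: the output of tvm.support.libinfo() normally includes the build's git commit hash, which can be compared against the checkout used for the TVM4J build. A small hedged sketch (the helper name is my own, and the snippet degrades gracefully if TVM is not importable):

```python
def tvm_build_summary() -> str:
    """Report which TVM this interpreter imports and its build commit."""
    try:
        import tvm  # may fail if no TVM is installed in this environment
    except ImportError:
        return "no TVM importable in this environment"
    info = tvm.support.libinfo()
    # GIT_COMMIT_HASH is one of the keys libinfo() normally reports;
    # fall back to a placeholder if this build omits it.
    return f"{tvm.__file__}: {info.get('GIT_COMMIT_HASH', '<unknown>')}"

print(tvm_build_summary())
```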
(applicable if you compile models)

Additional context