mlc-ai / mlc-llm

Universal LLM Deployment Engine with ML Compilation
https://llm.mlc.ai/
Apache License 2.0
19.08k stars · 1.56k forks

Check failed: (fload_exec.defined()) is false: TVM runtime cannot find vm_load_executable #1992

Closed omkar806 closed 7 months ago

omkar806 commented 7 months ago
[Screenshot of the error message attached]

I followed all the steps to use the Swift API/package, and I am using the gemma-2b-q4f16 model, but it is giving me this error. I also checked the TVM source and executable.cc is present in the correct location. Is it due to the q4 quantization?

MasterJH5574 commented 7 months ago

Hi @omkar806, thanks for bringing this up. The issue is that the model_lib_path_for_prepare_libs field in app_config.json contains some unrecognized keys. We enhanced the error message in #1993 to provide you with the list of candidates. Could you check out the latest mlc-llm and TVM and try again? Meanwhile, we would appreciate any feedback you have when you see the new error message and update app_config.json accordingly. Thank you in advance!
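For context, the point of the fix is that every key in `model_lib_path_for_prepare_libs` must match a `model_lib` value declared in `model_list`. Below is a hypothetical sketch of an `app_config.json` (the model URL, library path, and VRAM estimate are illustrative, not taken from this thread; check the schema for your mlc-llm version):

```json
{
  "model_list": [
    {
      "model_id": "gemma-2b-q4f16_1",
      "model_lib": "gemma_q4f16_1",
      "model_url": "https://huggingface.co/mlc-ai/gemma-2b-it-q4f16_1-MLC",
      "estimated_vram_bytes": 3000000000
    }
  ],
  "model_lib_path_for_prepare_libs": {
    "gemma_q4f16_1": "lib/gemma-2b-q4f16_1-iphone.tar"
  }
}
```

If a key under `model_lib_path_for_prepare_libs` (here `gemma_q4f16_1`) does not correspond to any `model_lib` in `model_list`, the packaged app cannot resolve the compiled model library, which surfaces at runtime as the `vm_load_executable` lookup failure.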

omkar806 commented 7 months ago

So I have to make some changes in app_config.json, right? Okay, got it.

triensslci commented 7 months ago

@omkar806 I also encountered the same error as you, have you found a solution yet?

omkar806 commented 7 months ago

Hey, I am facing another error now: InternalError: Check failed: (config_istream) is false:. Did you face this error? For the iOS API, which folders do we have to add to the target — should we copy all the items or just add them as a reference?

MasterJH5574 commented 7 months ago

Closing due to inactivity. The information mentioned in https://github.com/mlc-ai/mlc-llm/issues/1992#issuecomment-2014329151 may be helpful when seeing a similar error.