mlc-ai / mlc-llm

Universal LLM Deployment Engine with ML Compilation
https://llm.mlc.ai/
Apache License 2.0

org.apache.tvm.Base$TVMError: InternalError: Check failed: (e == CL_SUCCESS) is false: OpenCL Error, code=-54: CL_INVALID_WORK_GROUP_SIZE [Bug] #2088

Closed: simranKa closed this issue 6 months ago

simranKa commented 7 months ago

🐛 Bug

The model initialised successfully and displayed the "ready to chat" message. The issue occurs when a message is sent. Stack trace: File "/mlc-llm/3rdparty/tvm/src/runtime/opencl/opencl_module.cc", line 90

at org.apache.tvm.Base.checkCall(Base.java:173)
at org.apache.tvm.Function.invoke(Function.java:130)
at ai.mlc.mlcllm.ChatModule.prefill(ChatModule.java:54)
at ai.mlc.mlcchat.AppViewModel$ChatState$requestGenerate$1$1.invoke(AppViewModel.kt:666)
at ai.mlc.mlcchat.AppViewModel$ChatState$requestGenerate$1$1.invoke(AppViewModel.kt:666)
at ai.mlc.mlcchat.AppViewModel$ChatState.callBackend(AppViewModel.kt:548)
at ai.mlc.mlcchat.AppViewModel$ChatState.requestGenerate$lambda$4(AppViewModel.kt:666)
at ai.mlc.mlcchat.AppViewModel$ChatState.$r8$lambda$lluIrcsPALEW5nCb2tohZYadhTY(Unknown Source:0)
at ai.mlc.mlcchat.AppViewModel$ChatState$$ExternalSyntheticLambda3.run(Unknown Source:6)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:462)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:923)

Error message: InternalError: Check failed: (e == CL_SUCCESS) is false: OpenCL Error, code=-54: CL_INVALID_WORK_GROUP_SIZE Stack trace: File "/mlc-llm/3rdparty/tvm/src/runtime/opencl/opencl_module.cc", line 90

Environment

I have tried the following models; none of them worked:

  1. gemma-2b-it-q4f16_1-MLC
  2. TinyLlama-1.1B-Chat-v0.4-q4f16_1-MLC
  3. WizardMath-7B-V1.1-q4f16_1-MLC
  4. gpt2-q4f16_1-MLC
  5. Llama-2-7b-chat-hf-q4f16_1-MLC (Getting crash)
  6. RedPajama-INCITE-Chat-3B-v1-q4f16_1-MLC
  7. Mistral-7B-Instruct-v0.2-q4f16_1-MLC (Getting crash)
  8. phi-2-q4f16_1-MLC

Hzfengsy commented 7 months ago

Which SoC do you use? And could you please check whether it works on mobile phones?

simranKa commented 7 months ago

Yes, I have tested it on mobile phones and it works there, but my priority is AOSP devices. The chipsets of the devices I am using:

  1. Realwear T21G navigator : 2.0 GHz 8-core Qualcomm® Snapdragon™ 662 with Adreno 610 GPU - OpenGL® ES 3.2 & OpenCL™ 2.0 (4GB RAM)
  2. Realwear T1100G HMT : 2.0 GHz 8-core Qualcomm® Snapdragon™ 625 with Adreno 506 GPU – OpenGL ES 3.1 & OpenCL 2.0 (2GB RAM)

tqchen commented 6 months ago

Likely we need to further restrict the work-group sizes, and these devices' memory is too small for the Llama-style models.