mlc-ai / mlc-llm

Universal LLM Deployment Engine with ML Compilation
https://llm.mlc.ai/
Apache License 2.0

[Bug] Memory error after opening installed model on Android app #1382

Closed: noknownerrors closed this issue 8 months ago

noknownerrors commented 11 months ago

πŸ› Bug

To Reproduce

Steps to reproduce the behavior:

  1. Compiled this model: https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B (a sketch of the compile flow follows the crash log below).
  2. Copied it onto my device following the steps at https://llm.mlc.ai/docs/deploy/android.html.
  3. Opened the app on my S23 Ultra, tapped Chat, said "hello" -> crash.

```
/home/user/documents/mlc-llm/3rdparty/tvm/src/runtime/opencl/opencl_device_api.cc:238: InternalError: Check failed: (err_code == CL_SUCCESS) is false: OpenCL Error, code=-61: CL_INVALID_BUFFER_SIZE
2023-12-04 18:37:04.362 13338-13419 TVM_RUNTIME ai.mlc.mlcchat A /home/user/documents/mlc-llm/3rdparty/tvm/include/tvm/runtime/packed_func.h:1346: unknown type = 0
2023-12-04 18:37:04.362 13338-13419 TVM_RUNTIME ai.mlc.mlcchat A /home/user/documents/mlc-llm/3rdparty/tvm/src/runtime/memory/memory_manager.cc:162: Allocator for
2023-12-04 18:37:04.362 13338-13419 libc++abi ai.mlc.mlcchat E terminating due to uncaught exception of type tvm::runtime::InternalError: [18:37:04] /home/user/documents/mlc-llm/3rdparty/tvm/src/runtime/memory/memory_manager.cc:162: Allocator for Stack trace not available when DMLC_LOG_STACK_TRACE is disabled at compile time.
2023-12-04 18:37:04.364 13338-13419 libc ai.mlc.mlcchat A Fatal signal 6 (SIGABRT), code -1 (SI_QUEUE) in tid 13419 (pool-3-thread-1), pid 13338 (ai.mlc.mlcchat)
```
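For reference, the compile step above roughly follows the flow in the Android deployment docs. A minimal sketch, assuming the post-#1494 `mlc_chat` CLI; the subcommand names, the `q4f16_1` quantization, and the `llama-2` conversation template are assumptions drawn from the docs' examples, not from this issue:

```shell
# Sketch of the compile flow per https://llm.mlc.ai/docs/deploy/android.html;
# exact flags may differ between versions, so check each subcommand's --help.
MODEL=./OpenHermes-2.5-neural-chat-7b-v3-1-7B   # local Hugging Face checkout (assumed path)

# Quantize and convert the weights (q4f16_1 is an assumed 4-bit choice).
python -m mlc_chat convert_weight $MODEL --quantization q4f16_1 -o dist/model

# Generate mlc-chat-config.json; the conversation template is a placeholder.
python -m mlc_chat gen_config $MODEL --quantization q4f16_1 \
  --conv-template llama-2 -o dist/model

# Compile the model library for the Android target.
python -m mlc_chat compile dist/model/mlc-chat-config.json \
  --device android -o dist/libs/model-android.tar
```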

Expected behavior

Chat with the model

Environment

Additional context

CharlieFRuan commented 8 months ago

Hi, we recently updated our Android flow and the corresponding documentation: https://github.com/mlc-ai/mlc-llm/pull/1494. Please check whether the error still persists.

Besides, this could also be the model demanding more memory than the device can provide; the `CL_INVALID_BUFFER_SIZE` (code -61) in the log typically means a single buffer allocation exceeded what the OpenCL device allows. Try tweaking `prefill_chunk_size`, `context_window_size`, and `sliding_window_size` when compiling the model; for more about these parameters, see `python -m mlc_chat gen_config --help`.
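For instance, a minimal sketch of regenerating the config with smaller memory knobs; the flag spellings are assumed to mirror the parameter names (verify against `--help`), and the values are illustrative rather than tuned:

```shell
# Illustrative values for a memory-constrained phone; confirm the flag names
# with `python -m mlc_chat gen_config --help` before relying on them.
python -m mlc_chat gen_config ./OpenHermes-2.5-neural-chat-7b-v3-1-7B \
  --quantization q4f16_1 \
  --conv-template llama-2 \
  --context-window-size 1024 \
  --prefill-chunk-size 256 \
  -o dist/model
```

A smaller `context_window_size` shrinks the KV cache and a smaller `prefill_chunk_size` shrinks the temporary buffers, at the cost of shorter conversations and slower prefill; `sliding_window_size` only applies to models compiled with sliding-window attention (e.g. Mistral-style models).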

We also print an estimated memory requirement for each compiled model; something like this will appear:

```
[2024-02-08 20:49:38] INFO model_metadata.py:95: Total memory usage: 3730.64 MB (Parameters: 3615.13 MB. KVCache: 0.00 MB. Temporary buffer: 115.51 MB)
[2024-02-08 20:49:38] INFO model_metadata.py:104: To reduce memory usage, tweak `prefill_chunk_size`, `context_window_size` and `sliding_window_size`
```
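As a sanity check, the reported total is just the sum of the three components; in this example the parameter term dominates, while `context_window_size` mainly drives the KV-cache term:

3615.13 MB (parameters) + 0.00 MB (KV cache) + 115.51 MB (temporary buffer) = 3730.64 MB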
CharlieFRuan commented 8 months ago

Closing this one for now; feel free to open another one if issues persist!