mlc-ai / web-llm

High-performance In-browser LLM Inference Engine
https://webllm.mlc.ai
Apache License 2.0

Gemma 2 2B crashes on mobile phone #524

Open flatsiedatsie opened 1 month ago

flatsiedatsie commented 1 month ago

Whenever I try to load it, it crashes Chrome.

This is on a Pixel 6a with 6 GB of RAM.

To make sure it wasn't simply too big, I tried running Gemma 2 2B via Wllama (1.63GB Q4 .gguf). That did run.

Additional tests

CharlieFRuan commented 1 month ago

Do you happen to have the console log? Also, what is the maxStorageBufferBindingSize reported by webgpureport.org?

flatsiedatsie commented 1 month ago

It's 2 GB.

// full screenshots:

Screenshot 2024-08-05 at 09 24 21 Screenshot 2024-08-05 at 09 24 36
CharlieFRuan commented 1 month ago

It may be due to one of the limits being exceeded (not necessarily the buffer size; 2 GB sounds sufficient). Gemma requires larger sizes for certain buffers than other models because of its large vocabulary (256K, compared to 128K for models like Llama 3.1). I might have to look into this later.

Edit: actually, just saw that you mentioned Phi 3 Mini crashes as well. I will try to look into this. Meanwhile, if you have some sort of log, it would be very helpful, perhaps with remote debugging.
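For a rough sense of why vocabulary size matters: the output-projection (lm_head) weight scales with vocab_size × hidden_size. A back-of-the-envelope sketch (the hidden size of 2304 for Gemma 2 2B and fp16 storage are assumptions here, and web-llm shards large weights, so this is an upper bound for a single unsharded buffer, not what the engine actually allocates):

```javascript
// Back-of-the-envelope size of an unsharded fp16 output-projection
// (lm_head) weight buffer: vocab_size * hidden_size * bytes per param.
function lmHeadBytes(vocabSize, hiddenSize, bytesPerParam = 2) {
  return vocabSize * hiddenSize * bytesPerParam;
}

// Assumed Gemma 2 2B config: 256K vocab, hidden size 2304, fp16.
const gemma2 = lmHeadBytes(256000, 2304);
console.log((gemma2 / 2 ** 30).toFixed(2), "GiB"); // ~1.10 GiB
```

A buffer in that size range sits uncomfortably close to typical mobile WebGPU limits, which is consistent with a 256K-vocab model hitting a ceiling that a 128K-vocab model avoids.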

flatsiedatsie commented 1 month ago

I'm already using USB debugging, so I can help you there.

What kind of info would you like? Is there a debug logging mode I can activate?

// edit: I went through my recent error screenshots and found a few that belong to Web-LLM. Not sure to what degree they relate to this issue, though.

Screenshot 2024-08-04 at 14 30 41 Screenshot 2024-08-04 at 15 57 12 Screenshot 2024-08-04 at 15 58 15
CharlieFRuan commented 1 month ago

Ahh yes, there is a DEBUG mode here: https://github.com/mlc-ai/web-llm/issues/519#issuecomment-2263648799

Any log that may relate to the crash would be helpful, thanks!

flatsiedatsie commented 1 month ago

I'm using a slightly different UI, my own project :-)

Can I enable debug mode from JavaScript?

Screenshot 2024-08-05 at 23 17 41
CharlieFRuan commented 1 month ago

Ah yes! There is a logLevel option in EngineConfig. You can set it to "INFO", as shown here: https://github.com/mlc-ai/web-llm/blob/main/examples/simple-chat-ts/src/simple_chat.ts#L345

flatsiedatsie commented 1 month ago

Already found it, thanks :-)

window.web_llm_worker = new Worker(
    new URL('./web_llm_worker.js', import.meta.url), { type: 'module' }
);

// Creating the WebLLM engine
window.web_llm_engine = await webllm.CreateWebWorkerMLCEngine(
    window.web_llm_worker,
    web_llm_model_id,
    {
        initProgressCallback: function (mes) {
            //console.log('WebLLM init progress message received: ', mes);
            window.handle_web_llm_init_progress(mes);
        },
        appConfig: window.web_llm_app_config,
        logLevel: "DEBUG"
    },
    chatOpts
);
flatsiedatsie commented 1 month ago

What the heck.. now that I've enabled debugging.. Gemma 2 2B suddenly works 0_0.

Phi 3 mini crashed, but retrying a few times I managed to get a response!

So strange.

Screenshot 2024-08-06 at 01 16 47

// ..and then it crashed again. No interesting output in the debug though.

CharlieFRuan commented 1 month ago

I see... thanks for the info!

CharlieFRuan commented 1 month ago

There are various issues similar to this on mobile devices, probably something related to WebGPU on Android Chrome. I don't have a fix in mind off the top of my head. Not sure whether updating Android and using the latest Chrome Canary would alleviate it.

flatsiedatsie commented 1 month ago

The phone went into standby, and then when I woke it up and tried running inference I saw this:

Screenshot 2024-08-06 at 09 22 23

It seems to be related to 'losing the WebGPU'. Should I call MLCEngine.reload(model) before each inference? Or can I detect if the model has been removed from memory by the OS somehow? How can I hook into the "A valid external Instance reference no longer exist" error?

CharlieFRuan commented 1 month ago

Quick question, are you using WebWorker, ServiceWorker, or the plain MLCEngine? For ServiceWorker, my understanding is that this PR has fixed this: https://github.com/mlc-ai/web-llm/pull/471

flatsiedatsie commented 1 month ago

WebWorker.

I noticed I hadn't put a try-catch around WebLLM there (a testament to its quality), but I've added that now in the hope of catching the GPU-disappeared event and then simply restarting the engine.

WebLLM says "please initialize again", but is there a setting to let WebLLM do this by itself? "Stay alive until told otherwise" could even be a default?
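In the meantime, a frontend-side workaround is to wrap each inference call in a try/catch and reload the engine once on failure. A minimal sketch (the generateWithRecovery helper is hypothetical, not part of the WebLLM API; it assumes a lost GPU device surfaces as a thrown error from the completion call):

```javascript
// Hypothetical helper: retry one inference after reloading the engine,
// for cases where the phone went into standby and the WebGPU device
// backing the worker was lost.
async function generateWithRecovery(engine, modelId, messages) {
  try {
    return await engine.chat.completions.create({ messages });
  } catch (err) {
    // Assumption: a lost device surfaces here as a thrown error.
    // Reload the model once, then retry before giving up.
    console.warn("Inference failed, reloading engine:", err);
    await engine.reload(modelId);
    return await engine.chat.completions.create({ messages });
  }
}
```

This is only a stopgap on top of the public engine.reload() and chat.completions.create() calls; a built-in "stay alive" mode would be cleaner.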

CharlieFRuan commented 1 month ago

This seems to be an issue where the web worker is terminated because the phone goes into standby, but your frontend logic's state is still preserved, so it sends a request directly, expecting the model to be loaded. We had a similar issue with the service worker before: https://github.com/mlc-ai/web-llm/pull/471.

This PR https://github.com/mlc-ai/web-llm/pull/533 extends the service worker fix to the web worker as well. You can test it locally, or try it out when the new npm version is published.

The main logic is that when the backend realizes there is a mismatch between the model the frontend expects to be loaded and the model the backend actually has loaded, the backend calls reload() internally.
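That reconciliation logic can be sketched as follows (an illustrative mock, not the actual web-llm source; the WorkerBackend class and its method names are invented for this sketch):

```javascript
// Illustrative sketch of the fix described above: the worker side tracks
// which model is actually resident, and transparently reloads when a
// request names a model it no longer has (e.g. after a worker restart).
class WorkerBackend {
  constructor(loadModel) {
    this.loadModel = loadModel; // async (modelId) => void, e.g. engine.reload
    this.loadedModel = null;    // model actually loaded in this worker
  }

  async handleRequest(expectedModel, run) {
    if (this.loadedModel !== expectedModel) {
      // Frontend believes `expectedModel` is loaded, but the worker was
      // restarted and lost it: reload before serving the request.
      await this.loadModel(expectedModel);
      this.loadedModel = expectedModel;
    }
    return run();
  }
}
```

The key design choice is that the frontend never has to detect the loss itself; every request carries the expected model id, and the worker repairs any mismatch before running inference.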

CharlieFRuan commented 1 month ago

This should be included in npm 0.2.56. Let me know if the issue is fixed!