mlc-ai / web-llm-chat

Chat with AI large language models running natively in your browser. Enjoy private, server-free, seamless AI conversations.
https://chat.webllm.ai/
Apache License 2.0

support ios safari #40

Closed fyears closed 4 months ago

fyears commented 4 months ago

Problem Description

I am using an iPhone with iOS 17.5.1.

Initially, iOS Safari does not enable WebGPU, so the online web page chat is not available.

However, even after I enable WebGPU in Safari settings, I still cannot run the web page chat.
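A page can distinguish "WebGPU not exposed at all" from "exposed but no usable adapter", which helps diagnose situations like this one. Below is a minimal sketch using the standard `navigator.gpu` API; the returned status strings are my own labels, not anything from web-llm-chat.

```javascript
// Sketch: probe WebGPU availability. The status strings are hypothetical
// labels for this example, not part of web-llm-chat's code.
async function checkWebGPU() {
  const gpu = globalThis.navigator?.gpu;
  if (!gpu) {
    // e.g. iOS Safari with the WebGPU feature flag disabled
    return "webgpu-not-exposed";
  }
  const adapter = await gpu.requestAdapter();
  if (!adapter) {
    // Flag enabled, but the browser could not provide a GPU adapter
    return "no-adapter";
  }
  return "ok";
}
```

Running this in the page console (or at startup) would show whether the Safari setting actually took effect before any model download begins.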

Solution Description

Make it work on iOS Safari.

Alternatives Considered

No response

Additional Context

No response

fyears commented 4 months ago

OK, well, after I killed and restarted Safari, something seems to be working now.


Fetching param cache[3/108]: 134MB fetched. 3% completed, 47 secs elapsed. It can take a while when we first visit this page to populate the cache. Later refreshes will become faster.
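The message above indicates the model shards are being stored in the browser's cache so later visits load faster. When a partially populated cache causes trouble, one generic way to force a clean re-download is to clear Cache Storage. This is a browser-only sketch using the standard `caches` API; the idea of wiping all caches (rather than a specific named one) is an assumption for illustration.

```javascript
// Sketch (browser-only, assumption: clearing all Cache Storage entries is
// acceptable): list and delete caches to force a fresh model download.
async function clearModelCaches() {
  if (!globalThis.caches) return []; // Cache Storage exists only in browsers
  const names = await caches.keys();
  await Promise.all(names.map((name) => caches.delete(name)));
  return names; // the cache names that were removed
}
```

After clearing, reloading the page should trigger the full "Fetching param cache" download again from scratch.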
fyears commented 4 months ago

I switched to TinyLlama. It fetches ~600 MB of params.

Then:

Loading model from cache[11/24]: 591MB loaded. 100% completed, 128 secs elapsed.

Error: PackedFunc has already been disposed

I guess there is still something wrong.
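The "PackedFunc has already been disposed" error suggests a call landed on a runtime object that had already been torn down. A generic way for application code to cope is to re-initialize the resource and retry once when a "disposed" style error surfaces. The helper below is a hedged sketch: `init`, `fn`, and the error-message pattern are assumptions for illustration, not WebLLM's actual API.

```javascript
// Sketch: retry a call after rebuilding a resource that throws a
// "disposed" style error. All names here are hypothetical.
async function withReinit(init, fn, maxRetries = 1) {
  let resource = await init();
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn(resource);
    } catch (err) {
      // Only retry on disposal errors, and only up to maxRetries times
      if (attempt >= maxRetries || !/disposed/i.test(String(err))) throw err;
      resource = await init(); // rebuild the disposed resource, then retry
    }
  }
}
```

In a chat app this pattern would wrap the inference call, so a disposed engine is transparently recreated instead of surfacing the error to the user.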