mlc-ai / web-llm

High-performance In-browser LLM Inference Engine
https://webllm.mlc.ai
Apache License 2.0

cannot read properties of undefined - description #572

Closed · flatsiedatsie closed this 2 months ago

flatsiedatsie commented 2 months ago

I'm trying the Phi-3.5 vision demo, but I got the error below.

[Screenshot: the error, 2024-09-23]

I proceeded to wrap that bit of code in `if (K.adapterInfo) { ... }`

And then it started working.
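
For reference, the workaround amounts to guarding the dereference. A minimal sketch (the real code is minified, so `K` and the names below are illustrative, not the actual identifiers):

```ts
// Hypothetical reconstruction of the guard: only touch adapter info when
// the browser actually populated it, since GPUAdapter.info can be
// undefined in some browsers.
function logAdapterInfo(adapter: GPUAdapter): void {
  const info = adapter.info; // may be undefined depending on the browser
  if (info) {
    console.log(info.vendor, info.description);
  }
}
```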

Another thing I then ran into: I'm not allowed to use relative image paths?

[Screenshot: the image-path error, 2024-09-23]

flatsiedatsie commented 2 months ago

I've created a quick demo on GitHub Pages:

https://flatsiedatsie.github.io/web_llm_vision/

and then I created a Reddit post about your awesome work here:

https://www.reddit.com/r/LocalLLaMA/comments/1fnjnkc/webllm_has_added_support_for_its_first_vision/

CharlieFRuan commented 2 months ago

I haven't run into this issue yet, but below is some info.

The adapterInfo is defined here: https://github.com/tqchen/tvm/blob/412aec4e2df8f01ee15f3165d9f740af5190866d/web/src/webgpu.ts#L27-L31

The adapterInfo is populated here: https://github.com/tqchen/tvm/blob/412aec4e2df8f01ee15f3165d9f740af5190866d/web/src/webgpu.ts#L119-L133

Specifically, GPUAdapter.info (i.e. the adapterInfo) is documented here: https://www.w3.org/TR/webgpu/#gpuadapter

I am not quite sure why info could be undefined, though there is indeed a related API deprecation of requestAdapterInfo() on the WebGPU side, which we adjusted for here: https://github.com/apache/tvm/pull/17371
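
For context, a sketch of the kind of compatibility shim this implies (assuming the TVMjs helper behaves roughly like this; it is not the actual code):

```ts
// Prefer the newer synchronous GPUAdapter.info attribute; fall back to the
// deprecated async requestAdapterInfo(); return undefined if the browser
// supports neither, which is the case the guard above has to handle.
async function getAdapterInfo(
  adapter: GPUAdapter
): Promise<GPUAdapterInfo | undefined> {
  if (adapter.info !== undefined) {
    return adapter.info;
  }
  const legacy = (adapter as unknown as {
    requestAdapterInfo?: () => Promise<GPUAdapterInfo>;
  }).requestAdapterInfo;
  return legacy !== undefined ? await legacy.call(adapter) : undefined;
}
```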

CharlieFRuan commented 2 months ago

I wonder if it is because your browser still only supports requestAdapterInfo(). If it's not too inconvenient, could you try updating the browser or using a different browser to see if the same issue is observed? Thanks!

flatsiedatsie commented 2 months ago

Sure, I'll give it a try.

flatsiedatsie commented 2 months ago

I removed the checking code:

`if (K.adapterInfo) { ... }`

And then enabled WebGPU in Safari. I saw the same error:

[Screenshot: the same error, pointing at line 27, 2024-09-26]

flatsiedatsie commented 2 months ago

I just found out this is now interfering with all WebLLM inference, not just the Vision model.

[Screenshot: the error now appearing during regular inference, 2024-09-26]

slash-under commented 2 months ago

This issue affects me as well on Thorium (Chromium) 124 with an Nvidia GPU on Windows, and the proposed workaround fixes it for me. As for the GPUAdapter interface, some members are missing, including GPUAdapterInfo.

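A quick way to see which of these APIs a given browser exposes is a diagnostic snippet pasted into the DevTools console (not part of WebLLM; shown for troubleshooting only):

```ts
// Check which adapter-info surfaces exist in this browser.
const adapter = await navigator.gpu?.requestAdapter();
console.log("GPUAdapterInfo defined:", typeof GPUAdapterInfo !== "undefined");
console.log("adapter.info:", adapter?.info); // undefined on older Chromium builds
console.log("requestAdapterInfo:", typeof (adapter as any)?.requestAdapterInfo);
```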

CharlieFRuan commented 2 months ago

Thanks for all the info, I'll fix it today

CharlieFRuan commented 2 months ago

I believe using the latest Chrome should fix it, but I'll make sure we stay backward compatible.

slash-under commented 2 months ago

The issue persists with the latest Thorium (126), but it looks like there have been further Chromium releases upstream.

Edit: there are no issues when using a nightly build of Chromium: 131.0.6742.0

CharlieFRuan commented 2 months ago

Thanks for the info; the fix is on the way.

CharlieFRuan commented 2 months ago

0.2.70 should include the fix. Please let me know whether it works on your end, thank you!

slash-under commented 2 months ago

The fix does not appear to work on my end, but this may be an unrelated bug.

The updated code is in the index.js file; see here: index.js

CharlieFRuan commented 2 months ago

Hmm what is the bug you are seeing?

flatsiedatsie commented 2 months ago

Still seeing the same error:

[Screenshot: the same error, 2024-09-27]

(Hmm, the code still says 0.2.69.)

flatsiedatsie commented 2 months ago

Could it be that the CDN version doesn't always get updated properly?

https://cdn.jsdelivr.net/npm/@mlc-ai/web-llm/+esm

[Screenshot: the CDN-served code, 2024-09-27]

CharlieFRuan commented 2 months ago

The CDN seems to take a bit to get updated. Perhaps you can try specifying 0.2.70 explicitly? https://www.jsdelivr.com/package/npm/@mlc-ai/web-llm
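
For example, pinning the exact release with jsDelivr's `@version` URL pattern avoids the stale unversioned alias (a minimal usage sketch):

```ts
// Import a pinned release instead of the floating "latest" alias.
import * as webllm from "https://cdn.jsdelivr.net/npm/@mlc-ai/web-llm@0.2.70/+esm";
```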

flatsiedatsie commented 2 months ago

Thanks, that helped.

New error with 0.2.70:

[Screenshot: the new error, 2024-09-27]

(no other details in the console)

CharlieFRuan commented 2 months ago

Which version of Brave are you on? I'll try to reproduce on my end.

CharlieFRuan commented 2 months ago

Ahh, I can reproduce on Version 1.65.133 Chromium: 124.0.6367.208 (Official Build) (arm64). Taking a look.

CharlieFRuan commented 2 months ago

Turns out I missed an await. 0.2.71 should fix this thoroughly: https://github.com/mlc-ai/web-llm/pull/583. I confirmed with Brave 1.65.133.
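
For anyone hitting similar symptoms, the failure mode of a missed await looks roughly like this (an illustrative sketch, not the actual diff in #583; the fallback shape is assumed):

```ts
// requestAdapterInfo() returns a Promise; without an await, the caller
// holds a pending Promise whose .description is undefined, and later
// dereferences of that value throw "cannot read properties of undefined".
async function describeAdapter(adapter: GPUAdapter): Promise<string | undefined> {
  const legacy = (adapter as unknown as {
    requestAdapterInfo?: () => Promise<GPUAdapterInfo>;
  }).requestAdapterInfo;
  if (legacy === undefined) {
    return adapter.info?.description;
  }
  // Buggy: const info = legacy.call(adapter);  // a Promise, not the info
  const info = await legacy.call(adapter); // fixed: await the Promise
  return info.description;
}
```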

Apologies for the inconvenience!

flatsiedatsie commented 2 months ago

Glad you found a bug! I just realized I wasn't on the latest version of Brave:

Version 1.67.123 Chromium: 126.0.6478.126 (Official Build) (arm64)

I've upgraded now, and the problem is gone, even without being on 0.2.71.

(and never any need to apologize, the fact that WebLLM even exists is awesome enough)

CharlieFRuan commented 2 months ago

Glad the issue is solved! Will close this one. Feel free to reopen / open new ones if issues persist

slash-under commented 2 months ago

It seems this is also how hardware issues surface: if the GPU process dies a certain way, you may end up with this error upon reload.


slash-under commented 2 months ago

The transformed code appears to be in the build as well.

slash-under commented 2 months ago

It seems that the web worker is the issue, and it does not have the latest patch.

CharlieFRuan commented 2 months ago

I wonder if it is because you need to clear some cached files. I feel like the index.js on npm is kind of the source of truth for the most updated code: https://www.npmjs.com/package/@mlc-ai/web-llm?activeTab=code

IIUC, the `const adapterInfo` line should appear only once, since the WebWorker and non-WebWorker paths call into the same helper function implemented in TVMjs.

Perhaps do something like `rm -r lib/ node_modules/ dist/ .parcel-cache`, then re-run `npm install` and `npm run build`.

slash-under commented 2 months ago

I removed `.next/` and things worked afterwards; the package was okay, but the intermediates needed updating.

CharlieFRuan commented 2 months ago

I see! Glad it worked!