xenova / transformers.js

State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!
https://huggingface.co/docs/transformers.js
Apache License 2.0

musicgen example run error on latest v3 #865

Open purplekidlouis opened 1 month ago

purplekidlouis commented 1 month ago

System Info

macOS, Chrome (WebGPU supported)

Environment/Platform

Description

Hi, I just built the MusicGen example using the latest version (v3), and I encountered an error during runtime. The error appears to be related to the session option "preferredOutputLocation" not being supported for the proxy. This issue prevents the inference session from being created, and subsequently, the model fails to load.

Uncaught (in promise) Error: session option "preferredOutputLocation" is not supported for proxy.
    at Hd (ort.webgpu.min.js:2309:378109)
    at fi.loadModel (ort.webgpu.min.js:2309:379837)
    at hi.createInferenceSessionHandler (ort.webgpu.min.js:2309:381518)
    at e.create (ort.webgpu.min.js:6:18089)
    at async createInferenceSession (onnx.js:91:1)
    at async models.js:261:1
    at async Promise.all (:63342/transformers_js_musicgen/index 0)
    at async constructSessions (models.js:258:1)
    at async Promise.all (:63342/transformers_js_musicgen/index 0)
    at async MusicgenForConditionalGeneration.from_pretrained (models.js:873:1)

Reproduction

Steps to Reproduce:

1. Build the MusicGen example using the latest version (v3).
2. Run the example.
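Since the message says the "preferredOutputLocation" session option is rejected by the proxy, one possible workaround (an untested sketch, assuming the error originates in onnxruntime-web's proxy worker) is to turn off the WASM proxy via transformers.js's `env` settings before loading the model:

```javascript
// Sketch of a possible workaround, not a confirmed fix: run onnxruntime-web
// on the calling thread instead of its proxy worker, so session options
// such as "preferredOutputLocation" are not rejected by the proxy.
import { env, MusicgenForConditionalGeneration } from '@huggingface/transformers';

env.backends.onnx.wasm.proxy = false; // disable the ORT proxy worker

const model = await MusicgenForConditionalGeneration.from_pretrained(
    'Xenova/musicgen-small',
);
```

This is environment configuration only; whether it resolves the WebGPU path for this model is an open question.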

flatsiedatsie commented 1 month ago

Did you try running this on CPU only? I don't think GPU is supported for this model yet.

This is how I run it on CPU inside a worker:

AutoTokenizer.from_pretrained('Xenova/musicgen-small')
.then((tokenizer) => {
    self.tokenizer = tokenizer;
    return MusicgenForConditionalGeneration.from_pretrained('Xenova/musicgen-small', {
        // Forward model download progress to the main thread
        progress_callback: (progress_data) => {
            if (progress_data.status !== 'progress') return;
            self.postMessage(progress_data);
        },
        dtype: {
            text_encoder: 'q8',
            decoder_model_merged: 'q8',
            encodec_decode: 'fp32',
        },
        device: 'wasm', // CPU (WASM) backend
    });
})
.then((model) => {
    console.log("MUSICGEN WORKER: created model: ", model);
    self.model = model;
    do_musicgen(sentence);
})
.catch((err) => {
    console.error("MUSICGEN_WORKER: caught error creating tokenizer or model: ", err);
    reject({"status":"error","error":"Caught error creating MusicGen tokenizer or model","task": message.task});
});
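On the receiving side, the `progress_callback` above emits one event per file as it downloads. A minimal sketch (with a hypothetical helper name, `collectProgress`) of how those forwarded events can be reduced to a per-file percentage map on the main thread:

```javascript
// Hypothetical helper: reduce a stream of transformers.js progress events
// (as forwarded by the worker's progress_callback) to the latest
// percentage seen for each file.
function collectProgress(events) {
    const byFile = {};
    for (const e of events) {
        if (e.status !== 'progress') continue; // same filter as the worker callback
        byFile[e.file] = e.progress;           // latest percentage wins
    }
    return byFile;
}
```

In a real page this would run inside `worker.onmessage`, accumulating events as they arrive rather than over an array.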