```js
static async getInstance(progress_callback = null) {
  if (this.instance === null) {
    this.instance = pipeline(this.task, this.model, {
      quantized: this.quantized,
      progress_callback,
    });
  }
  console.log("inside", this.instance);
  return this.instance;
}
```
When logging `this.instance`, it shows:

```
Promise {<pending>}
  [[Prototype]]: Promise
  [[PromiseState]]: "rejected"
  [[PromiseResult]]: Error: Unsupported model type: whisper
      at AutoModelForCTC.from_pretrained (webpack-internal:///./node_modules/.pnpm/@xenova+transformers@2.6.0/node_modules/@xenova/transformers/src/models.js:3550:19)
      at async eval (webpack-internal:///./node_modules/.pnpm/@xenova+transformers@2.6.0/node_modules/@xenova/transformers/src/pipelines.js:2087:33)
```
Hi there. I believe this is due to an issue we just fixed in v2.6.1 (related to minification). Could you please upgrade to v2.6.1 and try again? Thanks!
I just upgraded to v2.6.1, but the same error persists.
Could you please post information about your environment, e.g., OS, browser, build tools?
I am aware of a similar issue affecting users of create-react-app; if this is the case, please switch to a more up-to-date build tool like Vite.
OS: Windows 11
Browser: Chrome 117.0.5938.89
Build tool: create-next-app
We are using Next.js; Vite is not supported for Next.js applications.
Oh my apologies, I misread "create-next-app" as "create-react-app". Sorry about that!
Could you post any information about your build process, such as any minification taking place?
I am facing this locally on the development server, without minification.
Do you perhaps have a repo where I can try reproduce this? Or could you post your next.config.js? Thanks!
We are currently working in a private repo. We can share it later if required (we'd need to prepare it for that), but for now, here's the Next config:
```js
/** @type {import('next').NextConfig} */
const nextConfig = {
  reactStrictMode: true,
  compress: false,
  images: {
    loader: "akamai",
    path: "",
  },
  compiler: {
    // Enables the styled-components SWC transform
    styledComponents: true,
  },
  // lessLoaderOptions: {
  //   lessOptions: {
  //     javascriptEnabled: true,
  //   },
  // },
  webpack(config) {
    config.module.rules.push({
      test: /\.svg$/,
      use: ["@svgr/webpack"],
    });
    return config;
  },
};

module.exports = nextConfig;
```
And which version of node / next.js / npm are you using?
next version: 13.4.13
node version: 16.15.0
pnpm version: 7.23.0
> node version: 16.15.0

This might be the issue. In the docs, we recommend using a minimum Node version of 18; 16.x has reached EOL. Could you try upgrading?
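(As a quick runtime guard — a hypothetical check, not part of Transformers.js — you can fail fast on older Node versions:)

```js
// Hypothetical guard: the Transformers.js docs recommend Node >= 18 (16.x is EOL).
const major = Number(process.versions.node.split(".")[0]);
if (major < 18) {
  throw new Error(`Node ${process.versions.node} detected; please upgrade to Node >= 18.`);
}
```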
I tried to run a Whisper model via the automatic-speech-recognition pipeline and got the same error, caused by the unsupported AutoModelForCTC. This PR might have introduced the bug:
https://github.com/xenova/transformers.js/pull/220/files?file-filters%5B%5D=.js&show-viewed-files=true#diff-2f6b66f61363f7b45e1b165f81d3ce15b3768da43e40410085aee8bd8666a629R1739
@szprytny Could you provide more information about your environment? Are you using the latest version of Transformers.js?
I have:
- node 18.9.1
- transformers.js 2.6.2

When I removed the declaration of AutoModelForCTC from https://github.com/xenova/transformers.js/blob/main/src/pipelines.js#L1953, the pipeline went further. I then got the error `Unsupported model IR version: 9`, which I was able to get past by overriding onnxruntime-node in my project's package.json.
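(A minimal sketch of such a package.json override, assuming pnpm — the pinned version below is illustrative, not an official recommendation:)

```json
{
  "pnpm": {
    "overrides": {
      "onnxruntime-node": "1.16.0"
    }
  }
}
```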
And which bundler are you using? I am aware of issues with create-react-app; I haven't had any problems with Vite, for example.
> I got error Unsupported model IR version: 9

Yes, this is because you exported with onnx >= 1.14, and Transformers.js still uses onnxruntime-web v1.14 (which only supports a max IR version of 8). See here for an issue I filed a while ago.
I did not run it as a web app; I just tried to do inference using a plain Node script run with npx tsx.
@szprytny Can you provide some sample code which resulted in this error?
It seems that the error `Unsupported model type: whisper` is misleading: the real problem was that my model has a newer IR version. That error is not handled well enough, so loading falls through to calling from_pretrained on the AutoModelForCTC class in the loadItems function.
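(A minimal sketch of the masking behavior being described — hypothetical names, not the library's actual loadItems implementation:)

```js
// Hypothetical illustration: candidate model classes are tried in order, and the
// error thrown by the last candidate (AutoModelForCTC) replaces the original
// failure (the IR-version mismatch), producing the misleading message.
async function loadFirstSupported(candidates, modelId, options) {
  let lastError;
  for (const cls of candidates) {
    try {
      return await cls.from_pretrained(modelId, options);
    } catch (err) {
      lastError = err; // the real cause is overwritten on each retry
    }
  }
  throw lastError; // surfaces "Unsupported model type: whisper"
}
```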
Here is the script I used to run it:

```ts
import { WaveFile } from "wavefile";
import path from "path";
import { readFileSync } from "fs";
import { pipeline, env } from "@xenova/transformers";

env.localModelPath = "c:/model/onnx/";

const prepareAudio = (filePath: string): Float64Array => {
  const wav = new WaveFile(readFileSync(path.normalize(filePath)));
  wav.toBitDepth("32f");
  wav.toSampleRate(16000);
  let audioData = wav.getSamples();
  return audioData;
};

const test = async () => {
  let pipe = await pipeline("automatic-speech-recognition", "shmisper", {
    local_files_only: true,
  });
  let out = await pipe(prepareAudio("c:/content/01_0.wav"));
  console.log(out);
};

test();
```
I see... Indeed, that error message would be quite misleading. Could you try downgrading to onnx==1.13.1 and re-exporting your model? See https://github.com/xenova/transformers.js/blob/main/scripts/requirements.txt for the other recommended versions.
I have the exact same problem. I changed the onnx version to 1.13.1. The small model works, but not the medium and large-v2 models.
Having the same issue as the main thread:

transformers.js 2.10.1

You mentioned here that we should use onnx==1.13.1 per your conversion scripts. Does Hugging Face's Optimum conversion script (i.e., `optimum-cli export onnx --model model_id`) also work with your script? I noticed it doesn't move all the ONNX files into their own folder (something I can do manually), but is the conversion process of exporting to ONNX the same? If so, is optimum-cli using a different version of onnx than what your repo is using?
Yes, we use Optimum behind the scenes. The purpose of the conversion script is to also perform quantization afterwards, but if this is not necessary for your use case, you can use Optimum directly and just structure the repo like the other Transformers.js models on the HF Hub.
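(For illustration, a plausible local layout matching the Whisper repos on the Hub — exact file names depend on your export settings:)

```
<env.localModelPath>/<model_id>/
├── config.json
├── generation_config.json
├── preprocessor_config.json
├── tokenizer.json
└── onnx/
    ├── encoder_model.onnx
    └── decoder_model_merged.onnx
```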
I converted the whisper-base model to ONNX using optimum-cli, moved the model files into the onnx folder locally, and verified my env had the same module versions as your requirements.txt. When I tried to run my inference script (Node.js), I still end up with errors: output.txt
@xenova I could reproduce this error on the v3 branch with the whisper-word-timestamps example. If I go to worker.js and change the model_id from onnx-community/whisper-base_timestamped to Xenova/whisper-large-v3, I get the error: Unsupported model type: whisper
Same here, when trying to use distil-whisper/distil-medium.en on Whisper WebGPU: Unsupported model type: whisper, with "@huggingface/transformers": "^3.0.0-alpha.9".