Closed: Kiranism closed this issue 1 year ago.
Could you provide the model you are trying to run (or perhaps just the code you're running) as well as the full error logs?
The error message `ERR: [wasm] TypeError: (0 , i1.cpus) is not a function` usually occurs when there is a mismatch between the WASM and JS versions of onnxruntime-web.
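If the mismatch comes from how npm resolved the dependency tree, one general way to force a single onnxruntime-web version is npm's `overrides` field (npm 8.3+). This is a sketch of that approach, not a confirmed fix for this issue; the pinned version below is illustrative:

```json
{
  "overrides": {
    "onnxruntime-web": "1.14.0"
  }
}
```

After adding it to package.json, run `npm install` so the lockfile is regenerated with the override applied.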
> and latest "onnxruntime-node": "1.14.0" and web same
You say you're on the latest version of onnxruntime-web, but then also list 1.14.0, which is not the latest. Is this perhaps the issue? See here for more info.
I said I used the latest Transformers.js and initially tested it with the feature-extraction model Xenova/e5-large-v2, which worked fine. However, I later switched to Xenova/all-MiniLM-L6-v2.
As I mentioned, embedding works when the site is initially loaded. But when I call another function to embed, it throws the error 'model not loaded from path.'
I then attempted to add the model path in .env.local, which resulted in a 'URL failed to parse' error and no backend error.
```ts
import { pipeline } from "@xenova/transformers";

export async function embeddingTransformer(text: string) {
  try {
    console.log("transformer initialized");
    const generateEmbeddings = await pipeline(
      "feature-extraction",
      "Xenova/e5-large-v2"
    );
    const response = await generateEmbeddings(text.replace(/\n/g, " "), {
      pooling: "mean",
      normalize: true,
    });
    console.log("getEmb result-=>", response);
    return Array.from(response?.data) as number[];
  } catch (error) {
    console.log("error calling transformer for embeddings ai", error);
    throw error;
  }
}
```
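Since the function above re-creates the pipeline on every call, one common mitigation (an assumption on my part, not something confirmed in this thread) is to cache the pipeline promise in module scope so the model is loaded only once. Below is a self-contained sketch of that pattern; `loadPipeline` is a hypothetical stand-in for the real `pipeline("feature-extraction", ...)` call from @xenova/transformers.

```typescript
// Sketch of memoizing an expensive async initialization.
// `loadPipeline` stands in for the real model-loading call.
let cached: Promise<string> | null = null;
let loads = 0;

async function loadPipeline(): Promise<string> {
  loads += 1;
  return "pipeline-instance"; // stand-in for the loaded model
}

function getPipeline(): Promise<string> {
  // Store the promise itself, so concurrent callers share one load.
  if (cached === null) {
    cached = loadPipeline();
  }
  return cached;
}

async function main() {
  const a = await getPipeline();
  const b = await getPipeline();
  console.log(a === b, loads); // the model is loaded only once
}

main();
```

Caching the promise (rather than the resolved value) also prevents a race where two overlapping requests each trigger a separate model load.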
Please check your package-lock.json to see which versions are installed. If it is correct, could you try a clean install with `npm ci`?
Hey, I tried with a clean installation too:
```json
"node_modules/@xenova/transformers": {
  "version": "2.6.0",
  "resolved": "https://registry.npmjs.org/@xenova/transformers/-/transformers-2.6.0.tgz",
  "integrity": "sha512-k9bs+reiwhn+kx0d4FYnlBTWtl8D5Q4fIzoKYxKbTTSVyS33KXbQESRpdIxiU9gtlMKML2Sw0Oep4FYK9dQCsQ==",
  "dependencies": {
    "onnxruntime-web": "1.14.0",
    "sharp": "^0.32.0"
  },
  "optionalDependencies": {
    "onnxruntime-node": "1.14.0"
  }
},
```
and these are the logs I get after the second function call:
```
- wait compiling /api/chat/route (client and server)...
- event compiled successfully in 1551 ms (470 modules)
transformer initialized
Unable to load from local path "/models/Xenova/all-MiniLM-L6-v2/tokenizer.json": "TypeError: Failed to parse URL from /models/Xenova/all-MiniLM-L6-v2/tokenizer.json"
Unable to load from local path "/models/Xenova/all-MiniLM-L6-v2/tokenizer_config.json": "TypeError: Failed to parse URL from /models/Xenova/all-MiniLM-L6-v2/tokenizer_config.json"
Unable to load from local path "/models/Xenova/all-MiniLM-L6-v2/config.json": "TypeError: Failed to parse URL from /models/Xenova/all-MiniLM-L6-v2/config.json"
Unable to load from local path "/models/Xenova/all-MiniLM-L6-v2/onnx/model_quantized.onnx": "TypeError: Failed to parse URL from /models/Xe
error calling transformer for embeddings ai [Error: no available backend found. ERR: [wasm] TypeError: (0 , i1.cpus) is not a function]
some error happended in chatCompletion [Error: no available backend found. ERR: [wasm] TypeError: (0 , i1.cpus) is not a function]
- error node_modules\next\dist\esm\server\future\route-modules\app-route\module.js (211:60) @ headers
- error Cannot read properties of undefined (reading 'headers')
null
```
And which version of node.js / npm are you using?
npm 9.5.0; for Node I tried 19.7.0 and 18.14.2.
Have you tried running this example project before (docs)? Alternatively, if you are able to make a repo for me to check, that would be helpful in debugging.
Sure, let me try the example you provided, and I will create a repository for you to test. By the way, it was working fine in my local environment a few days ago: when I called the embed function after splitting the documents into chunks, it worked well, but when I called the same function with a query, I started encountering the error.
@Kiranism were you able to resolve your issue? I'm having the same problem. When I first call my embed function to get embeddings for storage, it works. But when I use the embed function to get embeddings for a query, it does not work, and I get the same exact errors:

```
Unable to load from local path "/models/Xenova/all-MiniLM-L6-v2/tokenizer.json": "TypeError: Failed to parse URL from /models/Xenova/all-MiniLM-L6-v2/tokenizer.json"
Unable to load from local path "/models/Xenova/all-MiniLM-L6-v2/tokenizer_config.json": "TypeError: Failed to parse URL from /models/Xenova/all-MiniLM-L6-v2/tokenizer_config.json"
Unable to load from local path "/models/Xenova/all-MiniLM-L6-v2/config.json": "TypeError: Failed to parse URL from /models/Xenova/all-MiniLM-L6-v2/config.json"
Unable to load from local path "/models/Xenova/all-MiniLM-L6-v2/onnx/model_quantized.onnx": "TypeError: Failed to parse URL from /models/Xenova/all-MiniLM-L6-v2/onnx/model_quantized.onnx"
```
Nevermind, I realized that this is being caused by the Next.js Edge runtime. My code that initiated the call to get and store embeddings in the DB was not using the edge runtime, but my code that called the function again for querying was using it. Removing the code for the edge runtime solved the issue.
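For anyone landing here: in the Next.js 13 app router, the runtime is chosen per route via a segment config export. A minimal sketch of pinning a route to the Node.js runtime (the file path and handler body are illustrative, not taken from this thread):

```ts
// app/api/chat/route.ts (illustrative path)
// Pin this route handler to the Node.js runtime instead of Edge,
// so Node-only backends such as onnxruntime-node can be loaded.
export const runtime = "nodejs";

export async function POST(req: Request) {
  const { text } = await req.json();
  // ...compute embeddings for `text` here...
  return new Response(JSON.stringify({ ok: true }), {
    headers: { "content-type": "application/json" },
  });
}
```

If any route in the call chain declares `export const runtime = "edge"`, code executed from it runs without Node APIs, which matches the `i1.cpus is not a function` failure above.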
The errors were caused by the edge runtime. I switched to node to fix them.
Describe the bug
How to reproduce
I'm using Next.js 13 with a server-side approach, and everything was working fine until I changed the model. I deleted the lockfile and tried various solutions. The strange thing is that the first time I call my embedding function, it works perfectly fine; when I call it again, I encounter errors.
One error is about not being able to load the file from the local path. When I set the allowLocalModels flag to false, I get a different set of errors.
I actually want to download the model and use it locally, but the path is not being accepted. I've tried numerous solutions, but I need guidance.
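For reference, transformers.js controls model resolution through its exported `env` object; the remote/local behavior can be toggled like this (a configuration sketch; the local directory below is illustrative):

```ts
import { env } from "@xenova/transformers";

// Skip the local-path lookup entirely and fetch from the Hugging Face Hub:
env.allowLocalModels = false;

// Or point the library at a locally downloaded copy instead
// (illustrative path; models are then resolved under this directory):
// env.localModelPath = "/absolute/path/to/models";
```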
Expected behavior
Logs/screenshots
Environment
Additional context