I get the same issue, even when trying to pull in ONNX Runtime from a local copy:
```js
import { env, AutoModelForCausalLM, AutoTokenizer } from '@xenova/transformers';

// Serve the ONNX Runtime wasm files from the local public/ directory
env.backends.onnx.wasm.wasmPaths = '/onnxruntime-web/';

// Load models from disk only; never fetch from the Hugging Face Hub
env.allowRemoteModels = false;
env.allowLocalModels = true;

const model_id = '../model';
const tokenizer = await AutoTokenizer.from_pretrained(model_id, {
  legacy: true,
});
```
I have copied the contents of `node_modules/onnxruntime-web/dist/` to `public`, and it's trying to access an `ort-wasm-simd-threaded.jsep.mjs` file which does not exist in `onnxruntime-web`.
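For anyone reproducing this, the copy step is roughly the following (a minimal sketch assuming a Vite-style project with a `public/` directory; the target folder name just has to match whatever `wasmPaths` points at):

```js
// copy-ort-wasm.mjs — hypothetical helper; run once with `node copy-ort-wasm.mjs`.
// Mirrors node_modules/onnxruntime-web/dist/ into public/onnxruntime-web/ so the
// env.backends.onnx.wasm.wasmPaths = '/onnxruntime-web/' setting above can resolve it.
import { cpSync, mkdirSync } from 'node:fs';

mkdirSync('public/onnxruntime-web', { recursive: true });
cpSync('node_modules/onnxruntime-web/dist', 'public/onnxruntime-web', {
  recursive: true, // copies the .wasm and .mjs runtime files as-is
});
```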
This is because the demo uses an unreleased version of onnxruntime-web v1.18.0, which I have mentioned a few times when I've linked to the source code. When it is released, I will update the source code so that it works correctly. Thanks for understanding!
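In the meantime, you can confirm which build your project actually resolved with a quick check (a minimal sketch that reads the installed package manifest directly; adjust the `node_modules` path for your workspace layout):

```js
// check-ort-version.mjs — hypothetical diagnostic, not part of the demo itself.
// Prints the version of the onnxruntime-web build that is actually installed.
import { readFileSync } from 'node:fs';

const pkg = JSON.parse(
  readFileSync('node_modules/onnxruntime-web/package.json', 'utf8')
);
console.log(pkg.version); // the demo expects a 1.18.x (currently unreleased) build
```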
Thanks for the feedback. Looking forward to the release.
System Info
@xenova/transformers: 3.0.0-alpha.0
Chrome: Version 124.0.6367.93 (Official Build) (arm64)
OS: macOS 14.4.1 (23E224)
Environment/Platform
Description
I ran `pnpm run dev` in the `webgpt-chat` example. I can download the model at http://localhost:5173, but it's not ready for chat due to the error reported in the console. May I ask if any setting is required to make it work?

By the way, I can chat with the model at https://huggingface.co/spaces/Xenova/experimental-phi3-webgpu.
Reproduction
Click the "Load Model" button.