Closed alk0 closed 1 day ago
(sigh) Look, I know I'm an edge case: my GPU and CPU are from the Victorian era, and I'm on Linux. According to https://caniuse.com/webgpu, WebGPU is not even enabled by default in Chromium on Linux, and I have no idea how to check whether it would be available inside the iframe adapter you use to run it under Obsidian. The error seems to originate from onnxruntime-web, called by xenova/transformers (although I'm not 100% sure, given my lack of expertise in all this JS stuff).
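For what it's worth, whether WebGPU is exposed to a given page (including an iframe) can be probed from the devtools console. A minimal sketch (the function name is mine, not anything from the plugin):

```javascript
// Probe WebGPU availability; safe to paste into a devtools console.
// Resolves to false when navigator.gpu is missing (e.g. Chromium on
// Linux by default) or when no usable adapter can be obtained.
async function webgpuAvailable() {
  if (typeof navigator === "undefined" || !("gpu" in navigator)) return false;
  try {
    // requestAdapter resolves to null when no suitable GPU backend exists
    const adapter = await navigator.gpu.requestAdapter();
    return adapter !== null;
  } catch {
    return false;
  }
}

webgpuAvailable().then((ok) => console.log("WebGPU available:", ok));
```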
@alk0 in the settings there should be a gpu_batch_size setting. Set this to 0 and it should prevent the GPU from being used 🌴
Ah, it's slightly different now: it does try the CPU route ("[Transformers] Using CPU"), but still fails with the same error ("Error: no available backend found. ERR: [wasm] Error: WebAssembly SIMD is not supported in the current environment."), see the log.
plugin:smart-connections:12574 Loading Smart Connections v2...
plugin:smart-connections:5870 opts {global_ref: global, env_path: '', env_data_dir: '.smart-connections', smart_env_settings: {…}, smart_chunks_class: ƒ, …}
plugin:smart-connections:5959 smart_env_settings SmartEnvSettings {env: _SmartEnv, opts: {…}, _fs: SmartFs, _settings: {…}, _saved: true}
plugin:smart-connections:10996 {model_key: 'TaylorAI/bge-micro-v2', batch_size: 1, dims: 384, max_tokens: 512, name: 'BGE-micro-v2', …}
plugin:smart-connections:12085 loading iframe adapter {model_key: 'TaylorAI/bge-micro-v2', batch_size: 1, dims: 384, max_tokens: 512, name: 'BGE-micro-v2', …}
VM211 about:srcdoc:306 init
VM211 about:srcdoc:309 load {model_key: 'TaylorAI/bge-micro-v2', batch_size: 1, dims: 384, max_tokens: 512, name: 'BGE-micro-v2', …}
VM211 about:srcdoc:132 {model_key: 'TaylorAI/bge-micro-v2', batch_size: 1, dims: 384, max_tokens: 512, name: 'BGE-micro-v2', …}
VM211 about:srcdoc:240 [Transformers] Using CPU
transformers@3.0.0-alpha.13:175 dtype not specified for "model". Using the default dtype (q8) for this device (wasm).
    (anonymous) @ transformers@3.0.0-alpha.13:175
    (anonymous) @ transformers@3.0.0-alpha.13:175
    E @ transformers@3.0.0-alpha.13:175
    from_pretrained @ transformers@3.0.0-alpha.13:175
    await in from_pretrained (async)
    from_pretrained @ transformers@3.0.0-alpha.13:175
    await in from_pretrained (async)
    (anonymous) @ transformers@3.0.0-alpha.13:187
    G @ transformers@3.0.0-alpha.13:187
    load @ VM211 about:srcdoc:243
    await in load (async)
    load @ VM211 about:srcdoc:150
    processMessage @ VM211 about:srcdoc:310
    (anonymous) @ VM211 about:srcdoc:338
    postMessage (async)
    eval @ plugin:smart-connections:12124
    _send_message @ plugin:smart-connections:12121
    load @ plugin:smart-connections:12108
    await in load (async)
    load @ plugin:smart-connections:11011
    load_smart_embed @ plugin:smart-connections:9350
    init @ plugin:smart-connections:9338
    await in init (async)
    init @ plugin:smart-connections:10459
    init @ plugin:smart-connections:8847
    init_collections @ plugin:smart-connections:5971
    init @ plugin:smart-connections:5961
VM211 about:srcdoc:154 Error loading model TaylorAI/bge-micro-v2: Error: no available backend found. ERR: [wasm] Error: WebAssembly SIMD is not supported in the current environment.
    at l (transformers@3.0.0-alpha.13:100:1798)
    at async e.create (transformers@3.0.0-alpha.13:100:18017)
    at async f (transformers@3.0.0-alpha.13:151:1428)
    at async transformers@3.0.0-alpha.13:175:14907
    at async Promise.all (index 0)
    at async E (transformers@3.0.0-alpha.13:175:12934)
    at async Promise.all (index 0)
    at async U.from_pretrained (transformers@3.0.0-alpha.13:175:22072)
    at async uo.from_pretrained (transformers@3.0.0-alpha.13:175:55855)
    at async Promise.all (index 1)
    load @ VM211 about:srcdoc:154
    await in load (async)
    processMessage @ VM211 about:srcdoc:310
    (anonymous) @ VM211 about:srcdoc:338
    postMessage (async)
    eval @ plugin:smart-connections:12124
    _send_message @ plugin:smart-connections:12121
    load @ plugin:smart-connections:12108
    await in load (async)
    load @ plugin:smart-connections:11011
    load_smart_embed @ plugin:smart-connections:9350
    init @ plugin:smart-connections:9338
    await in init (async)
    init @ plugin:smart-connections:10459
    init @ plugin:smart-connections:8847
    init_collections @ plugin:smart-connections:5971
    init @ plugin:smart-connections:5961
plugin:smart-connections:12131 model loaded
plugin:smart-connections:9340 SmartEmbed not loaded for smart_blocks. Continuing without embedding capabilities.
plugin:smart-connections:5979 loading collection smart_sources
plugin:smart-connections:9094 Loading smart_sources: 1077 items
plugin:smart-connections:9101 Loaded smart_sources in 5204ms
plugin:smart-connections:10576 Smart Connections: Processing import queue: 2 items
plugin:smart-connections:10584 Smart Connections: Processed import queue in 202ms
plugin:smart-connections:9501 Smart Connections: No active embedding model for smart_blocks, skipping embedding
plugin:smart-connections:9506 Processing smart_sources embed queue: 2 items
VM211 about:srcdoc:328 Error processing message: Error: Model not loaded
    at processMessage (VM211 about:srcdoc:315:17)
    at VM211 about:srcdoc:338:44
    processMessage @ VM211 about:srcdoc:328
    (anonymous) @ VM211 about:srcdoc:338
    postMessage (async)
    eval @ plugin:smart-connections:12124
    _send_message @ plugin:smart-connections:12121
    embed_batch @ plugin:smart-connections:12150
    embed_batch @ plugin:smart-connections:11042
    process_embed_queue @ plugin:smart-connections:9513
    await in process_embed_queue (async)
    process_import_queue @ plugin:smart-connections:10588
    await in process_import_queue (async)
    process_load_queue @ plugin:smart-connections:10571
plugin:smart-connections:12136 Uncaught (in promise) Error: Model not loaded
    at SmartEmbedTransformersIframeAdapter._handle_message (plugin:smart-connections:12136:39)
    _handle_message @ plugin:smart-connections:12136
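The failing check itself ("WebAssembly SIMD is not supported") can be reproduced independently of the plugin: onnxruntime-web's wasm backend requires SIMD, and the standard feature test (the same approach the wasm-feature-detect library uses) validates a tiny module containing a SIMD instruction. This could be run in the console to confirm what the environment reports:

```javascript
// Detect WebAssembly SIMD support by validating a minimal module that
// contains a v128 SIMD instruction (same byte sequence used by the
// wasm-feature-detect library). Returns true only when the engine
// accepts the SIMD opcodes.
function wasmSimdSupported() {
  if (typeof WebAssembly !== "object" || typeof WebAssembly.validate !== "function") {
    return false;
  }
  return WebAssembly.validate(new Uint8Array([
    0, 97, 115, 109, 1, 0, 0, 0, // "\0asm" magic + version
    1, 5, 1, 96, 0, 1, 123,      // type section: () -> v128
    3, 2, 1, 0,                  // function section
    10, 10, 1, 8, 0,             // code section header
    65, 0,                       // i32.const 0
    253, 15,                     // SIMD opcode (0xFD prefix)
    253, 98,                     // SIMD opcode (0xFD prefix)
    11,                          // end
  ]));
}

console.log("wasm SIMD supported:", wasmSimdSupported());
```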
Was it working for you before?
And to follow up with your questions:
1) No, a GPU is not a requirement; the adapter is designed to fall back to the CPU automatically unless a GPU is detected. That fallback is clearly failing in this particular case, though.
2) All of the components of Smart Connections are open-source and use an adapter pattern that makes it easy to add support for additional embedding and model platforms. Since I have limited time, I design the components this way so that the community can contribute adapters for the growing number of available platforms, both local and cloud. Here's the link to the embedding model component: https://github.com/brianpetro/jsbrains/tree/main/smart-embed-model
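As a rough illustration of that adapter pattern (all names here are hypothetical sketches, not the actual smart-embed-model API):

```javascript
// Hypothetical sketch of an embedding adapter pattern: a common interface,
// with one subclass per backend (local transformers.js, a cloud API, etc.).
class EmbedAdapter {
  async load() { throw new Error("not implemented"); }
  async embed_batch(texts) { throw new Error("not implemented"); }
}

// Example backend: a trivial adapter that "embeds" a string as a
// two-number vector. A real adapter would wrap transformers.js or a
// cloud endpoint instead; the consumer code would not change.
class DummyAdapter extends EmbedAdapter {
  async load() { this.loaded = true; }
  async embed_batch(texts) {
    if (!this.loaded) throw new Error("Model not loaded");
    return texts.map((t) => [t.length, t.charCodeAt(0) || 0]);
  }
}

// Consumers depend only on the EmbedAdapter interface:
async function demo() {
  const adapter = new DummyAdapter();
  await adapter.load();
  return adapter.embed_batch(["hello", "world"]);
}
```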
🌴
Note: may be related to https://github.com/brianpetro/obsidian-smart-connections/issues/750#issuecomment-2323118066
Yes, 2.1.95 was working and is working now.
OK, I see... 1) I have a feeling that it can be fixed once the cause is found, 2) I totally get that you're not omnipotent and that you have other things to do in your free time other than messing with the plugin :) I'll look into the code, anyway, even if I'm not a JS guy. Thank you for your patience!
> Yes, 2.1.95 was working and is working now.
To be clear, you reverted versions, and 2.1.95 is working?
If so, when Linux is detected I might be able to automatically use the older version of the Hugging Face transformers.js library that was still being used in 2.1.95.
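A platform check like that could look roughly like this (sketch only; the function name is mine, and the real plugin would wire this into its adapter selection):

```javascript
// Sketch: choose a transformers.js generation based on the platform.
// In a renderer/iframe context navigator.userAgent is available;
// in an Electron main/Node context, process.platform is available.
function pickTransformersVersion() {
  const isLinux =
    (typeof process !== "undefined" && process.platform === "linux") ||
    (typeof navigator !== "undefined" && /Linux/.test(navigator.userAgent || ""));
  // WebGPU is off by default in Chromium on Linux, so fall back to the
  // legacy v2 library there; use v3 (GPU-capable) everywhere else.
  return isLinux ? "v2-legacy" : "v3";
}
```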
Thanks for your help in figuring out the issue 😊 🌴
> To be clear, you reverted versions, and 2.1.95 is working?
Yes, this is the case. Just to be very clear (it never hurts): I backed up 2.1.95 before upgrading, saving two folders: 1) the plugin and its settings, and 2) the .smart-connections folder with the embeddings. Then, after updating to .96 and playing with it, I closed Obsidian, removed both .96 folders to be sure there were no conflicts of any sort, copied both .95 folders back, and reopened Obsidian.
@alk0 I added a toggle to utilize the legacy transformers.js v2 instead of the newer v3, which supports GPUs. If the local embeddings were working pre-2.1.96, then toggling this on should fix the issues in the latest version (see screenshot) 🌴
I don't see that option, Brian. I've checked for updates with no luck.
@AsfixGroup I just checked with a test instance, and v2.1.99 is available via the Obsidian Community plugins index. You may need to restart Obsidian to get it to appear. And make sure you click "Check for Updates" in the Obsidian community plugins tab 🌴
> @alk0 I added a toggle to utilize the legacy transformers.js v2 instead of the newer v3, which supports GPUs.
Great! It seems to work for me now (2.1.99) with the toggle switched to the legacy (v2) transformers. I am still seeing transformers spam the log with "Uncaught TypeError: Cannot read properties of null (reading 'on')", the same as in this comment; I don't know how critical that is. But everything else seems to work for me now.
@alk0 I'm happy to hear that it's working again 😊
And those specific errors at startup are typical for the legacy version of transformers and shouldn't cause any issues 🌴
Local embedding models don't work without a supported GPU anymore (2.1.96)
See the log below:
VM211 throws "WebAssembly SIMD is not supported in the current environment"; smart-connections then reports "model loaded" (it is not); later, VM211 throws "Error: Model not loaded" when the model is accessed, and that error is left "Uncaught (in promise)" by smart-connections.
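That "model loaded" message appearing even though the load failed suggests the iframe's reply does not carry the error back to the host. A hedged sketch of how a postMessage protocol could propagate failures so the host can reject instead of reporting success (all names here are hypothetical, not the plugin's actual protocol):

```javascript
// Sketch: have the iframe reply with an explicit ok/error payload so the
// host side can reject its promise instead of logging "model loaded".
// loadModel is an injected async function that may throw (e.g. when the
// wasm backend reports missing SIMD support).
function makeIframeHandler(loadModel) {
  let model = null;
  return async function processMessage(msg, reply) {
    try {
      if (msg.type === "load") {
        model = await loadModel(msg.opts);
        reply({ type: "load_result", ok: true });
      } else if (msg.type === "embed_batch") {
        if (!model) throw new Error("Model not loaded");
        reply({ type: "embed_result", ok: true, vectors: model.embed(msg.texts) });
      }
    } catch (err) {
      // Failures become structured replies instead of silent success.
      reply({ type: msg.type + "_error", ok: false, error: String(err.message || err) });
    }
  };
}
```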
Questions:
partial log