-
https://huggingface.co/happyme531/Stable-Diffusion-1.5-LCM-ONNX-RKNN2#%E5%B7%B2%E7%9F%A5%E9%97%AE%E9%A2%98
> 1. As of now, models converted with the latest rknn-toolkit2 (version 2.2.0) still suffer from extremely severe precision loss, even when using the fp16 data type! As shown in the figure, the top row is the inference result from the ONNX model…
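A quick way to quantify the reported loss is to compare the two backends' outputs directly. A minimal numpy sketch; the fp16 round-trip below is only a stand-in for the actual RKNN output, which you would capture from the device:

```python
import numpy as np

def compare_outputs(ref: np.ndarray, test: np.ndarray) -> dict:
    """Compare a reference output (e.g. ONNX fp32) against a converted
    model's output (e.g. RKNN fp16) with common error metrics."""
    ref = ref.astype(np.float64).ravel()
    test = test.astype(np.float64).ravel()
    cos = float(np.dot(ref, test) / (np.linalg.norm(ref) * np.linalg.norm(test)))
    return {
        "max_abs_err": float(np.max(np.abs(ref - test))),
        "mean_abs_err": float(np.mean(np.abs(ref - test))),
        "cosine_sim": cos,
    }

# Illustrative only: an fp16 round-trip standing in for the converted model.
rng = np.random.default_rng(0)
ref = rng.standard_normal((1, 4, 64, 64)).astype(np.float32)
test = ref.astype(np.float16).astype(np.float32)
metrics = compare_outputs(ref, test)
```

If the cosine similarity drops well below 1.0 (rather than the ~0.999+ expected from fp16 rounding alone), the loss is coming from the conversion itself, not the data type.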
-
Tasks that have been identified and scheduled:
+ Fine-tuning support for Diffusers version models
+ Adaptation for CPU / NPU inference frameworks (e.g., Huawei, Intel devices)
+ ComfyUI adaptat…
-
### System Information
OpenCV version: 4.10.0
Operating System / Platform: Ubuntu 22.04
Compiler & compiler version: Java JDK 11
### Detailed description
== OnnxModel from ==
https://github.com/…
-
I am trying to run whisper.cpp with CANN on an Ascend 310P3 NPU; my CANN version is 8.0.
I compile with:
```shell
mkdir build
cd build
cmake .. -D GGML_CANN=on
make -j
```
and run inference with:
…
-
Now that the Chromium prototype supports passing `"npu"` in the context creation options, we need samples that exercise this code path.
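For reference, a minimal sketch of what such a sample might look like. The API shape follows the WebNN spec draft (`navigator.ml.createContext({ deviceType })`); the fallback behaviour is an assumption, not prototype behaviour:

```javascript
// Hedged sketch: request a WebNN context on the NPU, falling back to the
// default device if the NPU is unavailable or the option is rejected.
async function createNpuContext(ml) {
  try {
    // deviceType "npu" is the new code path the samples should exercise.
    return await ml.createContext({ deviceType: "npu" });
  } catch (e) {
    // Hypothetical fallback: let the implementation pick the default device.
    return await ml.createContext();
  }
}

// In a page: const ctx = await createNpuContext(navigator.ml);
```

A sample built around this would then compile and dispatch a small graph on the returned context so the NPU path is actually executed, not just created.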
-
Hi,
could you please provide an example of how to select the Intel NPU with DirectML in Python, using torch_directml (and the corresponding ExecutionProvider for ONNX Runtime)?
Thanks!
-
Hi Zifeng.
We had a discussion in the issue you opened [here](https://gitlab.freedesktop.org/mesa/mesa/-/issues/10974).
Since it has been a month without any updates from you, I decided to repos…
-
Snapdragon X Elite and Core Ultra have built-in NPUs. However, it seems the NPU's capabilities can only be used by software that explicitly supports it. Does LM Studio have any plans to support t…
-
### Model Series
Qwen2.5
### What are the models used?
qwen2.5-72b-instruct
### What is the scenario where the problem happened?
qwen2.5-72b-instruct produces abnormal inference results on the Ascend 910B
### Is this badcase known and c…
-
CAUTION: The operator 'aten::_transformer_encoder_layer_fwd' is not currently supported on the NPU backend and will fall back to run on the CPU. This may have performance implications.
torch 2.1.0…
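For context, the fused op in that warning is the fast path that `nn.TransformerEncoderLayer` takes only in eval mode with autograd disabled. A sketch (run on CPU here, as a stand-in) of forcing the unfused path, which avoids dispatching that op at some speed cost; whether this helps on a given NPU build is an assumption to verify:

```python
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
x = torch.randn(2, 10, 64)

# eval() + no_grad() lets PyTorch dispatch to the fused kernel
# aten::_transformer_encoder_layer_fwd -- the op the warning says the
# NPU backend lacks, triggering the CPU fallback.
layer.eval()
with torch.no_grad():
    y_fast = layer(x)

# Keeping autograd enabled (or the module in train() mode) forces the
# unfused composition of ordinary ops, so the fused op is never hit.
layer.train()
y_ref = layer(x)

print(y_fast.shape, y_ref.shape)
```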