-
Hello, I have an MBP with an M3 Max, but it's still very slow. It has been running for 10 minutes and is still going. I double-checked that the device is MPS — is this the expected behavior? Thanks
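As a starting point, it can help to confirm which device PyTorch will actually use. A minimal sketch, assuming PyTorch ≥ 1.12 (the first release with MPS support); it degrades to `"cpu"` if PyTorch is missing or MPS is unavailable:

```python
# Sketch: confirm whether PyTorch can really use the MPS backend.
try:
    import torch
    # is_built() checks the install has MPS compiled in; is_available() checks the machine.
    mps_ok = torch.backends.mps.is_built() and torch.backends.mps.is_available()
    device = "mps" if mps_ok else "cpu"
except ImportError:
    device = "cpu"  # PyTorch not installed; nothing to verify
print(f"Selected device: {device}")
```

Note that even when `device` is `mps`, individual unsupported ops can silently run on the CPU when `PYTORCH_ENABLE_MPS_FALLBACK=1` is set, which can make a run far slower than expected.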
-
### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a …
-
### Describe the issue
With the new version 1.18, when trying to use different InferenceSessions on the same DirectML device, all threads remain stalled without raising any exception or er…
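For reference, a minimal repro sketch for the multi-session pattern described above, assuming the Python `onnx` and `onnxruntime` packages. The tiny Identity model and the tensor names are purely illustrative, and the provider list falls back to CPU when DirectML is not present:

```python
import numpy as np

try:
    import onnxruntime as ort
    from onnx import TensorProto, helper

    # Build a minimal Identity model purely for illustration.
    inp = helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 2])
    out = helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 2])
    node = helper.make_node("Identity", ["x"], ["y"])
    graph = helper.make_graph([node], "g", [inp], [out])
    model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
    model.ir_version = 8  # keep the IR version modest so older runtimes accept it

    # Prefer DirectML when present, always keeping CPU as the fallback.
    wanted = ["DmlExecutionProvider", "CPUExecutionProvider"]
    providers = [p for p in wanted if p in ort.get_available_providers()]

    # Two independent sessions targeting the same provider list / device.
    s1 = ort.InferenceSession(model.SerializeToString(), providers=providers)
    s2 = ort.InferenceSession(model.SerializeToString(), providers=providers)
    x = np.ones((1, 2), dtype=np.float32)
    r1 = s1.run(None, {"x": x})[0]
    r2 = s2.run(None, {"x": x})[0]
    ok = bool((r1 == x).all() and (r2 == x).all())
except ImportError:
    ok = None  # onnx / onnxruntime not installed
print(ok)
```

On a machine exhibiting the reported bug, the second `run` call would stall; on CPU both calls return immediately.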
-
### Describe the issue
If any error occurs during init with `webnn`, the execution provider blocks and does not fall back to the next provider in the chain. For example, DirectML API init in windows 10 …
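The expected fallback behavior can be sketched as an ordered provider chain with CPU guaranteed last. This is a hedged illustration, not the runtime's internals; the `"WebNNExecutionProvider"` name is an assumption here (WebNN is normally exposed through onnxruntime-web), and the filter simply drops providers the installed build does not offer:

```python
try:
    import onnxruntime as ort

    # A session should try providers in order and fall through on init failure.
    # "WebNNExecutionProvider" is an illustrative name for this sketch.
    preferred = ["WebNNExecutionProvider", "DmlExecutionProvider", "CPUExecutionProvider"]
    available = ort.get_available_providers()
    chain = [p for p in preferred if p in available] or ["CPUExecutionProvider"]
except ImportError:
    chain = ["CPUExecutionProvider"]  # onnxruntime not installed
print(chain)
```

The issue above reports that this fall-through never happens: a failed `webnn` init blocks instead of handing control to the next provider in `chain`.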
-
### Describe the issue
I exported the following PyTorch model: https://pytorch.org/hub/pytorch_vision_googlenet using TorchDynamo (see the resulting ONNX model attached in the next section) and can run inferenc…
-
### Describe the issue
From what I understand, DirectML is considered the default GPU backend on Windows systems. Nonetheless, the "GPU" [build](https://onnxruntime.ai/docs/get-started/with-cpp.html#bu…
-
### First, confirm
- [X] I have read the [instruction](https://github.com/Gourieff/sd-webui-reactor/blob/main/README.md) carefully
- [X] I have searched the existing issues
- [X] I have updated the e…
-
I prepared a configuration file for converting Whisper using DirectML, but the process fails with an error.
**To Reproduce**
**Expected behavior**
It would be great to use Whisper with direc…
DimQ1 updated 2 weeks ago
-
Hi,
I followed the default installation process, but when I run StableSwarmUI I receive the error message:
"Some backends have errored on the server. Check the server logs for details."
I have an MSI Alpha 15…