-
Title - I checked the launch option "Force use of ZLUDA backend" and unchecked "Use DirectML if no compatible GPU is detected", but when launching it after installing the HIP SDK and …
-
Hi, I've just built the latest version of the onnxruntime-genai C example, which produced a phi3.exe with VS2022 (CMake 3.26.3), configured to exclude CUDA and use DirectML. When I run this with various Phi-3 …
-
Assume that I already followed Microsoft's instructions to [Enable PyTorch with DirectML on Windows](//learn.microsoft.com/en-us/windows/ai/directml/gpu-pytorch-windows) and the DirectML library loads…
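Following those instructions, the entry point is the `torch_directml` package installed by Microsoft's torch-directml wheel. A minimal device-selection sketch, assuming that package; the fallback logic here is my own and only illustrative:

```python
import importlib.util

def pick_device():
    """Return a torch_directml device when the package is installed,
    otherwise the string "cpu" (usable as a torch device spec).
    torch_directml.device() is the documented entry point of the
    torch-directml package; the find_spec fallback is a sketch."""
    if importlib.util.find_spec("torch_directml") is not None:
        import torch_directml  # provided by the torch-directml wheel
        return torch_directml.device()
    return "cpu"
```

Tensors and models can then be moved with the usual `.to(pick_device())` call.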
-
Clone the repository and remove torch from requirements.txt (assuming you already have torch-directml installed) before running pip install on it.
Delete or comment-out the line with torch.cuda.ipc_collect()…
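Instead of deleting the line outright, the call described above can be wrapped in a guard, assuming it is a bare `torch.cuda.ipc_collect()`; on DirectML-only installs `torch.cuda.is_available()` returns False, so the guarded call becomes a no-op. A sketch (the helper name is my own):

```python
def safe_ipc_collect(torch_module):
    """Run torch.cuda.ipc_collect() only when a CUDA runtime is present.
    On CPU- or DirectML-only PyTorch installs, torch.cuda.is_available()
    is False, so this silently skips the call instead of raising."""
    cuda = getattr(torch_module, "cuda", None)
    if cuda is not None and cuda.is_available():
        cuda.ipc_collect()
```

Call it as `safe_ipc_collect(torch)` wherever the original line appeared.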
-
### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a …
-
Hi
Thanks for the node.
When I use the DirectML option in ComfyUI, I get this error.
Do you plan a version of the node that supports it?
Regards
got prompt
Loading pipeline components...: 100%|█████████…
-
Running the default example doesn't work:
```text
Namespace(verbose=True, batch_size_for_cuda_graph=1, chat_template='', model='.\\example-models\\phi2-int4-directml')
Loading model...
Model loa…
-
**Motivation**
After installing ChaiNNer and ONNX Runtime, I don't see an option to run it on AMD/Intel GPUs, which I assume is because the [DirectML Execution Provider](https://onnxruntime.ai/docs/e…
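`DmlExecutionProvider` is the name the ONNX Runtime documentation uses for the DirectML execution provider; a small helper that prefers it when `onnxruntime.get_available_providers()` reports it (the fallback ordering here is my own choice, not ChaiNNer's):

```python
def pick_providers(available):
    """Choose an execution-provider list for onnxruntime.InferenceSession.
    Prefers DirectML (AMD/Intel/NVIDIA GPUs on Windows), then CUDA, and
    always keeps the CPU provider as a final fallback."""
    for preferred in ("DmlExecutionProvider", "CUDAExecutionProvider"):
        if preferred in available:
            return [preferred, "CPUExecutionProvider"]
    return ["CPUExecutionProvider"]

# Usage (assuming the onnxruntime-directml package is installed):
# import onnxruntime as ort
# sess = ort.InferenceSession(
#     "model.onnx",
#     providers=pick_providers(ort.get_available_providers()))
```

Note that DirectML support ships in the separate `onnxruntime-directml` wheel, not the default `onnxruntime` package.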
-
### Describe the issue
I am testing Meta's Segment Anything (SAM) encoder model, both on Linux (CUDA) and on Windows (DirectML). When testing the model on the two platforms, using identical hardwar…
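When comparing encoder outputs across CUDA and DirectML, exact equality is too strict: floating-point accumulation order differs per backend, so small element-wise differences are expected. A pure-Python tolerance check (helper names and the starting tolerance are my own, not from the issue):

```python
def max_abs_diff(a, b):
    """Largest element-wise absolute difference between two flat
    sequences of floats (e.g. flattened encoder embeddings)."""
    assert len(a) == len(b), "outputs must have the same shape"
    return max(abs(x - y) for x, y in zip(a, b))

def outputs_match(a, b, atol=1e-3):
    """True when every element differs by at most atol; 1e-3 is a
    common starting point for fp16/mixed-precision backends, not a
    standard, so tune it for the model at hand."""
    return max_abs_diff(a, b) <= atol
```

Reporting `max_abs_diff` alongside a pass/fail verdict makes it easier to tell numeric drift from a genuinely wrong result.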
-
### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a …