-
### Cortex version
`cortex run` redownloads existing model multiple times
### Describe the Bug
Two issues (see the screenshot):
- tinyllama:gguf is already downloaded
- `cortex run tinyllama:gguf` is su…
-
**Motivation**
After installing ChaiNNer and ONNX Runtime, I don't see an option to run it on AMD/Intel GPUs, which I assume is because the [DirectML Execution Provider](https://onnxruntime.ai/docs/e…
-
Are there any plans to support DirectML?
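For context, onnxruntime exposes DirectML as an ordinary execution provider once the `onnxruntime-directml` package is installed, so an application can probe for it at runtime. A minimal sketch of that probe (the `pick_providers` helper is hypothetical, not ChaiNNer code):

```python
import importlib.util

def pick_providers():
    """Prefer DmlExecutionProvider when onnxruntime reports it, else fall back to CPU.

    Hypothetical helper: shows how an app could detect DirectML support.
    """
    if importlib.util.find_spec("onnxruntime") is None:
        return ["CPUExecutionProvider"]  # onnxruntime not installed at all
    import onnxruntime as ort
    available = ort.get_available_providers()
    if "DmlExecutionProvider" in available:
        # CPU stays in the list as a fallback for unsupported ops
        return ["DmlExecutionProvider", "CPUExecutionProvider"]
    return ["CPUExecutionProvider"]
```

The returned list would then be passed as the `providers` argument when creating an `onnxruntime.InferenceSession`.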
-
I found that from onnxruntime-directml==1.15 on, mask outputs are different and incorrect between DirectML and onnxruntime.
I used a Detectron2 Mask R-CNN model.
In Python, onnxruntime-directml==1.14.1 is OK, and in…
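One way to make "different and incorrect" concrete is to compare the masks from the two providers directly. A minimal sketch, assuming the masks are binary numpy arrays (`masks_match` is a hypothetical helper, not part of onnxruntime or Detectron2):

```python
import numpy as np

def masks_match(a, b, iou_threshold=0.99):
    """Compare two binary masks (e.g. DirectML vs. CPU provider output) by IoU."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return True  # both masks empty: trivially identical
    iou = np.logical_and(a, b).sum() / union
    return iou >= iou_threshold
```

Running the same ONNX model under both providers and feeding the resulting masks through a check like this would show whether the divergence is a small numerical tolerance issue or a genuine correctness bug.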
-
My output literally has a black square covering the face. I followed the installation steps and used the listed models. I didn't set up GPU acceleration; I just ran run.py.
![output](https://github.com/u…
-
Running the default example doesn't work:
```text
Namespace(verbose=True, batch_size_for_cuda_graph=1, chat_template='', model='.\\example-models\\phi2-int4-directml')
Loading model...
Model loa…
-
I am experiencing an inference speed slowdown when running our test scripts, whether with the library alone or through our server.
The slowdown usually starts after about half an hour.
### My System
- Int…
-
When attempting `python stable_diffusion.py --optimize`, I get a `TypeError: z_(): incompatible function arguments` error for "Optimizing text_encoder". Note that "Optimizing vae_encoder", "Optimizing …
-
Hi, are there any plans for PyTorch-DirectML on Windows on ARM or ARM64 Linux (WSL 2)?
Only x86/x64 wheels are on PyPI for now: https://pypi.org/project/torch-directml/#files. And binaries are included, so I c…
-
Hi, I tried running this using https://github.com/lshqqytiger/stable-diffusion-webui-directml
together with this extension: https://git.mmaker.moe/mmaker/sd-webui-tome
I get errors when I turn ToMe on.
Is this…