-
Thank you for creating this much-needed tool!
Has there been any work on extending this tool to support AMD GPUs? Any sense of the level of effort needed to extend it to ROCm + HIP?
-
MTL was not listed as a supported platform in 2024Q1.
-
### Describe the bug
The Ladybird package assumes it doesn't need a GPU, so if you run with `--enable-gpu-painting` it returns `Configured to use GPU painter, but current platform does not have acce…
-
Just curious if this currently has GPU support for macOS, or if it's planned down the road?
This project looks awesome and super useful!
Thank you!
-
I'm trying to run RoSA fine-tuning on an Nvidia Quadro RTX 6000. The GPU architecture doesn't support bfloat16, so I tried to load the model in 4-bit (similar to the suggestion for the Colab T4 GPU). The fine…
-
The DAT model can be very heavy, even on a 3090, when a lot of images need to be upscaled. Is there any chance you could implement multi-GPU support so that a second card can be active?
I have no …
-
Hi. I have a desktop with 2x Tesla T4s, and it should work because it has 32 GB of VRAM in total, while other people reported 27 GB of VRAM usage when inferring. It should work, but when infe…
-
It's not an issue - I'm just here to say I love this.
I want to see this get to the point where it will run large transformers.
For acceleration, you might want to consider something like https…
-
### Question Validation
- [X] I have searched both the documentation and Discord for an answer.
### Question
I want to use the semantic splitter from LlamaIndex for document segmentation. Is…
-
First steps:
- Support Nvidia GPUs by writing CUDA versions of the packing and unpacking kernels
- Add examples
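As a rough illustration of the first item, here is a minimal CPU reference of what packing and unpacking kernels might do, assuming 4-bit values stored two per byte (the bit width and layout are assumptions; the project may pack differently). A CUDA port would apply the same nibble arithmetic, with each thread handling one output byte.

```python
# Hypothetical CPU reference for the packing/unpacking kernels.
# Assumes 4-bit values (0..15) packed two per byte, low nibble first;
# the actual bit width and ordering in the project may differ.

def pack4(values):
    """Pack a list of 4-bit ints (0..15) into bytes, two per byte."""
    assert len(values) % 2 == 0, "expects an even count of values"
    out = bytearray()
    for lo, hi in zip(values[0::2], values[1::2]):
        out.append((hi << 4) | lo)  # second value goes in the high nibble
    return bytes(out)

def unpack4(packed):
    """Inverse of pack4: expand each byte back into two 4-bit ints."""
    out = []
    for b in packed:
        out.append(b & 0x0F)  # low nibble first
        out.append(b >> 4)    # then high nibble
    return out

# Round-trip sanity check
vals = [3, 15, 0, 7]
assert unpack4(pack4(vals)) == vals
```

A GPU version would replace the Python loops with a per-thread index (`tid = blockIdx.x * blockDim.x + threadIdx.x`), which is straightforward precisely because each packed byte is independent.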