Open RobotRoss opened 1 year ago
Do you know how compatible ROCm versions are with newer Linux distros? The PyTorch version we are targeting only supports ROCm 4.2 (May 2021), so newer distros aren't officially supported.
Also, we currently assume that PyTorch GPU support is CUDA only, so we'll need to do some refactoring throughout the frontend.
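As a starting point for that refactor, the backend a PyTorch build targets can be read off its version attributes: ROCm wheels expose a version string in `torch.version.hip` while `torch.version.cuda` is `None`, and CUDA wheels are the reverse. The helper below is an illustrative sketch under that assumption, not ChaiNNer's actual code:

```python
# Illustrative sketch (not ChaiNNer's actual code): classify a PyTorch build
# by its version attributes. ROCm wheels set torch.version.hip to a version
# string and torch.version.cuda to None; CUDA wheels are the reverse;
# CPU-only wheels set both to None.
def gpu_backend(cuda_version, hip_version):
    """Classify a PyTorch build given torch.version.cuda and torch.version.hip."""
    if hip_version is not None:
        return "rocm"
    if cuda_version is not None:
        return "cuda"
    return "cpu"

# With torch installed, this would be called as:
#   gpu_backend(torch.version.cuda, torch.version.hip)
```

Note that ROCm builds still report `torch.cuda.is_available()` as `True`, so a check like this is needed to tell the two apart.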
It should be mentioned that ROCm is Linux only. This isn't really a problem, since improving GPU support for any platform is a good thing, but I wanted to mention it because it wasn't before.
@RunDevelopment It may be useful: for Windows there is another alternative, ONNX-DirectML, e.g. https://github.com/huggingface/diffusers/compare/main...harishanand95:diffusers:dml
People have already gotten ROCm to work. Just use system Python and install the ROCm version of PyTorch to your system.
@RobotRoss What AMD GPU do you have? The list of GPUs supported by ROCm is small. It's pretty much just their productivity GPUs, so RDNA cards won't be able to use it.
RX 6900 XT - While it's not on the official supported list (only the Pro GPUs are), ROCm does work with it, as verified by myself and others. Supposedly, the supported list is so short because of the extensive testing each GPU requires; that doesn't mean ROCm won't work with consumer GPUs.
In that case, try using it with system Python and installing ROCm PyTorch to your system. It should just work.
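For reference, installing the ROCm build of PyTorch into system Python looks something like the following. The specific versions are an assumption matching the ROCm 4.2 target mentioned above; check pytorch.org for the wheel that fits your setup:

```shell
# Assumed example for the ROCm 4.2 wheels discussed above; adjust the
# versions to match your ROCm install (see https://pytorch.org/get-started/).
pip3 install torch==1.9.0+rocm4.2 torchvision==0.10.0+rocm4.2 \
  -f https://download.pytorch.org/whl/torch_stable.html
```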
Unfortunately, attempting to use system Python causes ChaiNNer to crash on start - logfile: main.log
System Python:
Python 3.11.0 (main, Oct 24 2022, 00:00:00) [GCC 12.2.1 20220819 (Red Hat 12.2.1-2)] on linux
Edit: Fixed; the python3-devel package was missing. Might be worth adding a warning about that missing package when using system Python (the error thrown right now is generic and quite unhelpful).
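Such a warning could be driven by a small check for the CPython development headers, which python3-devel provides. This is a hypothetical sketch of what that check might look like, not ChaiNNer's actual startup code:

```python
# Hypothetical sketch of a startup check for the CPython development headers
# that python3-devel (Fedora) / python3-dev (Debian) provide. Not ChaiNNer code.
import os
import sysconfig

def has_python_headers() -> bool:
    """Return True if Python.h is present in this interpreter's include dir."""
    include_dir = sysconfig.get_paths()["include"]
    return os.path.isfile(os.path.join(include_dir, "Python.h"))

if not has_python_headers():
    print("Warning: Python development headers not found; "
          "install python3-devel (or python3-dev) and retry.")
```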
That's a known issue. It's supposed to warn you about that, but I accidentally broke that warning a while back.
Okay, I've had some time to tinker. While it is possible to get ChaiNNer to render using ROCm, I disagree with the characterization of "it just works". The fault isn't in ChaiNNer itself, but in ChaiNNer's dependencies not supporting modern Python versions: I couldn't complete a full install, though I got far enough to produce an output. Adding ROCm support to the portable Python that ships with ChaiNNer would be preferable.
You could just use an older Python version. I'm pretty sure PyTorch has a ROCm build for 3.9. Yes, it would probably be preferable to offer an option in the UI to install the ROCm version, but I don't think we'll be doing that soon.
Motivation
Some of the models I have will not convert to NCNN due to unsupported opsets. I have an AMD GPU, so using PyTorch is infuriatingly slow.
Description
Add support for PyTorch's ROCm workflow, which enables GPU acceleration on AMD GPUs: https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-available-as-python-package/ https://news.ycombinator.com/item?id=32679671
Alternatives
Use of NCNN; however, this does not work with many PyTorch models.