chaiNNer-org / chaiNNer

A node-based image processing GUI aimed at making chaining image processing tasks easy and customizable. Born as an AI upscaling application, chaiNNer has grown into an extremely flexible and powerful programmatic image processing application.
https://chaiNNer.app
GNU General Public License v3.0

Detect AMD GPU on Linux, then install the correct PyTorch version. #2427

Open ricperry opened 10 months ago

ricperry commented 10 months ago

Motivation: It's frustrating to keep seeing devs select the Nvidia-only version of PyTorch when there are very good AMD/ROCm builds available.

Description: When an AMD GPU is the only GPU device, install the PyTorch ROCm 5.7 package and its associated dependencies.

joeyballentine commented 10 months ago

Do you know of a cross-platform way to detect if the user's GPU is AMD? I'm not aware of any personally.

And isn't ROCm linux-only?

YellowRoseCx commented 7 months ago

> Do you know of a cross-platform way to detect if the user's GPU is AMD? I'm not aware of any personally.
>
> And isn't ROCm linux-only?

I've looked over the files for a bit, and a way to detect an AMD GPU on Linux and install the appropriate PyTorch build could look something like this, in chaiNNer/backend/src/packages/chaiNNer_pytorch/__init__.py:

def get_amdgpu_info():
    import re
    import subprocess
    import sys

    # ROCm builds of PyTorch are Linux-only, so only check there.
    if sys.platform != "linux":
        return False
    try:
        lspci_output = subprocess.check_output(["lspci"]).decode("utf-8")
    except (subprocess.CalledProcessError, FileNotFoundError):
        # lspci failed or isn't installed; assume no AMD GPU.
        return False
    # Keep only the VGA/Display controller lines.
    gpu_info = "\n".join(re.findall(r"VGA.*|Display.*", lspci_output))
    if not gpu_info:
        return False
    # Report AMD only when no NVIDIA GPU is present and the card is a
    # ROCm-supported RDNA2/RDNA3 part ("Navi 2x"/"Navi 3x").
    return "NVIDIA" not in gpu_info and ("Navi 2" in gpu_info or "Navi 3" in gpu_info)

def get_pytorch():
    amd_gpu_available = get_amdgpu_info()
    if is_arm_mac:
        return [
            Dependency(
                display_name="PyTorch",
                pypi_name="torch",
                version="2.1.2",
                size_estimate=55.8 * MB,
                auto_update=True,
            ),
            Dependency(
                display_name="TorchVision",
                pypi_name="torchvision",
                version="0.16.2",
                size_estimate=1.3 * MB,
                auto_update=True,
            ),
        ]
    else:
        if amd_gpu_available:
            return [
                Dependency(
                    display_name="PyTorch",
                    pypi_name="torch",
                    version="2.2.2+rocm5.7",
                    size_estimate=1.6 * GB,  
                    extra_index_url="https://download.pytorch.org/whl/rocm5.7",
                    auto_update=True,
                ),
                Dependency(
                    display_name="TorchVision",
                    pypi_name="torchvision",
                    version="0.17.2+rocm5.7", 
                    size_estimate=63 * MB,  
                    extra_index_url="https://download.pytorch.org/whl/rocm5.7",
                    auto_update=True,
                ),
            ]
        else:
            # No AMD GPU: the ternaries below pick the CUDA build when an
            # NVIDIA GPU is available, otherwise the CPU build.
            return [
                Dependency(
                    display_name="PyTorch",
                    pypi_name="torch",
                    version="2.1.2+cu121" if nvidia_is_available else "2.1.2",
                    size_estimate=2 * GB if nvidia_is_available else 140 * MB,
                    extra_index_url=(
                        "https://download.pytorch.org/whl/cu121"
                        if nvidia_is_available
                        else "https://download.pytorch.org/whl/cpu"
                    ),
                    auto_update=not nvidia_is_available,  # Too large to auto-update
                ),
                Dependency(
                    display_name="TorchVision",
                    pypi_name="torchvision",
                    version="0.16.2+cu121" if nvidia_is_available else "0.16.2",
                    size_estimate=2 * MB if nvidia_is_available else 800 * KB,
                    extra_index_url=(
                        "https://download.pytorch.org/whl/cu121"
                        if nvidia_is_available
                        else "https://download.pytorch.org/whl/cpu"
                    ),
                    auto_update=not nvidia_is_available,  # Needs to match PyTorch version
                ),
            ]
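For what it's worth, the lspci heuristic above can be factored into a pure function and exercised against sample output without a real GPU. The function name and sample lines below are illustrative, not part of the chaiNNer codebase:

```python
import re


def is_supported_amd_gpu(lspci_output: str) -> bool:
    # Keep only the VGA/Display controller lines from the lspci output.
    gpu_info = "\n".join(re.findall(r"VGA.*|Display.*", lspci_output))
    if not gpu_info:
        return False
    # Same heuristic as above: no NVIDIA adapter present, and the AMD card
    # is an RDNA2/RDNA3 part ("Navi 2x"/"Navi 3x"), which ROCm supports.
    return "NVIDIA" not in gpu_info and ("Navi 2" in gpu_info or "Navi 3" in gpu_info)


# Illustrative lspci lines:
amd = "03:00.0 VGA compatible controller: Advanced Micro Devices [AMD/ATI] Navi 31 [Radeon RX 7900 XTX]"
nvidia = "01:00.0 VGA compatible controller: NVIDIA Corporation GA102 [GeForce RTX 3090]"
print(is_supported_amd_gpu(amd))     # True
print(is_supported_amd_gpu(nvidia))  # False
```

Keeping the string matching separate from the subprocess call makes the heuristic unit-testable, which matters since it silently falls back to the CPU build when it misfires.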

ROCm itself ships on both Windows and Linux, but PyTorch-ROCm is currently only distributed for Linux. However, a PyTorch-ROCm build for Windows will likely be available in a month or two (best guess): commits to the relevant ROCm repositories that would enable PyTorch support on Windows suggest it will land in the next big release, which is due soon.
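As an aside, one way to avoid shelling out to lspci on Linux is to read the PCI vendor ID that the kernel exposes under /sys/class/drm (AMD's PCI vendor ID is 0x1002). This is only a sketch under that assumption, not existing chaiNNer code:

```python
from pathlib import Path

AMD_VENDOR_ID = "0x1002"  # PCI vendor ID assigned to AMD/ATI


def has_amd_gpu_sysfs(drm_root: str = "/sys/class/drm") -> bool:
    # On Linux, each DRM device exposes its PCI vendor ID at
    # <drm_root>/card*/device/vendor as a hex string like "0x1002".
    for vendor_file in Path(drm_root).glob("card*/device/vendor"):
        try:
            if vendor_file.read_text().strip().lower() == AMD_VENDOR_ID:
                return True
        except OSError:
            continue  # unreadable or non-PCI entries are skipped
    return False
```

Unlike string-matching lspci output, the vendor ID is stable across marketing names, though it says nothing about whether ROCm actually supports the specific card.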

joeyballentine commented 7 months ago

Thanks for this. Would you mind making a PR with the changes?