crystian / ComfyUI-Crystools

A powerful set of tools for ComfyUI
MIT License
697 stars 33 forks

Crystools and AMD Zluda. #80

Closed Hakim3i closed 1 month ago

Hakim3i commented 1 month ago

Hey, I was able to make Crystools work under AMD ZLUDA just by deleting the GPU check. Can you please make it bypass the check when AMD ZLUDA is detected? This is the gpu.py modification I made:


```python
import torch
import pynvml
import comfy.model_management
from ..core import logger

class CGPUInfo:
    """
    This class is responsible for getting information from GPU (ONLY).
    """
    cuda = False
    pynvmlLoaded = False
    cudaAvailable = False
    torchDevice = 'cpu'
    cudaDevice = 'cpu'
    cudaDevicesFound = 0
    switchGPU = True
    switchVRAM = True
    switchTemperature = True
    gpus = []
    gpusUtilization = []
    gpusVRAM = []
    gpusTemperature = []

    def __init__(self):
        try:
            pynvml.nvmlInit()
            self.pynvmlLoaded = True
        except Exception as e:
            self.pynvmlLoaded = False
            logger.error('Could not init pynvml.' + str(e))

        if self.pynvmlLoaded and pynvml.nvmlDeviceGetCount() > 0:
            self.cudaDevicesFound = pynvml.nvmlDeviceGetCount()

            logger.info(f"GPU/s:")

            # for simulate multiple GPUs (for testing) interchange these comments:
            # for deviceIndex in range(3):
            #   deviceHandle = pynvml.nvmlDeviceGetHandleByIndex(0)
            # ZLUDA does not implement NVML, so the per-device name query
            # was removed and replaced with a hard-coded label.
            gpuName = "Zluda"

            self.cuda = True
            # logger.info(f'NVIDIA Driver: {pynvml.nvmlSystemGetDriverVersion()}')
        else:
            logger.warn('No GPU with CUDA detected.')

        try:
            self.torchDevice = comfy.model_management.get_torch_device_name(comfy.model_management.get_torch_device())
        except Exception as e:
            logger.error('Could not pick default device.' + str(e))

        self.cudaDevice = 'cpu' if self.torchDevice == 'cpu' else 'cuda'
        self.cudaAvailable = torch.cuda.is_available()

        if self.cuda and self.cudaAvailable and self.torchDevice == 'cpu':
            logger.warn('CUDA is available, but torch is using CPU.')

    def getInfo(self):
        logger.debug('Getting GPUs info...')
        return self.gpus

    def getStatus(self):
        # logger.debug('CGPUInfo getStatus')
        gpuUtilization = -1
        gpuTemperature = -1
        vramUsed = -1
        vramTotal = -1
        vramPercent = -1

        gpuType = ''
        gpus = []

        if self.cudaDevice == 'cpu':
            gpuType = 'cpu'
            gpus.append({
                'gpu_utilization': 0,
                'gpu_temperature': 0,
                'vram_total': 0,
                'vram_used': 0,
                'vram_used_percent': 0,
            })
        else:
            gpuType = self.cudaDevice

            if self.pynvmlLoaded and self.cuda and self.cudaAvailable:
                # for simulate multiple GPUs (for testing) interchange these comments:
                # for deviceIndex in range(3):
                #   deviceHandle = pynvml.nvmlDeviceGetHandleByIndex(0)
                for deviceIndex in range(self.cudaDevicesFound):
                    deviceHandle = pynvml.nvmlDeviceGetHandleByIndex(deviceIndex)

                    # NVML queries removed: ZLUDA does not expose them, so
                    # utilization/VRAM/temperature are reported as zeros.
                    gpuUtilization = 0
                    vramPercent = 0
                    vramUsed = 0
                    vramTotal = 0
                    gpuTemperature = 0

                    gpus.append({
                        'gpu_utilization': gpuUtilization,
                        'gpu_temperature': gpuTemperature,
                        'vram_total': vramTotal,
                        'vram_used': vramUsed,
                        'vram_used_percent': vramPercent,
                    })

        return {
            'device_type': gpuType,
            'gpus': gpus,
        }
```
crystian commented 1 month ago

Hi, please, make a PR for this. btw, does it work on windows?

PurplefinNeptuna commented 1 month ago

@crystian ZLUDA is a hacky way to run Stable Diffusion with AMD on Windows by tricking the system into thinking it is using NVIDIA hardware. But the hack is incomplete, and many things are left out, such as NVML for GPU monitoring.

ZLUDA can be detected easily because the torch device name will have [ZLUDA] appended. Probably just detect that and disable GPU monitoring for those users.
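A minimal sketch of the detection described above, assuming the device name reported by torch ends with the `[ZLUDA]` marker (the helper name `is_zluda` is illustrative, not part of Crystools):

```python
def is_zluda(device_name: str) -> bool:
    """Return True when the torch device name indicates a ZLUDA backend.

    ZLUDA-backed torch builds append '[ZLUDA]' to the device name,
    e.g. 'AMD Radeon RX 7900 XTX [ZLUDA]'.
    """
    return '[ZLUDA]' in device_name
```

In `CGPUInfo.__init__`, this could be checked right after `self.torchDevice` is resolved; when it returns True, skip the pynvml-based monitoring (e.g. leave `self.pynvmlLoaded = False`) instead of failing the GPU check.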

Hakim3i commented 1 month ago

> Hi, please, make a PR for this. btw, does it work on windows?

Yes, it does work. I am getting 3 it/s on SDXL with a 7900 XTX. As @PurplefinNeptuna said, it is detected as [ZLUDA]. I am in no way a developer, just an IT guy; all I did was remove the checks, so you would need to add the logic to bypass them.