comfyanonymous / ComfyUI

The most powerful and modular diffusion model GUI, API, and backend with a graph/nodes interface.
https://www.comfy.org/
GNU General Public License v3.0

Very SLOW and UNSTABLE on directml #2721

Open · xhy2008 opened this issue 8 months ago

xhy2008 commented 8 months ago

I am using this repository and A1111's webui on the same machine with DirectML. At first it always used NORMAL_VRAM even though I had passed --novram, so I changed the code in comfy/model_management.py to make it work. However, with the same parameters (NOVRAM in ComfyUI, LOWVRAM in webui) and the same prompt, sampler, and model, ComfyUI is even slower than webui (60 s/it in ComfyUI vs. 15 s/it in webui), even though the virtual environment is the same one. My computer is not very powerful. 😄

When I used --use-pytorch-cross-attention, the program exited by itself after 1 step. bf16 causes DirectML errors on my device, and I can't use fp8 because my PyTorch has no float8 attribute. (I tried that because I have only 2 GB of VRAM. 😂) But on a 2 GB Nvidia card (GTX 750 Ti) with xformers it worked very well, as if it were an RTX 4090. 👍

After some time, I could no longer generate anything on my AMD card. I don't know what happened, and I'm not very good at Python. Is there anything I can do?
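As an aside, the "no attribute float8" error above usually means the installed PyTorch predates the fp8 dtypes. A minimal sketch for checking (the two dtype names below were added around PyTorch 2.1; this is illustrative, not ComfyUI code):

```python
import torch

# fp8 dtypes only exist in newer PyTorch builds; older ones lack these attributes.
print("torch version:", torch.__version__)
for name in ("float8_e4m3fn", "float8_e5m2"):
    print(f"torch.{name} available:", hasattr(torch, name))
```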

patientx commented 8 months ago

How did you make --novram work??? It doesn't work when you use it; it reverts to normal VRAM. We just tried it with a friend's 2200G yesterday: lowvram just crashed his PC, and novram behaved like normalvram, as if we hadn't passed a VRAM argument at all. But with it he was able to generate with LCM LoRAs at speeds of 5-6 s/it (we tried up to 20 steps). The only problem with this method is you can't do a second run; you have to restart ComfyUI.

xhy2008 commented 8 months ago

I can't even generate an image. I get a lot of out-of-memory errors at around step 5.

xhy2008 commented 8 months ago

I updated to the latest code. No matter what args I use, the program exits after 1 step of sampling, without any error messages (with NOVRAM forced on DirectML). Without NOVRAM I run out of memory.
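A process that dies mid-sampling with no traceback is often crashing in native code rather than raising a Python exception. One way to get at least a stack dump is Python's standard faulthandler module; a minimal sketch (the --directml/--novram flags are ComfyUI's real arguments, the rest is generic Python):

```python
# Either launch ComfyUI with faulthandler enabled from the command line:
#   python -X faulthandler main.py --directml --novram
# or enable it at the very top of the entry script:
import faulthandler

faulthandler.enable()  # dumps the Python stack on fatal signals (SIGSEGV, SIGABRT, ...)
```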

xhy2008 commented 8 months ago

The way to force NOVRAM:

1. Find this block in comfy/model_management.py:

```python
if args.directml is not None:
    import torch_directml
    directml_enabled = True
    device_index = args.directml
    if device_index < 0:
        directml_device = torch_directml.device()
    else:
        directml_device = torch_directml.device(device_index)
    print("Using directml with device:", torch_directml.device_name(device_index))

    torch_directml.disable_tiled_resources(True)
    lowvram_available = False  # TODO: need to find a way to get free memory in directml before this can be enabled by default.
```

2. Change the line `lowvram_available = False` to `lowvram_available = True`.
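After that one-line change, the tail of the block reads (same code as above with only the flag flipped):

```python
    torch_directml.disable_tiled_resources(True)
    # Changed from False: lets --lowvram/--novram take effect on DirectML, even
    # though torch_directml offers no way to query free memory (the TODO above).
    lowvram_available = True
```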

Then it seemed that --novram could work. And then it crashed... 😂