Open xhy2008 opened 8 months ago
How did you make --novram work? It doesn't work when I use it; it reverts to normal VRAM. We tried it yesterday with a friend's 2200G: --lowvram just crashed his PC, but --novram behaved as if we hadn't passed any VRAM argument at all. Still, with it he was able to generate with LCM LoRAs at 5-6 s/it (we tried up to 20 steps). The only problem with this method is that you can't do a second run; you have to restart ComfyUI.
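For anyone following along, these are the memory-related launch options being discussed (the flag names are real ComfyUI options; the exact behavior notes are my rough summary, and the commands assume you run from a standard `main.py` checkout):

```shell
# ComfyUI VRAM modes mentioned in this thread (run from the ComfyUI directory)
python main.py --normalvram    # default: keep the model in VRAM
python main.py --lowvram       # split the model between VRAM and system RAM
python main.py --novram        # most aggressive offload, for very low VRAM
python main.py --directml      # use DirectML (AMD/Intel GPUs on Windows)
```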
I can't even generate an image. I get a lot of out-of-memory errors at around step 5.
I updated to the latest code. No matter what args I use, the program exits after 1 step of sampling, without any error messages (forced NOVRAM on DirectML). Without NOVRAM I run out of memory.
The way to force NOVRAM:

1. Find this block in comfy/model_management.py:

```python
if args.directml is not None:
    import torch_directml
    directml_enabled = True
    device_index = args.directml
    if device_index < 0:
        directml_device = torch_directml.device()
    else:
        directml_device = torch_directml.device(device_index)
    print("Using directml with device:", torch_directml.device_name(device_index))
    lowvram_available = False  # TODO: need a way to get free memory in directml before this can be enabled by default.
```

2. Change the line `lowvram_available = False` to `lowvram_available = True`.
Then it seemed that --novram could work. And then it crashes...
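To illustrate why --novram silently falls back to normal VRAM when that flag is left off: here is a minimal sketch of the fallback logic, simplified and renamed for clarity (the enum and `pick_vram_state` function are illustrative, not the actual code in comfy/model_management.py):

```python
from enum import Enum

class VRAMState(Enum):
    NO_VRAM = 0
    LOW_VRAM = 1
    NORMAL_VRAM = 2

def pick_vram_state(requested: VRAMState, lowvram_available: bool) -> VRAMState:
    """Simplified sketch: when low-VRAM modes are marked unavailable
    (as they are for DirectML by default), a request for NO_VRAM or
    LOW_VRAM quietly falls back to NORMAL_VRAM, matching the behavior
    reported above."""
    if requested in (VRAMState.NO_VRAM, VRAMState.LOW_VRAM) and not lowvram_available:
        return VRAMState.NORMAL_VRAM
    return requested

print(pick_vram_state(VRAMState.NO_VRAM, False))  # falls back to NORMAL_VRAM
print(pick_vram_state(VRAMState.NO_VRAM, True))   # honors the request
```

Flipping `lowvram_available` to `True`, as in step 2 above, is what lets the requested state survive.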
I am using this repository and A1111's webui on the same machine with DirectML. At first it always used NORMAL_VRAM even though I had passed --novram, so I changed the code in comfy/model_management.py to make it work.

However, with the same settings (NO_VRAM in ComfyUI, LOW_VRAM in the webui), the same prompt, sampler, and model, and even the same virtual environment, ComfyUI is slower than the webui (60 s/it in ComfyUI vs. 15 s/it in the webui). My computer is not very powerful.

When I used PyTorch cross-attention, the program exited by itself after 1 step. bf16 on my device causes DirectML errors, and I can't use fp8 because my PyTorch has no float8 attribute (I tried that because I have only 2 GB of VRAM). But on a 2 GB Nvidia card (GTX 750 Ti) with xformers it worked very well, as if it were an RTX 4090.

After some time, I couldn't generate anything at all on my AMD card. I don't know what happened, and I'm not very good at Python. Is there anything I can do?
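On the "no attribute float8" error: older PyTorch builds (roughly pre-2.1) simply don't define the `torch.float8_*` dtypes, so one way to avoid the crash is to probe the module before requesting fp8. `fp8_supported` below is a hypothetical helper for illustration, not ComfyUI code:

```python
def fp8_supported(torch_module) -> bool:
    """Return True if the given PyTorch module exposes an fp8 dtype.

    Builds without torch.float8_e4m3fn raise AttributeError when fp8
    weights are requested, which is the error described above.
    """
    return hasattr(torch_module, "float8_e4m3fn")

# Usage with the real module:
#   import torch
#   if fp8_supported(torch):
#       ...  # safe to try fp8 weight storage
```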