Harbitos closed this issue 1 day ago
Since everything works well in ZLUDA, I think that the inpaint problem is not related to my PC.
It's not so simple. When you use ZLUDA, SD downloads different packages, so your environment is set up differently. It can be totally related to your PC. You said you deleted every package with pip freeze commands? If you did it correctly, then I doubt there is anything we can do about it. Can you show your Sys Info? It's in Settings/sysinfo
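For reference, deleting every package recorded by pip freeze is usually done like this (a sketch; it must be run inside the activated venv, and note that it removes everything the venv has installed):

```shell
rem Run inside the activated venv (venv\Scripts\activate on Windows)
rem First snapshot every installed package, then uninstall everything in the snapshot
pip freeze > installed.txt
pip uninstall -y -r installed.txt
```

After this, relaunching SD should reinstall its dependencies from scratch.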
**MAIN: I need to be able to use inpaint with LoRA and do it on pictures of 2048×2048.** It should work; TheFerumn on an RX570 4GB has DirectML inpaint running even with LoRA.
Yes, but never at such big resolutions due to the 4GB.
Can you show your Sys Info? It's in Settings/sysinfo
SD settings/system, maybe this is it: sysinfo from Harbit.txt
Can you set this in settings/optimizations and check if it's better? It should be enabled by default, but you have it disabled.
Also check the setting below, although you might want to test without it first.
I checked your installed packages. It looks like you didn't delete them as I asked. I doubt it would download newer versions for you than I have. Either you didn't switch Python versions correctly or you didn't switch the backend correctly. I am not sure what happened and which of these packages is responsible, so I suggest you manually uninstall all of them from the list below and then either option 1: let SD install them automatically during launch, or option 2: install them manually one by one. These are my versions of the packages:
"aiohappyeyeballs==2.4.0",
"aiohttp==3.10.6",
"albucore==(not installed)",
"albumentations==1.4.3",
"charset-normalizer==3.3.2",
"eval_type_backport==(not installed)",
"fsspec==2024.9.0",
"ftfy==6.2.3",
"imageio==2.35.1",
"jax==0.4.33",
"jaxlib==0.4.33",
"jsonschema-specifications==2023.12.1",
"lightning-utilities==0.11.7",
"networkx==3.3",
"olive-ai==0.6.2",
"opt-einsum==3.3.0",
"optimum==1.22.0",
"orjson==3.10.7",
"propcache==(not installed)",
"pyparsing==3.1.4",
"python-multipart==0.0.10",
"pywin32==306",
"reportlab==4.2.2",
"rich==13.8.1",
"ruff==0.6.7",
"sounddevice==0.5.0",
"SQLAlchemy==2.0.35",
"termcolor==2.4.0",
"timm==1.0.9",
"tokenizers==0.19.1",
"torchmetrics==1.4.2",
"transformers==4.44.2",
"trimesh==4.4.9",
"uvicorn==0.30.6",
"yarl==1.12.1",
And these are yours:
"aiohappyeyeballs==2.4.3",
"aiohttp==3.10.10",
"albucore==0.0.17",
"albumentations==1.4.18",
"charset-normalizer==3.4.0",
"eval_type_backport==0.2.0",
"fsspec==2024.10.0",
"ftfy==6.3.0",
"imageio==2.36.0",
"jax==0.4.34",
"jaxlib==0.4.34",
"jsonschema-specifications==2024.10.1",
"lightning-utilities==0.11.8",
"networkx==3.4.2",
"olive-ai==0.7.0",
"opt_einsum==3.4.0",
"optimum==1.23.1",
"orjson==3.10.9",
"propcache==0.2.0",
"pyparsing==3.2.0",
"python-multipart==0.0.12",
"pywin32==308",
"reportlab==4.2.5",
"rich==13.9.2",
"ruff==0.7.0",
"sounddevice==0.5.1",
"SQLAlchemy==2.0.36",
"termcolor==2.5.0",
"timm==1.0.11",
"tokenizers==0.20.1",
"torchmetrics==1.5.0",
"transformers==4.45.2",
"trimesh==4.5.0",
"uvicorn==0.32.0",
"yarl==1.15.5",
Most of them are probably not important and not related to your issues, but I suggest you start by downgrading optimum, torchmetrics, and transformers. Before all of that, use pip cache purge, then check which version it actually installed for you, or do it manually and specify the version you want.
Please write me all the commands in order; I didn't quite understand the screenshot that you sent, maybe something didn't work for me.
Can you set this in settings/optimizations and check if it's better? It should be enabled by default, but you have it disabled. Also check the setting below, although you might want to test without it first.
I'll try it now
Please write me all the commands in order; I didn't quite understand the screenshot that you sent, maybe something didn't work for me. By the way, do you have them installed?
I sent you my list and your list of the packages which have different versions. Downgrade them to the versions from the first list. Open your CMD, go into venv/scripts/, then type activate, and use the command pip uninstall package1 package2 package3 etc. to uninstall them, and then use the command pip install <package name>==<version> to install each one. For example, pip uninstall transformers and then pip install transformers==4.44.2
I didn't understand exactly how to do it. Can you send me the whole list of all the commands? If anything, I haven't changed or reinstalled anything since I sent you the sysinfo.
Can you set this in settings/optimizations and check if it's better? It should be enabled by default, but you have it disabled. Also check the setting below, although you might want to test without it first.
I did it with the --no-half --no-half-vae arguments, but Windows showed different memory errors every time and the browser crashed.
I didn't understand exactly how to do it. Can you send me the whole list of all the commands?
I don't know how I can explain it to you in a different or more detailed way...
I did it with the --no-half --no-half-vae arguments, but Windows showed different memory errors every time and the browser crashed.
It happens when you don't have enough memory. What resolution did you use? It shouldn't crash at lower resolutions...
I deleted them all and installed the ones you have. It didn't help.
I even saved the command that I wrote after deleting them:
```
pip install aiohappyeyeballs==2.4.0 aiohttp==3.10.6 albumentations==1.4.3 charset-normalizer==3.3.2 fsspec==2024.9.0 ftfy==6.2.3 imageio==2.35.1 jax==0.4.33 jaxlib==0.4.33 jsonschema-specifications==2023.12.1 lightning-utilities==0.11.7 networkx==3.3 olive-ai==0.6.2 opt-einsum==3.3.0 optimum==1.22.0 orjson==3.10.7 pyparsing==3.1.4 python-multipart==0.0.10 pywin32==306 reportlab==4.2.2 rich==13.8.1 ruff==0.6.7 sounddevice==0.5.0 SQLAlchemy==2.0.35 termcolor==2.4.0 timm==1.0.9 tokenizers==0.19.1 torchmetrics==1.4.2 transformers==4.44.2 trimesh==4.4.9 uvicorn==0.30.6 yarl==1.12.1
```
Here is sysinfo if necessary: new sysinfo from Harbit.txt
And at the end of the installation there was this:
What if you need to set more than 1024 MB of GPU weights? By the way, [Low VRAM Warning] appears when I set 1024 MB.
your PC will explode
C'mon. I don't understand such questions from somebody using AI, where everything you do is learning and experimenting with different settings. Just check what happens and you will know, instead of asking. You can always Google it or ask ChatGPT what each setting does. I don't use it on a 4GB GPU; it's more of a FLUX model setting.
It happens when you don't have enough memory. What resolution did you use? It shouldn't crash at lower resolutions...
I set the resolution to 512×512; it showed me 7%, but then again press any key to continue...
But there were no Windows errors.
C'mon. I don't understand such questions from somebody using AI, where everything you do is learning and experimenting with different settings. Just check what happens and you will know, instead of asking. You can always Google it or ask ChatGPT what each setting does. I don't use it on a 4GB GPU; it's more of a FLUX model setting.
okay
I created an issue so that @lshqqytiger would notice this and suggest something.
OK, to sum it all up. You had blue screens while using ZLUDA, you obviously have some memory issues, and even games need you to restart your GPU drivers. I told you before to check your GPU temperatures, but I guess you ignored it. As far as I can see, your GPU might be defective or something. It's old hardware anyway. It's really hard to tell what's wrong, since you literally tried everything I am using just fine... I am going to be honest: I am shocked you are able to generate 2048 images on this GPU xD What we pushed out of it is big enough; now it's time to upgrade to something better :D
you obviously have some memory issues, and even games need you to restart your GPU drivers
No! Everything is fine with all modern demanding games! For example: Forza Horizon 5 on medium graphics, Minecraft 1.20.1 with heavy shaders (almost).
You had blue screens while using ZLUDA
And there were no blue screens with ZLUDA!
I told you before to check your GPU temperatures, but I guess you ignored it.
The CPU temperature while running SD is 55 degrees. The GPU under SD load is 79 degrees. Yeah, of course I haven't cleaned the PC of dust for a long time: all the coolers are dusty, the desk is covered with dust because of the PC, and one case fan needs to be lubricated; it rattles quietly.
As far as I can see, your GPU might be defective or something
I don't know... the PC handles inpaint at 2048×2048. I just have a problem with LoRA on DirectML. I can switch to ZLUDA and use inpaint normally, but on ZLUDA I can only inpaint images up to 1280×1024; if I set more, there isn't enough memory. Maybe I should try experimenting with ZLUDA again; maybe I can optimize everything there and make pictures at huge resolutions like in DirectML.
I think it has nothing to do with the arguments; I wrote --directml --upcast-sampling --opt-sub-quad-attention, and again got 7% and press any key to continue...
I've decided to check something out here... And I found it, DirectML >:) I asked ChatGPT, and I can try setting a 40GB swap file.
YEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEES
THE PROBLEM IS SOLVED! I've been suffering for a week!
The solution is to set the swap file to 40GB (or more).
I realized this because in the Windows log I noticed that python.exe allocates 39GB of memory, so I set a 40GB swap file. Probably the Windows swap file could not automatically grow that large.
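For anyone hitting the same symptom, a fixed-size page file can also be set from an elevated Command Prompt instead of the System Properties dialog (a sketch using the classic wmic interface; the 40960 MB size and the C: drive are assumptions matching this case, and a reboot is needed afterwards):

```shell
rem Turn off automatic page file management
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
rem Pin the page file on C: to 40 GB (sizes are in MB), then reboot to apply
wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=40960,MaximumSize=40960
```

The equivalent GUI path is System Properties → Advanced → Performance → Virtual memory.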
Yes, but never at such big resolutions due to the 4GB.
@TheFerumn what kind of swap file do you have? I'm just curious.
Thank you for taking the time and trying to help me.
By the way, make a note for yourself: if the console doesn't print an error and just shows press any key to continue..., remember about the swap file.
Well, I knew it was something wrong with your memory, but you said you have free space on your drive and already 20GB allocated for the memory. I assumed you had at least 20GB ready to use and some more free space in reserve to auto-allocate.
These are my settings. I've never had issues, but I haven't tried to generate at 2048x2048. BTW, you said you tried to generate 512x512 with LoRA, so I don't understand how you could be out of memory but at the same time be able to generate 2048x2048 without inpainting. Is inpainting really using so much memory?! Well, at least we have been able to figure out it's not a Forge issue but your PC's.
able to generate 2048x2048 without inpainting
Correction: at 2048×2048 I could also do inpaint without LoRA. Only inpainting with LoRA takes up a lot of memory for me.
I have another problem that has always been there: after a generation, it sometimes shows me the finished picture in the interface and sometimes it doesn't. That is, the image was generated and saved in the outputs folder, but it is not always displayed here:
I've already tried asking ChatGPT; nothing helped. On the Internet, they said to make sure Add number to filename when saving is checked. The --no-gradio-queue argument didn't seem to help either.
Or maybe it's because I deleted a couple of files in the outputs folder, and the solution will just be to wait and generate images a few more times.
It sometimes bugs out when you delete images from your output folder. I think it has something to do with the file names: it loses track of which image it should show you.
This story started from here: 44 — it says in more detail what I have already tried to do.
To recap: everything works well on ZLUDA, and I can use LoRA in inpaint, but ZLUDA cannot make images larger than 1024×1280, so I switched to DirectML. And it turned out that ZLUDA and DirectML generate images in the same amount of time for me.
But on DirectML, inpaint doesn't work with any LoRA, even if I change the model, the prompt, or the resolution. Everything works perfectly without LoRA, though! I was even able to make images at a resolution of 2048×2048 in 15-25 minutes in inpaint without LoRA. And I can also generate 2048×2048 images in 44 minutes in txt2img.
I made pictures in txt2img with these arguments: (inpaint works here, but does not change the picture; the result is unchanged)
I did inpaint with these arguments: (crashes, showing press any key to continue...) I tried other arguments (--always-low-vram --all-in-fp16), but it didn't help.
And there was also [W dml_heap_allocator.cc:120] DML allocator out of memory! with these arguments: once, I was able to set up the arguments (I don't remember which ones, and I can't reproduce it right now) in such a way that I noticed the generation in inpaint was actually running! There were changes in the picture! But at the end, at 100% generation, it also crashed and did not save the picture. All this time, the console was being spammed:
A clean install of SD did not help. Windows was reinstalled this month. Reinstalling git, Python, and the AMD driver (or installing the Pro driver) did not help. If anything, the swap file is on automatic and is about 20GB. Setting the swap file to 1.5 times the RAM did not help, but when I made a mistake and set it to 2 times, my generation reached 29% and then the PC froze; only the cursor moved, and I was able to finish the session.
Since everything works well in ZLUDA, I think that the inpaint problem is not related to my PC.
**MAIN: I need to be able to use inpaint with LoRA and do it on pictures of 2048×2048.** It should work; TheFerumn on an RX570 4GB has DirectML inpaint running even with LoRA.
Changed settings: the optimizations/Batch cond/uncond checkbox has been unchecked. TAESD is selected in VAE/VAE type for decode. Upscaling/Tile size is set to 64 (a screenshot will be attached). Live previews/Live preview display period is set to 2. These are approximately the parameters I set every time. Changing GPU weights 0/1024 makes no difference.
Tile size:
Console after inpainting with any LoRA:
@lshqqytiger, please help me.