Closed SoftString181 closed 2 years ago
Do you have hardware acceleration enabled in windows?
Nope, it's disabled
stopped detecting my GPU and instead, running it on my CPU
- How have you come to this conclusion? We'd like to help but we do need to verify your information and your initial conclusion before we can determine the fault.
Oops, sorry, I'll give more info:
So this is actually the third program I've used for SD; both of the previous two worked, but I decided to switch to Web UI because it had more features than the others.
The first program I tried checks whether you have a CUDA GPU every time it starts. That would be irrelevant, except that back when it confirmed I had one, all the other programs worked flawlessly.
However, two days after I started using Web UI, the models just stopped loading for no reason (an endless load, regardless of size), so I thought reinstalling would fix it. But now when I open "webui-user" and start the setup, the CPU usage skyrockets and it freezes my computer; note that resource usage was normal during the first installation.
After trying to fix it for a day, I started that first program again just for testing, and now it gives me a message saying that I don't have a CUDA GPU and sets the CPU as default.
Okay. To be clear; this does not sound like an SD issue. This sounds like your system needs troubleshooting at a deeper level. I'm more than willing to chase it down if we can.
To make further progress, it's troubleshooting Q&A all the way down. This isn't a known failure, and with what we have it's not obvious -- people are literally just guessing and providing things that helped them with issues previously. That could work on the next attempt, or not at all.
I'm not suggesting people stop suggesting fixes -- quite the opposite. If you (anyone) think you can provide a fix, do so -- the best I can do is more manual-level troubleshooting via Q&A.
When you say "freezes my computer", what does that mean? Absolute hard-lock, no mouse cursor/movement, no keyboard interaction, no disk activity, completely dead screen? When does it freeze? What's shown in the prompt window?
When you say you 'reinstalled' it -- there's no installer. What does 'reinstalled' mean? Did you delete the entire folder and re-clone the repository, then re-run the .bat file? Have you also erased and re-downloaded the model itself, in case that's become corrupted?
Are you overclocking ANYTHING in your system AT ALL? (RAM XMP/DOCP/EOCP == overclocking, even if it's enabled by default)
Also, someone suggested that your page file might be too small and suggested to make it bigger. That's helpful but incredibly vague, so let's clarify -- what size is your page file, specifically? Have you used Task Manager's Performance panel to view RAM/pagefile usage? Also bear in mind, with minimal RAM and relying on a page file, there can be A LOT of disk IO and if you have a spinning disk instead of an SSD handling either your page file or the SD model, it can take an eternity for things to load.
EDIT: I also have to take issue with your title. webui-user is not known to be at fault, and has shown no indication of it. OTHER SOFTWARE is not detecting your CUDA GPU, and this software is ALSO not working. That points to a common cause outside this software.
Actually hold on. Am I overthinking this? You said reinstalled, assuming you erased the folder and did the thing -- did you reinstate the low-vram options?
https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#4gb-videocard-support
EDIT: To clarify, because the Features page doesn't seem to note it specifically -- you need to add the `--medvram` or `--lowvram` argument to webui-user.bat, NOT pass it as an argument via shortcut. (Or at least, passing it as an argument via shortcut failed for me in the past.)
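For reference, a minimal webui-user.bat with the flag in place. The surrounding lines below follow the stock file shipped with the repo as I recall it; only the `COMMANDLINE_ARGS` line should need editing:

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--medvram

call webui.bat
```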
Thanks, I'll try to be clear:
The freezing: It's as it sounds, absolutely nothing works, not even the mouse or the clock. It freezes a few seconds after the "Installing torch" message shows up.
Reinstall: I saved all the models and other important files related to them, and deleted the folder. Then I extracted the zip file again and tried to run the .bat file.
Overclock: I'm not quite sure about those terms, but here's the RAM and CPU usage specifically; I'll link a picture here. (The RAM usage there looks delayed; most of the time it crashes around 5600 MB.)
Page file: I've now set it for the system to manage; before, I tried the maximum value (16000 MB), but the results were the same.
In addition, I think it's a technical problem, in normal situations my computer can handle SD completely fine.
OK. That does address everything I've mentioned.
If you're not familiar with overclocking, it's safe to assume you're not doing it. In short, overclocking is running CPU/RAM/GPU/etc at speeds above their rated maximum, for a performance benefit. It's not dangerous but pushing it slightly too far can cause crashes. You're not doing that, so we're good there.
Your system is definitely running out of main RAM. "It worked before" might not really matter; this fork adds features CONSTANTLY. I've done a pull three times today, and there have been updates every time. It may be that your system cannot run it anymore, if you've updated. If you're still using an old downloaded zip, that clearly isn't the case, but it really does seem to be related to not enough system RAM, eventually causing a lock-up despite the page file existing. Can you do anything to decrease system RAM usage? Close browsers, shut off things in the system tray that aren't necessary like weather widgets and Steam and messengers and such?
EDIT: Also I have obligations beginning shortly, but I'll be back in ~16hrs ish to see if I can help further
I forgot to mention, I've tried it with the older version that worked and, again, same results.
About the RAM usage, I closed absolutely everything I could, I even ran it in safe mode.
I still think it's weird that the first program says my GPU doesn't have CUDA anymore, when it clearly said it did three days ago when everything was working. Python was most likely running on my GPU back then, which is why it was stable.
EDIT: It's ok, take your time, we can continue later. I really appreciate the help tho, thanks
Have you tried updating your GPU driver? When you reinstalled, you might have installed an older version that is no longer supported.
Yes, I've tried updating everything, even reinstalling CUDA drivers, nothing worked :(
Update: I managed to get past the freeze during installation by installing PyTorch locally, which for some reason made the setup less laggy, so it worked. But now I'm back to the endless model loading.
However, I've reached a tentative conclusion that the fix is actually the page file. My guess is that I messed it up at some point, causing this chain of problems that led to the installation error. What would be the recommended size for it?
After 10 minutes of loading, it gave me this:
Traceback (most recent call last):
File "D:\stable-diffusion-webui-master\launch.py", line 172, in <module>
start_webui()
File "D:\stable-diffusion-webui-master\launch.py", line 167, in start_webui
webui.webui()
File "D:\stable-diffusion-webui-master\webui.py", line 92, in webui
initialize()
File "D:\stable-diffusion-webui-master\webui.py", line 85, in initialize
shared.sd_model = modules.sd_models.load_model()
File "D:\stable-diffusion-webui-master\modules\sd_models.py", line 195, in load_model
sd_model.to(shared.device)
File "D:\stable-diffusion-webui-master\venv\lib\site-packages\pytorch_lightning\core\mixins\device_dtype_mixin.py", line 113, in to
return super().to(*args, **kwargs)
File "D:\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 927, in to
return self._apply(convert)
File "D:\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
module._apply(fn)
File "D:\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
module._apply(fn)
File "D:\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "D:\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 602, in _apply
param_applied = fn(param)
File "D:\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 925, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA out of memory. Tried to allocate 74.00 MiB (GPU 0; 4.00 GiB total capacity; 3.39 GiB already allocated; 0 bytes free; 3.46 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
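As an aside, the key figures in that error can be read straight out of the message text. A small stdlib-only sketch (the numbers are copied from the traceback above; no GPU needed):

```python
import re

# The figures from the RuntimeError above, parsed so the gap between
# "reserved in total by PyTorch" and "already allocated" (potential
# fragmentation) is easy to see.
msg = ("CUDA out of memory. Tried to allocate 74.00 MiB (GPU 0; 4.00 GiB "
       "total capacity; 3.39 GiB already allocated; 0 bytes free; "
       "3.46 GiB reserved in total by PyTorch)")

def gib(label, text):
    """Extract the GiB figure preceding a given label in the message."""
    m = re.search(r"([\d.]+) GiB " + label, text)
    return float(m.group(1)) if m else 0.0

allocated = gib("already allocated", msg)   # 3.39
reserved = gib("reserved in total", msg)    # 3.46
total = gib("total capacity", msg)          # 4.00

# Reserved is only ~0.07 GiB above allocated, so fragmentation (the
# max_split_size_mb hint) is not the issue here -- the model genuinely
# does not fit in 4 GiB without the low-VRAM optimizations.
print(f"allocated={allocated} reserved={reserved} total={total}")
```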
The recommended size for the page file is zero. I'm not trying to be snarky, I'll explain;
I have no page file. I have lots of RAM. 72gb in one system, 144gb in another, desktop runs 32gb.
The page file is to make up for having an insufficient amount of system RAM, at the penalty of "it really really sucks for speed" especially if you're on a spinning hdd instead of an ssd.
There is no size that works better than any other. Windows uses exactly as much space as it needs inside the page file to store the overflow that won't fit in your RAM. A page file that's too small risks not holding enough, causing crashes or lock-ups. One that's too large wastes disk space, but will neither help nor hinder performance. "System managed" just means it expands only if your system shows a consistently high need, and shrinks if you don't use much for a long time.
If you're using a ton of page file (aka swap space) and experiencing extreme slowdown (known as 'thrashing'), the only real solutions are to reduce memory usage or install more RAM.
Remember, the system only uses page/swap when it will improve RAM performance. If you have enough, or way more than enough RAM, that almost never happens.
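To make the RAM-plus-page-file question concrete, here's some back-of-envelope arithmetic with HYPOTHETICAL numbers (the installed RAM amount was never stated in this thread; swap in your own figures):

```python
# Rough check: can a model load fit in physical RAM plus page file?
# All figures are illustrative assumptions, not measurements.
ram_gb = 8           # ASSUMED installed RAM (not stated in the thread)
pagefile_gb = 16     # the ~16000 MB maximum the user mentioned trying
os_overhead_gb = 3   # rough Windows + background-process footprint

model_gb = 7             # the 7 GB checkpoint mentioned in the thread
load_peak_gb = model_gb * 2  # loading can briefly hold ~2x the file size

available = ram_gb - os_overhead_gb + pagefile_gb
print(f"peak need ~{load_peak_gb} GB, available ~{available} GB")
# It fits on paper, but everything beyond the ~5 GB of free physical RAM
# goes through the page file -- hence the extreme slowdown (thrashing).
```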
A Linux live boot on a high-performance USB drive would be enough to run an absolutely minimal distro, load Python etc., and run the system. But the reason most people are surprised you've made it run at all is simple:
Your system is on the current absolute minimum end of the spectrum of hardware which can run this network. You might get it to run. Maybe. Maybe consistently. Maybe every time. But it's barely going to fit, at best, and it'll only improve if optimizations outpace feature additions for a while, or something big happens with RAM/VRAM usage.
That's on the horizon, this is some of the fastest moving technology I've ever seen because it's reaching more and more hands which are SO eager to play and experiment and learn and improve, it's staggering.
Windows also supports ReadyBoost on sufficiently performant USB devices. I'm not sure if it's in Windows 11, but it's in 7 through 10, I believe. I have NO IDEA if ReadyBoost will improve things at all, and it may reduce the lifespan of the USB device in use due to high rewrite rates, but if anything might help, this could, maybe, possibly. Lol.
Right now, getting it running on a system with those specs is attainable, but it's a delicate balance; there's no solid "works for everyone" and no established preflight checklist that I'm aware of. I'm sorry, but definitely keep trying, keep optimizing, and watch for updates. Right now you're at the edge; six to nine months ago some would've said you had no hope.
Please let us know what you find, what works, what doesn't. Ask any questions you can think of. I'll try to help if I can, even if it's just explaining general things to help you understand what's going on and why.
Again, thank you very much for the support
I'm still wondering what happened to make it stop working. I was generating 512x512 images at 100 steps in 3 minutes with a 7 GB model, with more things in the background, completely fine. I hadn't updated or edited any of the files in the folder before I reinstalled it...
(Btw, I have an SSD, but SD is installed on my HDD; I don't think that matters, though.)
EDIT: It loads now with the arguments `--lowvram --always-batch-cond-uncond --precision full --no-half`. I wanted to load it with zero arguments like before, though...
EDIT 2: It doesn't generate the images :(
Thanks for understanding.
Honestly I'm not sure what may've broken. We can try to work through it.
We'll start relatively close to the beginning. From which source did you install Python? MS Store, downloaded .exe, some sort of script/packager like Miniconda?
Other notes: (This is all to my understanding; I am not an expert, just an experienced hacker. Someone please let me know if I'm incorrect at any point and I'll make amendments where appropriate. Thank you.)
`--always-batch-cond-uncond` is not ideal for your case. The wording in the wiki seems easy to misunderstand; my understanding is that it overrides an optimization done under the `--lowvram` and `--medvram` optimizations, and would worsen a low-memory scenario. I'm fairly sure my understanding is correct; if anyone knows better, please correct me. Either way, try removing it.
The `--precision full` argument likewise overrides a memory optimization technique, this time in VRAM. "Normally", aka previously, models were operated on in "full precision", i.e. 32-bit floating point. It's been discovered that 16-bit is not only sufficient, it can provide better results in some models, and it saves exactly 50% of the memory footprint of that section of the model in VRAM, while also halving the memory bandwidth needed to process it.
`--no-half` similarly prevents converting the model itself from 32-bit to 16-bit precision (the reasoning is explained above), resulting in the model itself occupying twice as much VRAM.
To my understanding, neither of the latter de-optimizations will improve main system RAM usage, in either throughput or footprint. I would suggest removing them, and simply going with the author-curated optimizations provided by `--lowvram` or (only or) `--medvram`, unless, once you have it stable, a feature you require is gated behind 32-bit precision on either end.
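The 50% figure can be demonstrated with nothing but the standard library: Python's `struct` module has supported the IEEE 754 half-precision format code `'e'` since 3.6, so the per-weight sizes are easy to check (the parameter count is taken from the "859.52 M params" line in the logs later in this thread):

```python
import struct

# Half precision ('e') vs single precision ('f'): the storage saving
# that --precision full / --no-half give up.
assert struct.calcsize("e") == 2   # float16: 2 bytes per weight
assert struct.calcsize("f") == 4   # float32: 4 bytes per weight

params = 859_520_000  # "DiffusionWrapper has 859.52 M params." from the log
fp32_gib = params * struct.calcsize("f") / 2**30
fp16_gib = params * struct.calcsize("e") / 2**30
print(f"fp32 ~{fp32_gib:.2f} GiB, fp16 ~{fp16_gib:.2f} GiB")
```

Note the fp32 figure lands right around the 3.39 GiB "already allocated" in the out-of-memory error, which is consistent with the model being loaded at full precision on a 4 GiB card.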
I'll note the purpose of those arguments, thanks.
In short: I had Python on my computer before, but it still gave me the "You don't have Python installed" message, so I got it from the MS Store; I had also installed Anaconda before. I've now uninstalled that version and I'm using v3.10.6.
In case you wanna know the whole process until I got to this point, it was like this:
I downloaded NMKD's Stable Diffusion and tested it; it worked well, but also gave me the "Out of memory" error when I tried to load bigger models. So I switched to Cmdr2's SD, which worked (and still works) flawlessly, but since it didn't have support for hypernetworks, I switched to Automatic1111's, which worked as well. It was taking 5 minutes on average to load the same model I'm trying now, and I could still use things like Discord or the browser without any stutters or freezes while generating images.
Acknowledged. You may also try the `--xformers` option; it optimizes several things for better performance and may also improve low-memory situations, though I'm not certain. It's known to be supported on Pascal cards, and I'm fairly sure the 1050 Ti is Pascal.
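A quick way to double-check the architecture claim: CUDA compute capability maps to architecture generation, and the GTX 1050 Ti reports capability 6.1, which is Pascal. A small lookup sketch (on a live system you'd get the two numbers from `torch.cuda.get_device_capability()`):

```python
# Lookup from CUDA compute capability (major version) to NVIDIA
# architecture generation, to sanity-check xformers support claims.
ARCH_BY_MAJOR = {
    5: "Maxwell",
    6: "Pascal",
    7: "Volta/Turing",
    8: "Ampere",
}

def arch_name(major, minor):
    return ARCH_BY_MAJOR.get(major, f"unknown (sm_{major}{minor})")

print(arch_name(6, 1))  # GTX 1050 Ti reports capability 6.1 -> "Pascal"
```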
Also just to be clear, are you putting the arguments within the webui-user.bat file or in the shortcut used to run it? Putting arguments in the shortcut does not work. This version of SD will indicate on the terminal which arguments it sees.
It's often really tricky to nail down why "it worked before but won't now" especially when steps have already been taken to attempt to fix things. I don't have any overarching insights, so my impulse is to continue from the bottom up.
I personally suggest uninstalling the MS Store and Anaconda versions if you don't need them. Just use a regular Python 3.10 installer from python.org; that should be enough for this distribution. Some of the other SD builds attempted to use those, but I've found plain Python/venv installations work best most frequently, though some need a little tweaking. This build uses the venv method very well, and my suggestion of "ideal" may not fit everyone's definition.
If you do uninstall the others, or have done so and wish to be sure;
Windows provides the `where` tool, which we can use to make sure the active Python is the one we want -- `where python` should tell you which Python is being run, and you can remove the others manually if it isn't the one you expect.
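The same check can be done from Python itself, cross-platform, with `shutil.which` (a rough equivalent of Windows' `where`):

```python
import shutil

# Shows which interpreter a bare `python` command would resolve to on
# PATH -- the Python-side equivalent of `where python` on Windows.
path = shutil.which("python")
print(path if path else "no `python` found on PATH")
```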
Remnants of Miniconda etc. should be in your user directory. On Windows, I find that pressing Win+R and running, literally, `.` (just a period) is the fastest route to that path. If you've removed Miniconda etc. and don't need them, simply delete their named directories from that path. They shouldn't interfere, but if they aren't necessary there's no cause to keep them.
Once a stable external Python environment exists, this SD repo builds a venv (a virtual environment: a clone of everything Python-related for this tool) in the `venv` directory alongside webui-user.bat. If you need or want to purge and rebuild it to ensure nothing stale or extra is present, simply delete the `venv` folder and it'll be re-created. Nothing user-data-important is stored there, but re-downloading can obviously take some time.
That's all I can come up with for now.
I've tried installing Python again and uninstalling the unnecessary stuff; it still didn't work...
I have almost zero clue what to do right now... the only thing I can think of is that it's not detecting the GPU; even NMKD's SD is no longer detecting it and is using my CPU.
Unfortunately, all I've got is "wait for optimizations" or upgrade your RAM. I'll let you know if I see anything likely to help outright but until then I'm also out of ideas.
If it wasn't detecting your GPU it would be telling you in the console. To be more certain use Task Manager to check GPU VRAM loading.
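A guarded sketch of the same check from the Python side, for anyone who wants a console answer instead of Task Manager (it degrades gracefully if torch isn't installed; `torch.cuda.mem_get_info` is a real PyTorch call on recent versions):

```python
# Report CUDA availability and free/total VRAM, if torch is installed.
try:
    import torch
except ImportError:
    torch = None

def cuda_report():
    if torch is None or not torch.cuda.is_available():
        return "CUDA not available (running on CPU)"
    free, total = torch.cuda.mem_get_info()
    return f"{free / 2**20:.0f} MiB free of {total / 2**20:.0f} MiB"

print(cuda_report())
```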
I reinstalled Windows, and it still gives me that error:
Traceback (most recent call last):
File "D:\stable-diffusion-webui-win-x64\launch.py", line 181, in <module>
start_webui()
File "D:\stable-diffusion-webui-win-x64\launch.py", line 176, in start_webui
webui.webui()
File "D:\stable-diffusion-webui-win-x64\webui.py", line 92, in webui
initialize()
File "D:\stable-diffusion-webui-win-x64\webui.py", line 85, in initialize
shared.sd_model = modules.sd_models.load_model()
File "D:\stable-diffusion-webui-win-x64\modules\sd_models.py", line 195, in load_model
sd_model.to(shared.device)
File "D:\stable-diffusion-webui-win-x64\venv\lib\site-packages\pytorch_lightning\core\mixins\device_dtype_mixin.py", line 113, in to
return super().to(*args, **kwargs)
File "D:\stable-diffusion-webui-win-x64\venv\lib\site-packages\torch\nn\modules\module.py", line 927, in to
return self._apply(convert)
File "D:\stable-diffusion-webui-win-x64\venv\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
module._apply(fn)
File "D:\stable-diffusion-webui-win-x64\venv\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
module._apply(fn)
File "D:\stable-diffusion-webui-win-x64\venv\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "D:\stable-diffusion-webui-win-x64\venv\lib\site-packages\torch\nn\modules\module.py", line 602, in _apply
param_applied = fn(param)
File "D:\stable-diffusion-webui-win-x64\venv\lib\site-packages\torch\nn\modules\module.py", line 925, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA out of memory. Tried to allocate 74.00 MiB (GPU 0; 4.00 GiB total capacity; 3.39 GiB already allocated; 0 bytes free; 3.46 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Pressione qualquer tecla para continuar. . .
I checked the VRAM, and once it loads the "vae.pt" file, it goes up instantly, then crashes
EDIT: It isn't related to the vae file, it still crashes while the model is loading
Okay, we've established it clearly detects your GPU. You can stop worrying about that. It's using it, it detects it.
You haven't clarified whether you're running your arguments correctly, as I asked before, and you've only provided small clips of the logs, so I can't look for myself.
Sorry, I forgot about that, but yes, I was running them in the Webui-user.bat
If you're only using `--lowvram` as an argument and it's still crashing, and you're not doing anything else, I don't know what to suggest next.
Though I am curious about the directory structure there, you're running it from "stable-diffusion-webui-win-x64" -- why that name? That's not the name provided anywhere, and is semantically inaccurate. Are you sure you've cloned the correct git repository? What I'm looking at is the equivalent of seeing a Windows error message showing "App.dmg" as the executable attempting to be run.
Oh, I used Cmdr2's setup as an attempt to see if it made any difference, that's why the name, but results were the same. What it does is basically download the files from the original repository.
Btw, here's the full log:
venv "D:\stable-diffusion-webui-win-x64\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: c8045c5ad4f99deb3a19add06e0457de1df62b05
Installing requirements for Web UI
Launching Web UI with arguments: --xformers --disable-safe-unpickle
Loading config from: D:\stable-diffusion-webui-win-x64\models\Stable-diffusion\model1.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Keeping EMAs of 688.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 64, 64) = 16384 dimensions.
making attention of type 'vanilla' with 512 in_channels
Loading weights [e6e8e1fc] from D:\stable-diffusion-webui-win-x64\models\Stable-diffusion\model1.ckpt
Loading VAE weights from: D:\stable-diffusion-webui-win-x64\models\Stable-diffusion\model1.vae.pt
Traceback (most recent call last):
File "D:\stable-diffusion-webui-win-x64\launch.py", line 181, in <module>
start_webui()
File "D:\stable-diffusion-webui-win-x64\launch.py", line 176, in start_webui
webui.webui()
File "D:\stable-diffusion-webui-win-x64\webui.py", line 92, in webui
initialize()
File "D:\stable-diffusion-webui-win-x64\webui.py", line 85, in initialize
shared.sd_model = modules.sd_models.load_model()
File "D:\stable-diffusion-webui-win-x64\modules\sd_models.py", line 195, in load_model
sd_model.to(shared.device)
File "D:\stable-diffusion-webui-win-x64\venv\lib\site-packages\pytorch_lightning\core\mixins\device_dtype_mixin.py", line 113, in to
return super().to(*args, **kwargs)
File "D:\stable-diffusion-webui-win-x64\venv\lib\site-packages\torch\nn\modules\module.py", line 927, in to
return self._apply(convert)
File "D:\stable-diffusion-webui-win-x64\venv\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
module._apply(fn)
File "D:\stable-diffusion-webui-win-x64\venv\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
module._apply(fn)
File "D:\stable-diffusion-webui-win-x64\venv\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "D:\stable-diffusion-webui-win-x64\venv\lib\site-packages\torch\nn\modules\module.py", line 602, in _apply
param_applied = fn(param)
File "D:\stable-diffusion-webui-win-x64\venv\lib\site-packages\torch\nn\modules\module.py", line 925, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA out of memory. Tried to allocate 74.00 MiB (GPU 0; 4.00 GiB total capacity; 3.39 GiB already allocated; 0 bytes free; 3.46 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Well... I tried installing it the proper way, but I think I fried something...
Now I'm almost 100% sure the root problem was something on the GPU getting corrupted or broken, which would explain why nothing was detecting CUDA, but I'll see if I can get in touch with someone who can fix that. So I'll close this issue for now.
If you don't mind, do you have Discord? Maybe we could still stay in touch, ofc, if it's not a problem for you. Just in case, mine is SoftString181 #3911.
Either way, thank you very much for your help. :)
Okay. You're using someone else's repo, posting logs here as if they're from this repo, and NOT using the memory-saving arguments for either -- which we've established are necessary for your setup. You're FLAILING, as indicated by the fact that you've somehow erased your bootloader while trying to "install it the proper way". While we're at it: which 'it' this time? What is the "proper way" you were following? What did you specifically DO? There's no "installer", so you're just using words and hoping they work together. We've covered "installer" already. Be detailed, and be clear.
There is no "proper way" which even comes close to your bootloader, any boot files, or the UEFI/BIOS. That's like saying you installed a cup in your cup holder "the proper way" and your car caught fire; they're SO separate it's not funny. That's hardware failure, or just doing random crap you see online. Stop flailing. If you don't know what you're doing, ask. If you don't know if what you're about to do is a good idea, ask.
The GPU going bad CAN NOT cause a boot failure in the manner you're showing. Your GPU is not bad, it wasn't corrupted, it's not the problem. Some critical Windows-related files on your drive have been erased or corrupted during one of the things you've tried to do.
Most importantly; STOP USING OTHER SD REPOS TO TROUBLESHOOT THIS SD REPO. Stop showing me logs from other things and acting like they're from THIS thing. No screenshots from another SD repo, no logs from another SD repo. Don't even INSTALL another SD repo, until you can figure out how to do ONE of them correctly. Stop assuming and claiming things like "it can't detect the GPU" because you don't know what you're doing. Your assertions aren't correct.
Delete EVERYTHING related to this. The models, the zips, the installers, everything. Do this before or after you get your computer running again.
Use `--lowvram` AND NOTHING ELSE. Run webui-user.bat from THIS REPO. DO NOT run or install ANY OTHER SD REPOS until AFTER we're done troubleshooting this one; you're dirtying your environment, confusing the situation, and wasting my time. I need to be clear -- this is the last time I'm interested in trying unless you stop doing completely baffling things like showing me logs from SOMETHING ELSE.
Ok man, I'm sorry about that. I didn't want to waste your time or bother you; I'm really sorry if I did.
This last log I sent was the ONLY one from the other files; the other ones I showed here were from the original repository, I promise. The only reason I downloaded that one was that it looked like a faster way to set things up since I had reinstalled Windows, and I was also wondering if it would maybe fix something. But the files that resulted in that "boot error" were from the original repository.
However from the information I've seen, it only downloads everything from the Stable Diffusion Webui repository, so I didn't think it would make a difference. I'll send the link here.
https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2356
I'll follow your advice, though.
Also, sorry about the incorrect terms, my English is not very good, so I can get confused sometimes, I'm trying my best here.
Once I get my PC running again, I'll do everything you've said in the last message, get the files from the Stable Diffusion Web-ui repository and send the complete log here.
Alright! I managed to boot the PC again and I made a full reinstallation of Windows 10. I'll try to be as accurate as possible with my steps here:
With only the `--lowvram` argument, it took around 5 minutes to load the model. And hey, it finally worked! These were the resources being used while generating a 512x512 image, with 20 steps and a CFG scale of 7:
Full log:
venv "D:\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: cf47d13c1e11fcb7169bac7488d2c39e579ee491
Installing requirements for Web UI
Launching Web UI with arguments: --lowvram
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Downloading: 100%|███████████████████████████████████████████████████████████████████| 939k/939k [00:01<00:00, 615kB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████| 512k/512k [00:00<00:00, 646kB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████████████████| 389/389 [00:00<?, ?B/s]
Downloading: 100%|████████████████████████████████████████████████████████████████████████████| 905/905 [00:00<?, ?B/s]
Downloading: 100%|████████████████████████████████████████████████████████████████████████| 4.41k/4.41k [00:00<?, ?B/s]
Downloading: 100%|████████████████████████████████████████████████████████████████| 1.59G/1.59G [03:48<00:00, 7.48MB/s]
Loading weights [7460a6fa] from D:\stable-diffusion-webui\models\Stable-diffusion\sd-v1-4.ckpt
Global Step: 470000
Applying cross attention optimization (Doggettx).
Model loaded.
Loaded a total of 0 textual inversion embeddings.
Embeddings:
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [01:35<00:00, 4.76s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [01:22<00:00, 4.10s/it]
(If you were wondering what model I was trying to load before, it's the Novel AI model.)
Omg, it took so long to figure out, but the NAI model finally loads. Thank you so much for your help, Codefaux, I really appreciate it! :)
Hey fantastic, glad to hear that resolved it.
I apologize for the harshness of my outburst, I get frustrated and respond badly and that's something I'm trying to work on. Overall, you've been good to work with.
Given the graphs, you could likely get away with `--medvram` for a speed bump, and also try `--xformers` with either the med- or low-vram setting. None of these should require a manual download or install of any other software.
The `--xformers` option uses an optimized library to replace another, but requires specific hardware to enable without manual work (Intel + Pascal are supported directly, so you're OK there). It shouldn't impact memory footprint, so it's worth trying out; you may see a speed improvement.
If there's anything else I can help with, let me know.
Oh also -- regarding your screenshots -- in Task Manager, on the Performance/GPU page, you can switch from Video Decode to CUDA via dropdown to see GPU CUDA usage. It'll usually be 100% but it's nice to see lol
It's ok, don't worry.
Xformers seems to be working fine, and so far I don't need any of the VRAM arguments. The only problem I don't think has a fix is GPU compatibility; apparently the GTX 1050 Ti specifically generates different results compared to other GPUs. I don't really mind that, though; it's working anyway.
Referring to issue #2541, I found the problem that's causing it. For some unknown reason the program stopped detecting my GPU and instead runs it on my CPU, consequently freezing the PC.
I checked the drivers and I'm 100% sure my GPU has CUDA support, so I have no idea why it isn't being detected.
And as I mentioned in the other report, it was working completely fine a few days ago.
Can anyone help me with this please? I've been trying to solve this problem for days. :(
Desktop: