RadFox34 opened 12 months ago
That is a `MemoryError`: you need more memory to run it successfully. Downloading the pytorch .whl file takes a lot of memory; 8 GB is not enough. Alternatively, you can increase your virtual memory (page file) size.
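If the `MemoryError` occurs while pip is fetching the torch wheel, one low-memory workaround (a sketch, not verified on this setup) is to install torch manually with pip's cache disabled, so pip does not keep a second copy of the large wheel:

```shell
# Run from the stable-diffusion-webui directory, with its venv active if you use one.
# --no-cache-dir stops pip from writing the multi-GB torch wheel to its cache,
# reducing peak disk/memory pressure during the install.
pip install --no-cache-dir torch torchvision
```

Whether this resolves the error on an 8 GB machine is an assumption; increasing the page file size remains the fallback.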
> Downloading the pytorch .whl file takes a lot of memory; 8 GB is not enough.
I have 16 GB of memory and it was plenty when I first used this, but now it's an issue when attempting a reinstall. As I mentioned above, downloading the pytorch .whl did not fix anything.
Maybe you can try using your system Python instead of the venv.
Run:

```
python launch.py --lowvram
```
I tried that and it gave me this:
```
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.4.1
Commit hash: f865d3e11647dfd6c7b2cdf90dde24680e58acd8
Traceback (most recent call last):
  File "D:\AI\stable-diffusion-webui\launch.py", line 38, in <module>
    main()
  File "D:\AI\stable-diffusion-webui\launch.py", line 29, in main
    prepare_environment()
  File "D:\AI\stable-diffusion-webui\modules\launch_utils.py", line 268, in prepare_environment
    raise RuntimeError(
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
```
so I tried the arg and it did this instead:
```
D:\AI\stable-diffusion-webui>python launch.py --lowvram --skip-torch-cuda-test
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.4.1
Commit hash: f865d3e11647dfd6c7b2cdf90dde24680e58acd8
Installing requirements
Launching Web UI with arguments: --lowvram --skip-torch-cuda-test
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Loading weights [812cd9f9d9] from D:\AI\stable-diffusion-webui\models\Stable-diffusion\anythingV3_fp16.ckpt
Exception in thread Thread-17 (first_time_calculation):
Traceback (most recent call last):
  File "C:\Users\ninjj\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\ninjj\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "D:\AI\stable-diffusion-webui\modules\devices.py", line 170, in first_time_calculation
    linear(x)
  File "C:\Users\ninjj\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 400, in lora_Linear_forward
    return torch.nn.Linear_forward_before_lora(self, input)
  File "C:\Users\ninjj\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
preload_extensions_git_metadata for 7 extensions took 0.00s
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 7.7s (import torch: 2.5s, import gradio: 1.2s, import ldm: 0.6s, other imports: 1.4s, load scripts: 1.3s, create ui: 0.4s, gradio launch: 0.1s).
Creating model from config: D:\AI\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying attention optimization: InvokeAI... done.
Textual inversion embeddings loaded(0):
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
  File "C:\Users\ninjj\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Users\ninjj\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\ninjj\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "D:\AI\stable-diffusion-webui\webui.py", line 306, in load_model
    shared.sd_model  # noqa: B018
  File "D:\AI\stable-diffusion-webui\modules\shared.py", line 726, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "D:\AI\stable-diffusion-webui\modules\sd_models.py", line 422, in get_sd_model
    load_model()
  File "D:\AI\stable-diffusion-webui\modules\sd_models.py", line 510, in load_model
    sd_model.cond_stage_model_empty_prompt = sd_model.cond_stage_model([""])
  File "C:\Users\ninjj\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\AI\stable-diffusion-webui\modules\sd_hijack_clip.py", line 229, in forward
    z = self.process_tokens(tokens, multipliers)
  File "D:\AI\stable-diffusion-webui\modules\sd_hijack_clip.py", line 254, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "D:\AI\stable-diffusion-webui\modules\sd_hijack_clip.py", line 302, in encode_with_transformers
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
  File "C:\Users\ninjj\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1538, in _call_impl
    result = forward_call(*args, **kwargs)
  File "C:\Users\ninjj\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\clip\modeling_clip.py", line 811, in forward
    return self.text_model(
  File "C:\Users\ninjj\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\ninjj\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\clip\modeling_clip.py", line 721, in forward
    encoder_outputs = self.encoder(
  File "C:\Users\ninjj\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\ninjj\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\clip\modeling_clip.py", line 650, in forward
    layer_outputs = encoder_layer(
  File "C:\Users\ninjj\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\ninjj\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\clip\modeling_clip.py", line 378, in forward
    hidden_states = self.layer_norm1(hidden_states)
  File "C:\Users\ninjj\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\ninjj\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\normalization.py", line 190, in forward
    return F.layer_norm(
  File "C:\Users\ninjj\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\functional.py", line 2515, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
Stable diffusion model failed to load
```
@RadFox34
Here's how I got mine to run:

```
python launch.py --lowvram --skip-torch-cuda-test --no-half
```
A safe bet is to reinstall all dependencies to make sure you have a clean slate; I'm pulling the same directory from a year ago.
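To make those flags persistent instead of typing them each launch, the usual place is the `COMMANDLINE_ARGS` variable in `webui-user.bat` (the same variable the RuntimeError above refers to). A sketch, assuming the default `webui-user.bat` layout; `--skip-torch-cuda-test` lets a CPU-only torch build pass the startup check, and `--no-half` loads weights in fp32, which avoids the two "not implemented for 'Half'" errors on CPU:

```shell
@echo off
rem webui-user.bat (sketch; flags mirror the workaround in this thread)
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--lowvram --skip-torch-cuda-test --no-half
call webui.bat
```

Note that running fully on CPU this way will be very slow; it only confirms the install works until the GPU/CUDA issue is sorted out.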
Is there an existing issue for this?
What happened?
I've already attempted solutions like reinstalling Python 3.10.6, deleting the venv, doing both at the same time, and manually installing torchvision via a .whl file; none of those work. webui.bat still gives this error.
Steps to reproduce the problem
What should have happened?
Stable Diffusion completes the installation and starts as normal.
Version or Commit where the problem happens
1.4.0
What Python version are you running on ?
Python 3.10.x
What platforms do you use to access the UI ?
Windows
What device are you running WebUI on?
Nvidia GPUs (GTX 16 below)
Cross attention optimization
None
What browsers do you use to access the UI ?
Mozilla Firefox
Command Line Arguments
List of extensions
No
Console logs