AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0
142.76k stars · 26.92k forks

[Bug]: OutOfMemoryError: CUDA out of memory. #8733

Closed · lozn00 closed this 1 year ago

lozn00 commented 1 year ago

Is there an existing issue for this?

What happened?

The error occurs the second time the generation function is used.

Steps to reproduce the problem

  1. Click to run the server
  2. Upload a picture
  3. Use txt2img
  4. Try again, or any other operation

What should have happened?

I checked that my computer's video memory is sufficient and not exhausted. As noted above, about 3 GB was already in use, and the error did not appear until the second run. Did I fail to release video memory in time? Is there a video-memory leak? How do you release video memory?
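For what it's worth, cached (but unused) VRAM can be returned to the driver between generations with PyTorch's standard allocator calls. This is only a sketch: `release_cached_vram` is a hypothetical helper, not part of the webui codebase, and it cannot fix a true leak where live tensors are still referenced.

```python
import gc
import torch

def release_cached_vram() -> int:
    """Drop unreachable tensors, release PyTorch's cached CUDA blocks,
    and return the number of bytes still allocated afterwards."""
    gc.collect()                      # free Python-side references first
    if torch.cuda.is_available():
        torch.cuda.empty_cache()      # give cached-but-unused blocks back
    return torch.cuda.memory_allocated()
```

Note that `empty_cache()` only releases memory PyTorch has cached but is not using; memory held by live tensors (e.g. a model kept on the GPU) is unaffected.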

Commit where the problem happens

latest

What platforms do you use to access the UI ?

Windows

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

None; I just run webui.bat.

List of extensions

Extension | URL | Version | Updated
-- | -- | -- | --
sd-webui-controlnet | https://github.com/Mikubill/sd-webui-controlnet.git | de8fdeff (Sat Mar 18 16:12:14 2023) | unknown
stable-diffusion-webui-chinese | https://github.com/VinsonLaro/stable-diffusion-webui-chinese | 71b7f512 (Sun Mar 5 18:11:02 2023) | unknown
LDSR | built-in |   |  
LoRA (low-rank fine-tuning) | built-in |   |  
ScuNET | built-in |   |  
SwinIR | built-in |   |  
prompt-bracket-checker | built-in |   |  

### Console logs

```Shell
File "D:\dev\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs) File "D:\dev\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps return self.inner_model.apply_model(*args, **kwargs) File "D:\dev\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs)) File "D:\dev\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__ return self.__orig_func(*args, **kwargs) File "D:\dev\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model x_recon = self.model(x_noisy, t, **cond) File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File 
"D:\dev\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1329, in forward out = self.diffusion_model(x, t, context=cc) File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "D:\dev\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 776, in forward h = module(h, emb, context) File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "D:\dev\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward x = layer(x, context) File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "D:\dev\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 324, in forward x = block(x, context=context[i]) File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "D:\dev\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 259, in forward return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint) File "D:\dev\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 114, in checkpoint return CheckpointFunction.apply(func, len(inputs), *args) File "D:\dev\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 129, in forward output_tensors = ctx.run_function(*ctx.input_tensors) File "D:\dev\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 262, in 
_forward x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "D:\dev\stable-diffusion-webui\modules\sd_hijack_optimizations.py", line 121, in split_cross_attention_forward raise RuntimeError(f'Not enough memory, use lower resolution (max approx. {max_res}x{max_res}). ' RuntimeError: Not enough memory, use lower resolution (max approx. 320x320). Need: 0.0GB free, Have:0.0GB free Error completing request Arguments: ('task(cee3o6nwfpgidat)', 0, 'girl fuck boy', 'broke a finger,ugly,duplicate,morbid,mutilated,tranny, trans, trannsexual,hermaphrodite,extra fingers,fused fingers,too many fingers,long neck,mutated hands,poorly drawn hands,poorly drawn face,mutation,deformed,blurry,bad anatomy,bad proportions,malformed limbs,extra limbs,cloned face,disfigured,more than 2 nipples,gross proportions,missing arms,missing legs,extra arms,extra legs,artist name,jpeg artifacts', [], , None, None, None, None, None, None, 20, 0, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.75, -1.0, -1.0, 0, 0, 0, False, 512, 512, 0, 0, 32, 0, '', '', '', [], 0, , '
    \n
  • CFG Scale should be 2 or lower.
  • \n
\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 1, 'None', '

Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8

', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '

Will upscale the image by the selected scale factor; use width and height sliders to set tile size

', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, 0, None, False, 50) {} Traceback (most recent call last): File "D:\dev\stable-diffusion-webui\modules\call_queue.py", line 56, in f res = list(func(*args, **kwargs)) File "D:\dev\stable-diffusion-webui\modules\call_queue.py", line 37, in f res = func(*args, **kwargs) File "D:\dev\stable-diffusion-webui\modules\img2img.py", line 171, in img2img processed = process_images(p) File "D:\dev\stable-diffusion-webui\modules\processing.py", line 486, in process_images res = process_images_inner(p) File "D:\dev\stable-diffusion-webui\modules\processing.py", line 577, in process_images_inner p.init(p.all_prompts, p.all_seeds, p.all_subseeds) File "D:\dev\stable-diffusion-webui\modules\processing.py", line 1023, in init self.init_latent = self.sd_model.get_first_stage_encoding(self.sd_model.encode_first_stage(image)) File "D:\dev\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs)) File "D:\dev\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__ return self.__orig_func(*args, **kwargs) File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File "D:\dev\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 830, in encode_first_stage return self.first_stage_model.encode(x) File "D:\dev\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 83, in encode h = self.encoder(x) File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "D:\dev\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 526, in forward h = self.down[i_level].block[i_block](hs[-1], temb) File 
"D:\dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "D:\dev\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 138, in forward h = self.norm2(h) File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward return F.group_norm( File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2528, in group_norm return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled) torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 3.00 GiB total capacity; 2.34 GiB already allocated; 0 bytes free; 2.47 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF Error completing request Arguments: ('task(l8pvkchsw1iu1xd)', 0, 'girl fuck boy', 'broke a finger,ugly,duplicate,morbid,mutilated,tranny, trans, trannsexual,hermaphrodite,extra fingers,fused fingers,too many fingers,long neck,mutated hands,poorly drawn hands,poorly drawn face,mutation,deformed,blurry,bad anatomy,bad proportions,malformed limbs,extra limbs,cloned face,disfigured,more than 2 nipples,gross proportions,missing arms,missing legs,extra arms,extra legs,artist name,jpeg artifacts', [], , None, None, None, None, None, None, 20, 0, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.75, -1.0, -1.0, 0, 0, 0, False, 512, 512, 0, 0, 32, 0, '', '', '', [], 0, , '
    \n
  • CFG Scale should be 2 or lower.
  • \n
\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 1, 'None', '

Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8

', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '

Will upscale the image by the selected scale factor; use width and height sliders to set tile size

', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, 0, None, False, 50) {} Traceback (most recent call last): File "D:\dev\stable-diffusion-webui\modules\call_queue.py", line 56, in f res = list(func(*args, **kwargs)) File "D:\dev\stable-diffusion-webui\modules\call_queue.py", line 37, in f res = func(*args, **kwargs) File "D:\dev\stable-diffusion-webui\modules\img2img.py", line 171, in img2img processed = process_images(p) File "D:\dev\stable-diffusion-webui\modules\processing.py", line 486, in process_images res = process_images_inner(p) File "D:\dev\stable-diffusion-webui\modules\processing.py", line 577, in process_images_inner p.init(p.all_prompts, p.all_seeds, p.all_subseeds) File "D:\dev\stable-diffusion-webui\modules\processing.py", line 1023, in init self.init_latent = self.sd_model.get_first_stage_encoding(self.sd_model.encode_first_stage(image)) File "D:\dev\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs)) File "D:\dev\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__ return self.__orig_func(*args, **kwargs) File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File "D:\dev\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 830, in encode_first_stage return self.first_stage_model.encode(x) File "D:\dev\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 83, in encode h = self.encoder(x) File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "D:\dev\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 526, in forward h = self.down[i_level].block[i_block](hs[-1], temb) File 
"D:\dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "D:\dev\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 138, in forward h = self.norm2(h) File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward return F.group_norm( File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2528, in group_norm return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled) torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 3.00 GiB total capacity; 2.34 GiB already allocated; 0 bytes free; 2.47 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. 
See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF Traceback (most recent call last): File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 337, in run_predict output = await app.get_blocks().process_api( File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1013, in process_api inputs = self.preprocess_data(fn_index, inputs, state) File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 911, in preprocess_data processed_input.append(block.preprocess(inputs[i])) IndexError: list index out of range ERROR: Exception in ASGI application Traceback (most recent call last): File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 429, in run_asgi result = await app( # type: ignore[func-returns-value] File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__ return await self.app(scope, receive, send) File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\fastapi\applications.py", line 273, in __call__ await super().__call__(scope, receive, send) File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\starlette\applications.py", line 122, in __call__ await self.middleware_stack(scope, receive, send) File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__ raise exc File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__ await self.app(scope, receive, _send) File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 24, in __call__ await responder(scope, receive, send) File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 44, in __call__ await self.app(scope, receive, self.send_with_gzip) File 
"D:\dev\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__ raise exc File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__ await self.app(scope, receive, sender) File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__ raise e File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__ await self.app(scope, receive, send) File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 718, in __call__ await route.handle(scope, receive, send) File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 276, in handle await self.app(scope, receive, send) File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 66, in app response = await func(request) File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 237, in app raw_response = await run_endpoint_function( File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 163, in run_endpoint_function return await dependant.call(**values) File "D:\dev\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 402, in predict if app.get_blocks().dependencies[fn_index_inferred]["cancels"]: IndexError: list index out of range
```

### Additional information

No
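The log above suggests setting `max_split_size_mb` via `PYTORCH_CUDA_ALLOC_CONF` to reduce allocator fragmentation. On Windows this environment variable can be set in `webui-user.bat` before launch; a sketch, where the value 128 is an illustrative starting point rather than a webui default:

```shell
:: webui-user.bat (sketch) -- set the PyTorch allocator hint before launch.
:: max_split_size_mb:128 is an assumed value to try; tune for your GPU.
set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
call webui.bat
```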
lozn00 commented 1 year ago

--------[ Master Lu (鲁大师) ]----------------------------------------------------------------------------------

Software: Master Lu 6.1022.3375.630  Time: 2023-03-19 22:02:18  Website: http://www.ludashi.com

--------[ Overview ]----------------------------------------------------------------------------------

Computer model: HUANANZHI desktop  OS: Windows 10 Pro, stripped-down edition, 64-bit (Version 21H2 / DirectX 12)

Processor: Intel Xeon E5-2660 v2 @ 2.20GHz  Motherboard: HUANANZHI X79 (INTEL Xeon E5/Core i7 DMI2 - C600/C200 chipset (Q67 chipset))  Graphics card: NVIDIA GeForce GTX 1060 3GB (3 GB / NVIDIA)  Memory: 64 GB (Samsung DDR3L 1333MHz 16GB x 2 / Samsung DDR3L 1600MHz 16GB x 2)  Primary drive: Samsung SSD 870 QVO 1TB (1 TB / SSD)  Monitor: KIG2380 K24DJ (23.8 in)  Sound card: GeneralPlus USB Audio Device  Network card: Realtek RTL8168/8111/8112 Gigabit Ethernet Controller

--------[ Motherboard ]----------------------------------------------------------------------------------

Motherboard model: HUANANZHI X79 (INTEL Xeon E5/Core i7 DMI2 - C600/C200 chipset)  Chipset: Q67  Serial number: HNZ-202111152021  Board version: V3.3  BIOS: AMI Inc. 4.6.5 / BIOS release date: 11/15/2021  BIOS size: 6144 KB

Onboard devices: video device (enabled)

--------[ Processor ]----------------------------------------------------------------------------------

Processor: Intel Xeon E5-2660 v2 @ 2.20GHz  Speed: 2.20 GHz  Cores/threads: 10 cores / 20 threads  Codename: Ivy Bridge E  Process: 22 nm  Socket: Socket R (LGA 2011)  L1 data cache: 10 x 32 KB, 8-way, 64-byte lines  L1 instruction cache: 10 x 32 KB, 8-way, 64-byte lines  L2 cache: 10 x 256 KB, 8-way, 64-byte lines  L3 cache: 25 MB, 20-way, 64-byte lines  Features: MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, HTT, EM64T, EIST, Turbo Boost

--------[ Hard disks ]----------------------------------------------------------------------------------

Product: Samsung SSD 870 QVO 1TB (SSD)  Size: 1 TB  Firmware: SVQ02B6Q  Interface: SATA III  Transfer rate: 600 MB/s  Features: S.M.A.R.T, 48-bit LBA, NCQ  Usage: 328 power-on cycles, 1307 hours total

Product: WDC WD10EZEX-08WN4A0  Size: 1 TB  Firmware: 05.00H.4  Cache: 64 MB  Interface: SATA III  Transfer rate: 600.00 MB/s  Features: S.M.A.R.T, APM, 48-bit LBA, NCQ  Usage: 80 power-on cycles, 337 hours total  Spindle speed: 7200 RPM

Product: WDC WD5000AAKX-08U6AA0 (Blue)  Size: 500 GB  Firmware: 05.0AG.3  Cache: 16 MB  Interface: SATA III  Transfer rate: 600.00 MB/s  Features: S.M.A.R.T, 48-bit LBA, NCQ  Usage: 175 power-on cycles, 724 hours total  Spindle speed: 7200 RPM

--------[ Memory ]----------------------------------------------------------------------------------

Node0_Dimm0: Samsung DDR3L 1333MHz 16GB  Manufactured: week 27, 2012  Model: M393B2G70BH0-YH9  Serial: 83D520D5  Vendor: SAMSUNG  Module voltage: SSTL 1.35V

Node0_Dimm2: Samsung DDR3L 1600MHz 16GB  Manufactured: week 13, 2015  Model: M393B2G70DB0-YK0  Serial: 71D41F51  Vendor: SAMSUNG  Module voltage: SSTL 1.35V

Node0_Dimm4: Samsung DDR3L 1333MHz 16GB  Manufactured: week 27, 2012  Model: M393B2G70BH0-YH9  Serial: 83D520DE  Vendor: SAMSUNG  Module voltage: SSTL 1.35V

Node0_Dimm6: Samsung DDR3L 1600MHz 16GB  Manufactured: week 13, 2015  Model: M393B2G70DB0-YK0  Serial: 71D41ED6  Vendor: SAMSUNG  Module voltage: SSTL 1.35V

--------[ Graphics card ]----------------------------------------------------------------------------------

Primary graphics card: NVIDIA GeForce GTX 1060 3GB  VRAM: 3 GB  Clocks: core 1518MHz / memory 2002MHz  Card vendor: NVIDIA  Chip vendor: Nvidia  BIOS version: 86.04.54.00.55  Driver version: 31.0.15.1694  Driver date: 2022-07-21

--------[ Monitor ]----------------------------------------------------------------------------------

Product: KIG2380 K24DJ  Firmware date: week 32, 2021  Screen size: 23.8 in (527 mm x 296 mm)  Resolution: 1920 x 1080, 32-bit true color  Gamma: 2.20  Power management: Active-Off

--------[ Other devices ]----------------------------------------------------------------------------------

Network card: Realtek PCIe GbE Family Controller

Sound card: GeneralPlus USB Audio Device

Sound card: Intel 6 Series Chipset HD Audio Controller

Sound card: NVIDIA High Definition Audio controller

Keyboard: HID standard keyboard (x2)  Mouse: HID-compliant mouse (x3)

--------[ Sensors ]----------------------------------------------------------------------------------

CPU temperature: 26℃  CPU core: 37℃  CPU package: 37℃  GPU: 31℃  HDD: 40℃

lozn00 commented 1 year ago

NVIDIA GeForce GTX 1060 3GB ( 3 GB / NVIDIA )

vladmandic commented 1 year ago

This is better suited for Discussions than Issues. You have very low VRAM (3 GB) and aren't using any optimizations? No cross-attention optimization either? There's no mention of the torch version, and the issue text is amazingly bad: 99% of it is blank or irrelevant.
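To supply the missing details (torch version, GPU, free VRAM), something like the following can be run inside the webui's venv. It is only a sketch using standard PyTorch calls, not a diagnostic the webui ships with:

```python
import torch

# Report the environment details requested above.
print("torch:", torch.__version__)
print("cuda available:", torch.cuda.is_available())
if torch.cuda.is_available():
    free_b, total_b = torch.cuda.mem_get_info()   # bytes free / total on GPU 0
    print("GPU:", torch.cuda.get_device_name(0))
    print(f"free: {free_b / 2**30:.2f} GiB / total: {total_b / 2**30:.2f} GiB")
```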

drax-xard commented 1 year ago

Have you tried any of these options: Optimizations

TernaryM01 commented 1 year ago

This does not belong in GitHub Issues. Just learn some basics of how to use the WebUI properly, in particular flags such as --lowvram and --xformers, because your GPU's VRAM is very low. Also, consider buying a better GPU.

elen07zz commented 1 year ago

Try generating at 512x512 with these parameters: --lowvram --xformers --always-batch-cond-uncond
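On Windows those flags would typically go into `webui-user.bat` rather than being typed each launch; a sketch, assuming the stock launcher reads `COMMANDLINE_ARGS` (which current versions do):

```shell
:: webui-user.bat (sketch) -- low-VRAM launch flags for a 3 GB card
set COMMANDLINE_ARGS=--lowvram --xformers --always-batch-cond-uncond
call webui.bat
```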

lozn00 commented 1 year ago

Thank you @drax-xard @elen07zz @TernaryM01. I'll try these methods at home tonight.

Akimotorakiyu commented 1 year ago

Report: RTX 3080 (10 GB). OutOfMemoryError: CUDA out of memory. Tried to allocate 31.29 GiB (GPU 0; 9.77 GiB total capacity; 4.33 GiB already allocated; 3.99 GiB free; 4.45 GiB reserved in total by PyTorch). If reserved memory is >> allocated memory, try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF.