continue-revolution / sd-webui-animatediff

AnimateDiff for AUTOMATIC1111 Stable Diffusion WebUI

[Bug]: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! #344

Closed Luckyjjjjjjj closed 7 months ago

Luckyjjjjjjj commented 7 months ago

Is there an existing issue for this?

Have you read FAQ on README?

What happened?

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument weight in method wrapper_CUDA___slow_conv2d_forward)

Steps to reproduce the problem

I have updated the webui to the latest version and updated AnimateDiff to the latest as well, but I still get the error.

What should have happened?

Generation should proceed normally.

Commit where the problem happens

webui: version 1.6.1 (4afaaf8) extension: sd-webui-animatediff

What browsers do you use to access the UI ?

Microsoft Edge

Command Line Arguments

--xformers / --opt-sdp-attention
I tried adding these flags, toggled "Optimize attention layers with sdp (torch >= 2.0.0 required)" both on and off, and also selected the scaled dot product (SDP) cross-attention optimization. Every combination still fails with RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0!
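
For context, `--opt-sdp-attention` switches the webui's cross-attention to PyTorch 2.0's built-in fused kernel. A minimal sketch of the underlying call (shapes are made up for illustration; requires torch >= 2.0 and a CUDA device):

```python
import torch
import torch.nn.functional as F

# Fused scaled-dot-product attention, available since torch 2.0.
# Shapes: (batch, heads, tokens, head_dim), chosen arbitrarily here.
q = torch.randn(1, 8, 77, 64, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)
out = F.scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([1, 8, 77, 64])
```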

Console logs

我不停的尝试解决,最后控制台在启动的时候就开始报错了,原理的错误日志我已经找不到了

Additional information

No response

continue-revolution commented 7 months ago

Search for `wrapper_CUDA___slow_conv2d_forward` and you will find a bunch of related issues.
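
The error typically means that some module's weights are still on the CPU while the input tensor is already on cuda:0. A minimal standalone reproduction and the usual fix; this is an illustration, not the extension's actual code path:

```python
import torch
import torch.nn as nn

# Weights of a freshly constructed module live on the CPU...
conv = nn.Conv2d(3, 8, kernel_size=3)
# ...while the input is created on the GPU (requires a CUDA device).
x = torch.randn(1, 3, 64, 64, device="cuda")

try:
    conv(x)  # CPU weight vs. CUDA input
except RuntimeError as e:
    print(e)  # roughly: "Expected all tensors to be on the same device ..."

conv.to("cuda")  # the usual fix: move the module to the input's device
y = conv(x)      # now succeeds
```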

Luckyjjjjjjj commented 7 months ago

OK, thank you very much. I probably used the wrong search keywords before, which is why I didn't find the related issues. But one more question: on my 4090, running AnimateDiff txt2img with just a 13-second reference video and a single openpose ControlNet unit takes a very long time. The console spent tens of minutes on preprocessing alone, and the whole time my CPU was running at full load; the GPU only started working after preprocessing finished. I'm pasting part of the console output below; I'd be grateful for any advice.

To create a public link, set share=True in launch().
Startup time: 13.8s (prepare environment: 4.0s, import torch: 2.8s, import gradio: 0.9s, setup paths: 0.5s, initialize shared: 0.2s, other imports: 0.5s, load scripts: 1.5s, create ui: 0.7s, gradio launch: 2.3s, add APIs: 0.2s).
2023-11-26 12:29:40,649 - AnimateDiff - INFO - AnimateDiff process start.
2023-11-26 12:29:40,649 - AnimateDiff - INFO - Loading motion module mm_sd_v15_v2.ckpt from E:\sd-webui-aki-v4.4\extensions\sd-webui-animatediff\model\mm_sd_v15_v2.ckpt
2023-11-26 12:29:41,247 - AnimateDiff - INFO - Guessed mm_sd_v15_v2.ckpt architecture: MotionModuleType.AnimateDiffV2
2023-11-26 12:29:42,809 - AnimateDiff - WARNING - Missing keys
2023-11-26 12:29:43,217 - AnimateDiff - INFO - Injecting motion module mm_sd_v15_v2.ckpt into SD1.5 UNet middle block.
2023-11-26 12:29:43,218 - AnimateDiff - INFO - Injecting motion module mm_sd_v15_v2.ckpt into SD1.5 UNet input blocks.
2023-11-26 12:29:43,218 - AnimateDiff - INFO - Injecting motion module mm_sd_v15_v2.ckpt into SD1.5 UNet output blocks.
2023-11-26 12:29:43,218 - AnimateDiff - INFO - Setting DDIM alpha.
2023-11-26 12:29:43,232 - AnimateDiff - INFO - Injection finished.
2023-11-26 12:29:43,232 - AnimateDiff - INFO - Hacking loral to support motion lora
2023-11-26 12:29:43,232 - AnimateDiff - INFO - Hacking CFGDenoiser forward function.
2023-11-26 12:29:43,232 - AnimateDiff - INFO - Hacking ControlNet.
2023-11-26 12:29:47,575 - ControlNet - INFO - Loading model: control_v11p_sd15_openpose [cab727d4]
2023-11-26 12:29:47,948 - ControlNet - INFO - Loaded state_dict from [E:\sd-webui-aki-v4.4\models\ControlNet\control_v11p_sd15_openpose.pth]
2023-11-26 12:29:47,949 - ControlNet - INFO - controlnet_default_config
2023-11-26 12:29:49,097 - ControlNet - INFO - ControlNet model control_v11p_sd15_openpose [cab727d4] loaded.
2023-11-26 12:29:53,823 - ControlNet - INFO - Loading preprocessor: dw_openpose_full
2023-11-26 12:29:53,823 - ControlNet - INFO - preprocessor resolution = 512
2023-11-26 12:42:35,588 - ControlNet - INFO - ControlNet Hooked - Time = 768.1564581394196

continue-revolution commented 7 months ago

Running on the CPU will be very slow; I suggest processing CN on the GPU. Also, CN is sequential; it does not batch in parallel.
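
If you want to verify where a module actually runs, here is a hedged sketch; pass it any torch.nn.Module handle you can reach (the helper name is made up, not a real sd-webui-controlnet API):

```python
import torch

def report_device(model: torch.nn.Module, name: str = "model") -> None:
    """Print the set of devices a module's parameters live on."""
    devices = {p.device for p in model.parameters()}
    print(f"{name} parameters live on: {devices}")
    # {device(type='cpu')} on a CUDA machine means every forward pass
    # of this module (e.g. a preprocessor) runs on the CPU.
```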

Luckyjjjjjjj commented 7 months ago

One more thing to add: I have 64 GB of RAM, and the job I just described used up nearly 40 GB of it.

Luckyjjjjjjj commented 7 months ago

Hello, sorry to bother you again. Two questions. First, how do I configure CN to be processed on the GPU, as you suggested? If it is too much trouble, just give me a link and I will study it myself. Second, I am only making a 13-second video with one openpose unit plus hires fix, and it says I am out of memory; is my 64 GB really not enough? I am attaching my console output below. Thank you, this is important to me.

Error completing request
Arguments: ('task(4j58tub1hfcf2of)', '(masterpiece, best quality, detailed),1girl,solo,indoors,window,blue sky,hallway,souryuu asuka langley,interface headset,suspender skirt,red ribbon,white shirt,school uniform,socks,shoes,,', 'owres,bad anatomy,bad hand,paintings,sketches,(worst quality:2),(low quality:2),(normal quality:2),lowres,((monochrome)),((grayscale)),skin spots,acnes,skin blemishes,age spot,glans,extra fingers,fewer fingers,((watermark:2)),(white letters:1),(multi nipples),bad anatomy,bad hands,text,error,missing fingers,missing arms,missing legs,extra digit,fewer digits,cropped,worst quality,jpeg artifacts,signature,watermark,username,bad feet,{Multiple people},blurry,poorly drawn hands,poorly drawn face,mutation,deformed,extra limbs,extra arms,extra legs,malformed limbs,fused fingers,too many fingers,long neck,cross-eyed,mutated hands,polar lowres,bad body,bad proportions,gross proportions,wrong feet bottom render,abdominal stretch,briefs,knickers,kecks,thong,{{fused fingers}},{{bad body}},bad proportion body to legs,wrong toes,extra toes,missing toes,weird toes,2 body,2 pussy,2 upper,2 lower,2 head,3hand,3 feet,extra long leg,super long leg,mirrored image,mirrored noise,', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 768, 512, True, 0.35, 2, 'R-ESRGAN 4x+ Anime6B', 10, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x0000013B070EC1F0>, 0, False, '', 0.8, 2545421382, False, -1, 0, 0, 0, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 3072, 192, True, True, True, False, <scripts.animatediff_ui.AnimateDiffProcess object at 0x0000013BD49C5900>, UiControlNetUnit(enabled=True, module='dw_openpose_full', model='control_v11p_sd15_openpose [cab727d4]', weight=1, image=None, resize_mode='Crop and Resize', low_vram=True, processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=True, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True),
'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', True, 0, 'values', '0,0.25,0.5,0.75,1', 'Block ID', 'IN05-OUT05', 'none', '', '0.5,1', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 1.0, 'black', '20', False, 'ATTNDEEPON:IN05-OUT05:attn:1\n\nATTNDEEPOFF:IN05-OUT05:attn:0\n\nPROJDEEPOFF:IN05-OUT05:proj:0\n\nXYZ:::1', False, False, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50, 'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', True, 0, 'values', '0,0.25,0.5,0.75,1', 'Block ID', 'IN05-OUT05', 'none', '', '0.5,1', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 1.0, 'black', '20', False, 'ATTNDEEPON:IN05-OUT05:attn:1\n\nATTNDEEPOFF:IN05-OUT05:attn:0\n\nPROJDEEPOFF:IN05-OUT05:proj:0\n\nXYZ:::1', False, False) {}
Traceback (most recent call last):
  File "E:\sd-webui-aki-v4.4\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "E:\sd-webui-aki-v4.4\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\modules\txt2img.py", line 55, in txt2img
    processed = processing.process_images(p)
  File "E:\sd-webui-aki-v4.4\modules\processing.py", line 734, in process_images
    res = process_images_inner(p)
  File "E:\sd-webui-aki-v4.4\extensions\sd-webui-animatediff\scripts\animatediff_cn.py", line 118, in hacked_processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "E:\sd-webui-aki-v4.4\modules\processing.py", line 869, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "E:\sd-webui-aki-v4.4\extensions\sd-webui-controlnet\scripts\hook.py", line 420, in process_sample
    return process.sample_before_CN_hack(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\modules\processing.py", line 1161, in sample
    return self.sample_hr_pass(samples, decoded_samples, seeds, subseeds, subseed_strength, prompts)
  File "E:\sd-webui-aki-v4.4\modules\processing.py", line 1247, in sample_hr_pass
    samples = self.sampler.sample_img2img(self, samples, noise, self.hr_c, self.hr_uc, steps=self.hr_second_pass_steps or self.steps, image_conditioning=image_conditioning)
  File "E:\sd-webui-aki-v4.4\modules\sd_samplers_kdiffusion.py", line 188, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "E:\sd-webui-aki-v4.4\modules\sd_samplers_common.py", line 261, in launch_sampling
    return func()
  File "E:\sd-webui-aki-v4.4\modules\sd_samplers_kdiffusion.py", line 188, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "E:\sd-webui-aki-v4.4\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "E:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\extensions\sd-webui-animatediff\scripts\animatediff_infv2v.py", line 250, in mm_cfg_forward
    x_out = mm_sd_forward(self, x_in, sigma_in, cond_in, image_cond_in, make_condition_dict) # hook
  File "E:\sd-webui-aki-v4.4\extensions\sd-webui-animatediff\scripts\animatediff_infv2v.py", line 160, in mm_sd_forward
    out = self.inner_model(
  File "E:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "E:\sd-webui-aki-v4.4\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "E:\sd-webui-aki-v4.4\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "E:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "E:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\extensions\sd-webui-controlnet\scripts\hook.py", line 827, in forward_webui
    raise e
  File "E:\sd-webui-aki-v4.4\extensions\sd-webui-controlnet\scripts\hook.py", line 824, in forward_webui
    return forward(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\extensions\sd-webui-controlnet\scripts\hook.py", line 561, in forward
    control = param.control_model(x=x_in, hint=hint, timesteps=timesteps, context=context, y=y)
  File "E:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\extensions\sd-webui-controlnet\scripts\cldm.py", line 31, in forward
    return self.control_model(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\extensions\sd-webui-controlnet\scripts\cldm.py", line 314, in forward
    h = module(h, emb, context)
  File "E:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 100, in forward
    x = layer(x, context)
  File "E:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\repositories\generative-models\sgm\modules\attention.py", line 627, in forward
    x = block(x, context=context[i])
  File "E:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\repositories\generative-models\sgm\modules\attention.py", line 459, in forward
    return checkpoint(
  File "E:\sd-webui-aki-v4.4\repositories\generative-models\sgm\modules\diffusionmodules\util.py", line 167, in checkpoint
    return func(*inputs)
  File "E:\sd-webui-aki-v4.4\repositories\generative-models\sgm\modules\attention.py", line 483, in _forward
    x = self.ff(self.norm3(x)) + x
  File "E:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\repositories\generative-models\sgm\modules\attention.py", line 108, in forward
    return self.net(x)
  File "E:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\container.py", line 215, in forward
    input = module(input)
  File "E:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\sd-webui-aki-v4.4\repositories\generative-models\sgm\modules\attention.py", line 89, in forward
    return x * F.gelu(gate)
torch.cuda.OutOfMemoryError: Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 18.45 GiB
Requested : 1.88 GiB
Device limit : 23.99 GiB
Free (according to CUDA): 0 bytes
PyTorch limit (set by user-supplied memory fraction) : 17179869184.00 GiB
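
The numbers in the OOM message add up: 18.45 GiB already allocated plus a 1.88 GiB request cannot fit under the 23.99 GiB device limit once CUDA's own overhead and fragmentation are counted (CUDA reports 0 bytes free). A small sketch for inspecting the same figures yourself:

```python
import torch

# Bytes as reported by the CUDA driver for device 0.
free, total = torch.cuda.mem_get_info(0)
print(f"free:      {free / 2**30:6.2f} GiB")
print(f"total:     {total / 2**30:6.2f} GiB")
# Bytes tracked by PyTorch's caching allocator.
print(f"allocated: {torch.cuda.memory_allocated(0) / 2**30:6.2f} GiB")
print(f"reserved:  {torch.cuda.memory_reserved(0) / 2**30:6.2f} GiB")
```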

continue-revolution commented 7 months ago

SD1.5 + FP8

I don't know how you got CN to run on the CPU; by default it runs on the GPU. I suggest checking Settings/ControlNet and your own UI panel.

Luckyjjjjjjj commented 7 months ago

I checked my Settings/ControlNet and there is no setting at all about running CN on the CPU, and I have never changed anything related in my UI panel. But the CPU issue is no longer the point; I can live with it being slow. The point now is that I cannot finish the run at all: it reports an out-of-memory error. My RAM is 2x32 GB at 6000 MHz; is that really not enough for this task? I am not someone who likes changing settings, so most of them are untouched, at the defaults and set up exactly as in your instructions. This leaves me stuck, since I cannot complete the whole pipeline. Could you please take a look at the console output in my previous reply?

continue-revolution commented 7 months ago

Again, SD1.5 + FP8. That is as far as I can help here.

As for getting CN to run on the GPU: I normally launch with only the two flags --opt-sdp-attention and --no-half-vae, so I suggest checking whether you have any other arguments. If nothing else works, try a fresh reinstall.

Your terminal output clearly shows SDXL running. The SDXL motion module is not yet mature, so I don't recommend using SDXL.
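
On the "SD1.5 + FP8" suggestion above: storing weights in fewer bytes per element directly shrinks the VRAM a checkpoint occupies, which is the point of the FP8 option. A rough illustration (the float8 dtype needs torch >= 2.1):

```python
import torch

w = torch.randn(1024, 1024)  # fp32 reference tensor, 4 bytes per element
for dtype in (torch.float32, torch.float16, torch.float8_e4m3fn):
    t = w.to(dtype)
    mib = t.numel() * t.element_size() / 2**20
    print(f"{str(dtype):24s} -> {mib:.1f} MiB")  # 4.0 / 2.0 / 1.0 MiB
```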

Luckyjjjjjjj commented 7 months ago

OK, thank you. But to be clear, I am not using an SDXL model. I am using AnyLoraCleanLinearMix-ClearVAE; you can look it up on Civitai, and as far as I can tell it is an SD1.5 model. I am just passing this information along in case something is going wrong somewhere in the extension.

continue-revolution commented 7 months ago

sgm is a module that only runs for SDXL. I don't know why your webui ends up in the SDXL code path either, but that is determined by your choice of model, not by this extension.
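
One way to check what a checkpoint really is, independent of what the model page claims: SDXL checkpoints carry `conditioner.embedders.*` keys (from the sgm codebase), while SD1.5 checkpoints use `cond_stage_model.transformer.*`. A sketch for a .safetensors file (for a .ckpt, load the state dict with torch.load instead); the filename below is illustrative:

```python
from safetensors import safe_open

def guess_arch(path: str) -> str:
    """Guess SD1.5 vs SDXL from characteristic state-dict key prefixes."""
    with safe_open(path, framework="pt") as f:
        keys = list(f.keys())
    if any(k.startswith("conditioner.embedders") for k in keys):
        return "SDXL"
    if any(k.startswith("cond_stage_model.transformer") for k in keys):
        return "SD1.5"
    return "unknown"

print(guess_arch("AnyLoraCleanLinearMix-ClearVAE.safetensors"))
```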

Luckyjjjjjjj commented 7 months ago

OK, thank you very much anyway. I will try some other models.

Luckyjjjjjjj commented 7 months ago

Hello, I wanted to send you a private message but couldn't figure out how, so let me ask here once more. Without the hires-fix pass, most of the hands in the generated images are broken, and upscaling afterwards doesn't help much; but if I run AnimateDiff txt2img with a video and hires fix enabled, it errors out. How can I get this working?