ljleb / prompt-fusion-extension

auto1111 webui extension for all sorts of prompt interpolations!
MIT License

What's wrong with this? #80

Closed erbierbier closed 7 months ago

erbierbier commented 7 months ago

The extension shows `module 'torch' has no attribute 'concatenate'`.

```
Error completing request
Arguments: ('task(tsv1bkdypotxc2o)', '{{masterpiece}}, {{best quality}}, illustration, 4K, painting, girl,fullbody, breasts, animal_ears, lion_ears, solo, large_breasts, navel, black_choker, open_mouth, looking_at_viewer, ((tuijinzhiwang)), blonde hair, T-shirt , short, ((messy hair)), medium hair, sit', 'lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry,one hand with more than 5 fingers,one hand with less than 5 fingers,morbid,multiple breasts,(mutated hands and fingers:1.5),(long body:1.3),(mutation,poorly drawn:1.2),liquid body,liquid tongue,disfigured,mutated,anatomical nonsense,long neck,bad shadow,fused breasts,bad breasts,poorly drawn breasts,huge breasts,missing breasts,(ugly:1.5),', [], 35, 'Euler a', 1, 1, 7, 768, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000002A2A79ED780>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, 0.9, 5, '0.0001', False, 'None', '', 0.1, False, UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=64, threshold_b=64, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=64, threshold_b=64, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=64, threshold_b=64, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, '', 0.5, True, False, '', 'Lerp', False, False, 1, 0.15, False, 'OUT', ['OUT'], 5, 0, 'Bilinear', False, 'Bilinear', False, 'Lerp', '', '', False, False, None, True, False, False, 'positive', 'comma', 0, '\n\nKeyframe Format:\nSeed | Prompt or just Prompt\n\n', '', 25, True, 5.0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, '\n\nKeyframe Format:\nSeed | Prompt or just Prompt\n\n', '', 25, True, 5.0, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
  File "E:\gong_ju_bao\novelai\stable-diffusion-webui - 副本\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "E:\gong_ju_bao\novelai\stable-diffusion-webui - 副本\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "E:\gong_ju_bao\novelai\stable-diffusion-webui - 副本\modules\txt2img.py", line 55, in txt2img
    processed = processing.process_images(p)
  File "E:\gong_ju_bao\novelai\stable-diffusion-webui - 副本\modules\processing.py", line 734, in process_images
    res = process_images_inner(p)
  File "E:\gong_ju_bao\novelai\stable-diffusion-webui - 副本\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "E:\gong_ju_bao\novelai\stable-diffusion-webui - 副本\modules\processing.py", line 857, in process_images_inner
    p.setup_conds()
  File "E:\gong_ju_bao\novelai\stable-diffusion-webui - 副本\modules\processing.py", line 1308, in setup_conds
    super().setup_conds()
  File "E:\gong_ju_bao\novelai\stable-diffusion-webui - 副本\modules\processing.py", line 469, in setup_conds
    self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, total_steps, [self.cached_uc], self.extra_network_data)
  File "E:\gong_ju_bao\novelai\stable-diffusion-webui - 副本\modules\processing.py", line 455, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps, hires_steps, shared.opts.use_old_scheduling)
  File "E:\gong_ju_bao\novelai\stable-diffusion-webui - 副本\extensions\prompt-fusion-extension\lib_prompt_fusion\hijacker.py", line 12, in wrapper
    return function(*args, **kwargs, original_function=self.__original_functions[attribute])
  File "E:\gong_ju_bao\novelai\stable-diffusion-webui - 副本\extensions\prompt-fusion-extension\scripts\promptlang.py", line 70, in _hijacked_get_learned_conditioning
    schedules = [_sample_tensor_schedules(cond_tensor, real_total_steps, is_hires)
  File "E:\gong_ju_bao\novelai\stable-diffusion-webui - 副本\extensions\prompt-fusion-extension\scripts\promptlang.py", line 70, in <listcomp>
    schedules = [_sample_tensor_schedules(cond_tensor, real_total_steps, is_hires)
  File "E:\gong_ju_bao\novelai\stable-diffusion-webui - 副本\extensions\prompt-fusion-extension\scripts\promptlang.py", line 127, in _sample_tensor_schedules
    schedule_cond = tensor.interpolate(params, origin_cond, empty_cond.get())
  File "E:\gong_ju_bao\novelai\stable-diffusion-webui - 副本\extensions\prompt-fusion-extension\lib_prompt_fusion\interpolation_tensor.py", line 21, in interpolate
    cond = self.interpolate_cond_rec(params, origin_cond, empty_cond)
  File "E:\gong_ju_bao\novelai\stable-diffusion-webui - 副本\extensions\prompt-fusion-extension\lib_prompt_fusion\interpolation_tensor.py", line 28, in interpolate_cond_rec
    return self.get_cond_point(params.step, origin_cond, empty_cond, params.slerp_scale)
  File "E:\gong_ju_bao\novelai\stable-diffusion-webui - 副本\extensions\prompt-fusion-extension\lib_prompt_fusion\interpolation_tensor.py", line 44, in get_cond_point
    res = schedule.cond.extend_like(origin_cond, empty_cond)
  File "E:\gong_ju_bao\novelai\stable-diffusion-webui - 副本\extensions\prompt-fusion-extension\lib_prompt_fusion\interpolation_tensor.py", line 223, in extend_like
    return TensorCondWrapper(torch.concatenate([self.original_cond] + [empty.original_cond] * missing_size))
AttributeError: module 'torch' has no attribute 'concatenate'
```


ljleb commented 7 months ago

I'm not exactly sure why, but it looks like your version of torch does not have `torch.concatenate`. An easy fix would be to use `torch.cat` instead.
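The suggested substitution can be sketched as follows; the tensor shapes here are made up for illustration, but the point is that `torch.cat` is the long-standing spelling and `torch.concatenate` is only a newer NumPy-style alias for it:

```python
import torch

a = torch.zeros(2, 4)
b = torch.ones(3, 4)

# torch.cat has existed since the earliest PyTorch releases;
# torch.concatenate is a newer alias that older builds lack.
merged = torch.cat([a, b], dim=0)
print(merged.shape)  # torch.Size([5, 4])
```

Because the two names take the same arguments in releases that have both, swapping one for the other is behavior-preserving.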

Out of curiosity, which version of torch are you using?

I am currently AFK, so I cannot push a fix immediately. I am open to pull requests, and otherwise I will push a fix later today.

ljleb commented 7 months ago

So to be clear, upon revision: replacing `torch.concatenate` with `torch.cat` might fix this instance of the problem for you, but you are likely to run into the same error with other repositories in your environment. For example, the sd-webui-controlnet extension also makes extensive use of `torch.concatenate`.

Ideally, you should update your version of pytorch to 2.1.0 or higher instead for a more reliable solution.
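You can confirm which torch the webui is actually using, and work around the missing alias until you upgrade, with a minimal sketch like this (the runtime shim is an assumption on my part, a stopgap rather than a substitute for upgrading):

```python
import torch

# Run this with the interpreter inside the webui's venv to see
# which torch build the webui actually imports.
print(torch.__version__)

# Hypothetical compatibility shim: torch.concatenate is just an
# alias of torch.cat in newer releases, so an older build can be
# patched at runtime to accept either name.
if not hasattr(torch, "concatenate"):
    torch.concatenate = torch.cat

x = torch.concatenate([torch.zeros(1, 3), torch.ones(1, 3)])
print(x.shape)  # torch.Size([2, 3])
```

Upgrading remains the better fix, since the shim has to run before any extension touches `torch.concatenate`.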

erbierbier commented 7 months ago

Thanks a lot. I reset the venv folder, then it worked again.

Sent from my iPhone. On December 26, 2023, at 4:13 AM, ljleb @.***> wrote: Closed #80 as not planned.
