s9roll7 / animatediff-cli-prompt-travel

animatediff prompt travel
Apache License 2.0

[error] "gradual_latent_hires_fix_map" cannot be used with "scheduler": "k_dpmpp_sde" #198

Closed dancemanUK closed 9 months ago

dancemanUK commented 9 months ago

13:10:08 INFO diffuser_ver='0.23.0' cli.py:100
         INFO Using generation config: Gacha-231210.json cli.py:313
         INFO is_sdxl=False util.py:562
         INFO is_v2=True util.py:540
         INFO Using base model: runwayml\stable-diffusion-v1-5 cli.py:340
         INFO Will save outputs to ./output\2023-12-10T13-10-08-gacha-nadenadesitai_v10 cli.py:348
         INFO Checking motion module... generate.py:610
         INFO Loading tokenizer... generate.py:634
         INFO Loading text encoder... generate.py:636
13:10:09 INFO Loading VAE... generate.py:638
13:10:10 INFO Loading UNet... generate.py:640
13:10:15 INFO Loaded 453.20928M-parameter motion module unet.py:578
         WARNING gradual_latent_hires_fix enable generate.py:654
         WARNING model_config.scheduler=<DiffusionScheduler.k_dpmpp_sde: 'k_dpmpp_sde'> generate.py:655
         WARNING If you are forced to exit with an error, change to euler_a or lcm generate.py:656
         INFO Using scheduler "k_dpmpp_sde" (DPMSolverSinglestepScheduler) generate.py:660
         INFO Loading weights from G:\EasyPromptAnime\animatediff-cli-prompt-travel\data\models\sd\nadenadesitai_v10.safetensors generate.py:665
13:10:18 INFO Merging weights into UNet... generate.py:682
13:10:19 INFO Creating AnimationPipeline... generate.py:732
         INFO No TI embeddings found ti.py:104
         INFO Sending pipeline to device "cuda" pipeline.py:33
         INFO Selected data types: unet_dtype=torch.float16, tenc_dtype=torch.float16, vae_dtype=torch.bfloat16 device.py:90
         INFO Using channels_last memory format for UNet and VAE device.py:111
13:10:20 INFO Saving prompt config to output directory cli.py:414
         INFO Initialization complete! cli.py:422
         INFO Generating 1 animations cli.py:423
         INFO Running generation 1 of 1 cli.py:433
         INFO Generation seed: 2335520471733975402 cli.py:439
         INFO len( region_condi_list )=1 generate.py:1512
         INFO len( region_list )=1 generate.py:1513
         INFO apply_lcm_lora=False animation.py:2407
         INFO controlnet_for_region=False animation.py:2436
         INFO multi_uncond_mode=False animation.py:2437
         INFO unet_batch_size=1 animation.py:2438
         INFO prompt_encoder.get_condi_size()=2 animation.py:2501

 56% 140/250 [ 0:01:05 < 0:00:52 , 2 it/s ]

Traceback (most recent call last):
  G:\EasyPromptAnime\animatediff-cli-prompt-travel\src\animatediff\cli.py:442 in generate
      output = run_inference(
          pipeline=g_pipeline,
          n_prompt=n_prompt,
          seed=seed,
  G:\EasyPromptAnime\animatediff-cli-prompt-travel\src\animatediff\generate.py:1515 in run_inference
      pipeline_output = pipeline(
          negative_prompt=n_prompt,
          num_inference_steps=steps,
          guidance_scale=guidance_scale,
  G:\EasyPromptAnime\animatediff-cli-prompt-travel\venv\lib\site-packages\torch\utils\_contextlib.py:115 in decorate_context
      return func(*args, **kwargs)
  G:\EasyPromptAnime\animatediff-cli-prompt-travel\src\animatediff\pipelines\animation.py:3356 in __call__
      # compute the previous noisy sample x_t -> x_t-1
      latents = self.scheduler.step(
          model_output=noise_pred,
          timestep=t,
          sample=latents,
  G:\EasyPromptAnime\animatediff-cli-prompt-travel\venv\lib\site-packages\diffusers\schedulers\scheduling_dpmsolver_singlestep.py:847 in step
      prev_sample = self.singlestep_dpm_solver_update(self.model_outputs, sample=self.
  G:\EasyPromptAnime\animatediff-cli-prompt-travel\venv\lib\site-packages\diffusers\schedulers\scheduling_dpmsolver_singlestep.py:771 in singlestep_dpm_solver_update
      return self.singlestep_dpm_solver_second_order_update(model_output_list, sam
  G:\EasyPromptAnime\animatediff-cli-prompt-travel\venv\lib\site-packages\diffusers\schedulers\scheduling_dpmsolver_singlestep.py:580 in singlestep_dpm_solver_second_order_update
      D0, D1 = m1, (1.0 / r0) * (m0 - m1)
RuntimeError: The size of tensor a (96) must match the size of tensor b (48) at non-singleton dimension 4

(venv) G:\EasyPromptAnime\animatediff-cli-prompt-travel>
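
The RuntimeError at the end of the traceback is an elementwise shape mismatch inside the scheduler's second-order update: the two model outputs being combined no longer share the same latent width (96 vs. 48). A minimal sketch that raises the same error; the 5-D latent shapes (batch, channels, frames, height, width) are made up here, and only the 96/48 widths come from the log above:

import torch

# Hypothetical AnimateDiff-style latent shapes; only the widths (96 vs. 48)
# are taken from the error message above.
m0 = torch.randn(1, 4, 16, 96, 96)  # newest model output (after latent upscale)
m1 = torch.randn(1, 4, 16, 48, 48)  # cached model output (before latent upscale)
r0 = 0.5

# Same elementwise combination as scheduling_dpmsolver_singlestep.py:580;
# broadcasting width 96 against width 48 fails at dimension 4.
D0, D1 = m1, (1.0 / r0) * (m0 - m1)
# RuntimeError: The size of tensor a (96) must match the size of tensor b (48)
# at non-singleton dimension 4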

dancemanUK commented 9 months ago

12:58:55 INFO Loaded 453.20928M-parameter motion module unet.py:578
         INFO Using scheduler "euler_a" (EulerAncestralDiscreteScheduler) generate.py:660
         INFO Loading weights from G:\EasyPromptAnime\animatediff-cli-prompt-travel\data\models\sd\nadenadesitai_v10.safetensors generate.py:665
12:58:59 INFO Merging weights into UNet... generate.py:682
12:59:00 INFO Creating AnimationPipeline... generate.py:732
         INFO No TI embeddings found ti.py:104
         INFO Sending pipeline to device "cuda" pipeline.py:33
         INFO Selected data types: unet_dtype=torch.float16, tenc_dtype=torch.float16, vae_dtype=torch.bfloat16 device.py:90
         INFO Using channels_last memory format for UNet and VAE device.py:111
12:59:01 INFO Saving prompt config to output directory cli.py:414
         INFO Initialization complete! cli.py:422
         INFO Generating 1 animations cli.py:423
         INFO Running generation 1 of 1 cli.py:433
         INFO Generation seed: 4638565004203528487 cli.py:439
         INFO len( region_condi_list )=1 generate.py:1512
         INFO len( region_list )=1 generate.py:1513
         INFO apply_lcm_lora=False animation.py:2407
         INFO controlnet_for_region=False animation.py:2436
         INFO multi_uncond_mode=False animation.py:2437
         INFO unet_batch_size=1 animation.py:2438
         INFO prompt_encoder.get_condi_size()=2 animation.py:2501

100% 250/250 [ 0:06:13 < 0:00:00 , 0 it/s ]

13:05:47 INFO Generation complete, saving..

But with "scheduler": "euler_a" it completes fine.
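
If you still want gradual_latent_hires_fix_map, the warning in the log points at switching the "scheduler" field of the generation config (Gacha-231210.json in this run) to euler_a or lcm. A small sketch of that edit, assuming both keys sit at the top level of the prompt-travel JSON config; the path is hypothetical, adjust it to your setup:

import json
from pathlib import Path

# Hypothetical path to the generation config used in the log above.
cfg_path = Path("config/prompts/Gacha-231210.json")
cfg = json.loads(cfg_path.read_text(encoding="utf-8"))

# gradual_latent_hires_fix appears to work only with schedulers such as
# euler_a or lcm, so swap k_dpmpp_sde out before generating again.
if cfg.get("gradual_latent_hires_fix_map") and cfg.get("scheduler") == "k_dpmpp_sde":
    cfg["scheduler"] = "euler_a"  # or "lcm"
    cfg_path.write_text(json.dumps(cfg, indent=2, ensure_ascii=False), encoding="utf-8")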

s9roll7 commented 9 months ago
WARNING gradual_latent_hires_fix enable generate.py:654
WARNING model_config.scheduler=<DiffusionScheduler.k_dpmpp_sde: 'k_dpmpp_sde'> generate.py:655
WARNING If you are forced to exit with an error, change to euler_a or lcm generate.py:656

This is expected behavior.
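
The likely reason: "k_dpmpp_sde" maps to DPMSolverSinglestepScheduler (see the log), which caches model outputs from earlier steps for its second-order update, while gradual_latent_hires_fix resizes the latents partway through sampling. After the resize, the cached output still has the old width (48) while the newest one is 96, hence the mismatch; schedulers such as euler_a or lcm do not combine outputs across steps and tolerate the mid-run resize. A rough sketch of that interaction against the diffusers 0.23 scheduler alone; the step count, shapes, and resize point are arbitrary, and this is not the project's actual gradual-latent code:

import torch
import torch.nn.functional as F
from diffusers import DPMSolverSinglestepScheduler

sched = DPMSolverSinglestepScheduler()  # solver_order=2 by default
sched.set_timesteps(10)

# 5-D AnimateDiff-style latents: (batch, channels, frames, height, width).
latents = torch.randn(1, 4, 16, 48, 48)

for i, t in enumerate(sched.timesteps):
    noise_pred = torch.randn_like(latents)  # stand-in for the UNet prediction
    latents = sched.step(noise_pred, t, latents).prev_sample
    if i == 4:
        # Emulate a gradual-latent upscale partway through sampling.
        latents = F.interpolate(latents, scale_factor=(1, 2, 2), mode="nearest")
        # On the next step the scheduler combines the new 96-wide model output
        # with the 48-wide one it cached here, and should raise the same
        # RuntimeError as in the log above.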