Kosinkadink / ComfyUI-AnimateDiff-Evolved

Improved AnimateDiff for ComfyUI and Advanced Sampling Support
Apache License 2.0

[Rare bug] On second attempt after Queue Prompt, all additional runs have: Error occurred when executing KSampler: 'NoneType' object has no attribute '_parameters' #50

Closed: verosment closed this issue 3 months ago

verosment commented 1 year ago

I've been trying to generate videos, but after 2 successful generations I get the following error in the console:

!!! Exception during processing !!!
Traceback (most recent call last):
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1236, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1206, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 161, in animatediff_sample
    return wrap_function_to_inject_xformers_bug_info(orig_comfy_sample)(model, *args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_utils.py", line 144, in wrapped_function
    return function_to_wrap(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 93, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 741, in sample
    samples = getattr(k_diffusion_sampling, "sample_{}".format(self.sampler))(self.model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 137, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 322, in forward
    out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, cond_concat=cond_concat, model_options=model_options, seed=seed)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\external.py", line 125, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\external.py", line 151, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 310, in apply_model
    out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, cond_concat, model_options=model_options, seed=seed)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 527, in sliding_sampling_function
    cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat, model_options)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 427, in calc_cond_uncond_batch
    output = model_function(input_x, timestep_, **c).chunk(batch_chunks)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 63, in apply_model
    return self.diffusion_model(xc, t, context=context, y=c_adm, control=control, transformer_options=transformer_options).float()
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 633, in forward
    h = forward_timestep_embed(self.middle_block, h, emb, context, transformer_options)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 71, in forward_timestep_embed
    x = layer(x, context)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\motion_module.py", line 398, in forward
    return self.temporal_transformer(input_tensor, encoder_hidden_states, attention_mask)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\motion_module.py", line 461, in forward
    hidden_states = self.norm(hidden_states)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 160, in new_forward
    args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 104, in pre_forward
    args, kwargs = hook.pre_forward(module, *args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 286, in pre_forward
    set_module_tensor_to_device(
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\utils\modeling.py", line 298, in set_module_tensor_to_device
    new_value = value.to(device)
NotImplementedError: Cannot copy out of meta tensor; no data!

This is the first time I've tried AnimateDiff, so I'm not sure if it's something I did wrong on my end or a bigger issue. I'm running this on Windows 11 with a GTX 1660 6GB, 16GB RAM, and a Ryzen 5600X, on the latest ComfyUI. I've also never opened a GitHub issue before, so if there's anything you need please do tell me.

Here's the workflow I'm using (I set a 256x256 latent image size so that I could generate quickly while trying to figure out what was going on): workflow
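For context on the final error above: a meta tensor is a placeholder that carries shape and dtype but no actual storage (accelerate uses these to defer loading weights onto a device), so any attempt to copy one to a real device fails. A minimal sketch of the same failure, independent of ComfyUI:

```python
import torch

# A meta tensor has shape/dtype metadata but no underlying data,
# so materializing it onto a real device has nothing to copy.
t = torch.empty(4, 4, device="meta")
try:
    t.to("cpu")
except NotImplementedError as e:
    print(e)  # "Cannot copy out of meta tensor; no data!"
```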

Kosinkadink commented 1 year ago

Hey, no worries! Did you install AnimateDiff using comfy manager, or a git clone? And can you try turning off/removing comfy manager to see if your issue persists?

verosment commented 1 year ago

Hi, sadly I have already tried every single one of these. I initially installed with comfy manager, then removed comfy manager and reinstalled using git clone, then went to a brand-new fresh installation of ComfyUI, and the issue persisted across all these attempts.

Kosinkadink commented 1 year ago

I will try to replicate your issue. What exactly do you mean by "after 2 generations"? Can you go into detail about when it works vs. when it doesn't?

verosment commented 1 year ago

I can successfully generate 2 animated images before I get the error on the third; the first result usually takes around 230 seconds, but the second only around 50. Here are my launch parameters if it helps: --windows-standalone-build --force-fp32 --preview-method auto --normalvram. I also tried --disable-xformers, but I got the exact same error. I have tried many different launch options, but they didn't seem to do anything for the error, so I just left them. Also, after the error I can still generate normal images without fail.
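For reference, on the Windows portable build these flags are passed on the main.py invocation; judging from the boot log quoted later in this thread, the full command would look something like this (the .bat launcher normally wraps this line):

```sh
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --force-fp32 --preview-method auto --normalvram
```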

Kosinkadink commented 1 year ago

Gotcha, can you try it without any additional arguments, so just --windows-standalone-build?

verosment commented 1 year ago

Same error sadly

Kosinkadink commented 1 year ago

Hmm, someone in the other open issue was able to solve their problem by using a different SD model - I think SD1.5 inpaint models fail in unique ways, since non-inpaint SD1.5 models are what's expected. But assuming that is not the issue, I am going to make a branch that reverts to the code as it was before my changes on Friday - once I make it, would you be down to git checkout that branch so we can see if this is a new or old issue?

verosment commented 1 year ago

Of course, I'd be more than happy to

Kosinkadink commented 1 year ago

I've made the branch - to switch to it, cd into `ComfyUI-AnimateDiff-Evolved`, do a `git pull`, and then `git checkout before-refactor-1`. For future reference, you can switch back to main later using `git checkout main`.
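In shell form, the steps above would be (assuming the repo lives in the default custom_nodes location, as the tracebacks in this thread show):

```sh
cd ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved
git pull                          # fetch the new branch
git checkout before-refactor-1    # switch to the pre-refactor code
# later, to return to the current code:
git checkout main
```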

Kosinkadink commented 1 year ago

Crap, I forgot to push the branch - it's up now, but was not when I posted that comment.

Kosinkadink commented 1 year ago

If you want a simple workflow that will work with that old commit, use this link: https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/tree/before-refactor-1 (it's the readme from that branch)

verosment commented 1 year ago

I think this fixed the issue; I have so far generated 4 short, low-step-count gifs without error. But I have noticed it is much slower than it used to be, which is quite noticeable on a system like mine (speeds have gone from roughly 3s/it on the main branch to 25s/it on this older one).

Kosinkadink commented 1 year ago

Gotcha, so there is a super-rare, hard-to-reproduce bug in the new refactor. That at least helps track down where the problem could lie. This is less a fix than a workaround, since there are other underlying issues in the old code that were the whole reason for my refactor, but things should work roughly the same in terms of the animatediff output - you will need to use Load Checkpoint w/ Noise Select though, because back then I couldn't figure out how to properly go back and forth between the noise schedules.

I am going to spend part of today looking through the code to see what could be wrong, but it's so strange that everything just works for pretty much everyone else.

verosment commented 1 year ago

Thank you. Also, you mentioned ComfyUI portable in #51, so I manually installed it but ran into the exact same issues, which didn't surprise me much considering I always try my best to keep ComfyUI up to date by putting git pull in the start .bat file. But I did notice something this time around: the branch that works for me never goes into lowvram mode, while the main branch crashes after generating one result in lowvram mode.

It will generate one normally, then the second will be in lowvram mode, which will work, but when it tries to generate the third in lowvram mode, that is when the error occurs. I think lowvram mode could be related to the problem somehow.

Here is the first lowvram generation followed by the second lowvram generation, which is the one that errors out:

got prompt
2
3
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (8) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Injecting motion module animatediffMotion_v15V2.ckpt version v2.
loading new
loading in lowvram mode 1761.8534021377563
100%|████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:05<00:00,  5.31s/it]
[AnimateDiffEvo] - INFO - Ejecting motion module animatediffMotion_v15V2.ckpt version v2.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
Prompt executed in 37.97 seconds
got prompt
2
3
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (8) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Injecting motion module animatediffMotion_v15V2.ckpt version v2.
loading new
loading in lowvram mode 1804.9303255081177
  0%|                                                                                            | 0/1 [00:00<?, ?it/s]
[AnimateDiffEvo] - INFO - Ejecting motion module animatediffMotion_v15V2.ckpt version v2.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
!!! Exception during processing !!!
Traceback (most recent call last):
  File "D:\apps\AI\COMFYUI3\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\apps\AI\COMFYUI3\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\apps\AI\COMFYUI3\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\apps\AI\COMFYUI3\ComfyUI\nodes.py", line 1236, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "D:\apps\AI\COMFYUI3\ComfyUI\nodes.py", line 1206, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "D:\apps\AI\COMFYUI3\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 161, in animatediff_sample
    return wrap_function_to_inject_xformers_bug_info(orig_comfy_sample)(model, *args, **kwargs)
  File "D:\apps\AI\COMFYUI3\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_utils.py", line 144, in wrapped_function
    return function_to_wrap(*args, **kwargs)
  File "D:\apps\AI\COMFYUI3\ComfyUI\comfy\sample.py", line 93, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "D:\apps\AI\COMFYUI3\ComfyUI\comfy\samplers.py", line 742, in sample
    samples = getattr(k_diffusion_sampling, "sample_{}".format(self.sampler))(self.model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar)
  File "C:\Users\JohnJ\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\apps\AI\COMFYUI3\ComfyUI\comfy\k_diffusion\sampling.py", line 137, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "C:\Users\JohnJ\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI3\ComfyUI\comfy\samplers.py", line 323, in forward
    out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, cond_concat=cond_concat, model_options=model_options, seed=seed)
  File "C:\Users\JohnJ\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI3\ComfyUI\comfy\k_diffusion\external.py", line 125, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "D:\apps\AI\COMFYUI3\ComfyUI\comfy\k_diffusion\external.py", line 151, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "D:\apps\AI\COMFYUI3\ComfyUI\comfy\samplers.py", line 311, in apply_model
    out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, cond_concat, model_options=model_options, seed=seed)
  File "D:\apps\AI\COMFYUI3\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 527, in sliding_sampling_function
    cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat, model_options)
  File "D:\apps\AI\COMFYUI3\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 427, in calc_cond_uncond_batch
    output = model_function(input_x, timestep_, **c).chunk(batch_chunks)
  File "D:\apps\AI\COMFYUI3\ComfyUI\comfy\model_base.py", line 63, in apply_model
    return self.diffusion_model(xc, t, context=context, y=c_adm, control=control, transformer_options=transformer_options).float()
  File "C:\Users\JohnJ\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI3\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 634, in forward
    h = forward_timestep_embed(self.middle_block, h, emb, context, transformer_options)
  File "D:\apps\AI\COMFYUI3\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 71, in forward_timestep_embed
    x = layer(x, context)
  File "C:\Users\JohnJ\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\JohnJ\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "D:\apps\AI\COMFYUI3\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\motion_module.py", line 398, in forward
    return self.temporal_transformer(input_tensor, encoder_hidden_states, attention_mask)
  File "C:\Users\JohnJ\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\JohnJ\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "D:\apps\AI\COMFYUI3\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\motion_module.py", line 461, in forward
    hidden_states = self.norm(hidden_states)
  File "C:\Users\JohnJ\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\JohnJ\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\hooks.py", line 160, in new_forward
    args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
  File "C:\Users\JohnJ\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\hooks.py", line 104, in pre_forward
    args, kwargs = hook.pre_forward(module, *args, **kwargs)
  File "C:\Users\JohnJ\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\hooks.py", line 286, in pre_forward
    set_module_tensor_to_device(
  File "C:\Users\JohnJ\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\utils\modeling.py", line 298, in set_module_tensor_to_device
    new_value = value.to(device)
NotImplementedError: Cannot copy out of meta tensor; no data!

Prompt executed in 20.81 seconds
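
For context on the lowvram observation above: in lowvram mode, the model is dispatched through accelerate, whose hooks (the accelerate/hooks.py and set_module_tensor_to_device frames in the traceback) park weights on the meta device between calls and materialize them for each forward. A minimal sketch of that mechanism, assuming accelerate's public cpu_offload API; the error in this thread fires when that per-forward materialization runs on a tensor whose real data is no longer reachable:

```python
import torch
from accelerate import cpu_offload

net = torch.nn.Linear(8, 8)
cpu_offload(net, execution_device="cpu")  # installs an AlignDevicesHook on net

# Between calls the weight is only a meta placeholder; the real values
# live in the hook's offload map and are restored for each forward.
print(net.weight.device)      # meta
out = net(torch.randn(1, 8))  # pre_forward materializes, post_forward re-offloads
print(net.weight.device)      # meta again
```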
Kosinkadink commented 1 year ago

Interesting, I will see if I can explicitly force mine into lowvram mode and replicate the issue - good catch and thanks for the heads up! If I can replicate it, then I can actually do trial and error with code to fix it. Otherwise, we're playing a long, silly game of telephone lol

Kosinkadink commented 1 year ago

@Cedri4 I'm adding you to this thread so that we can chat without me having to retype messages.

As verosment noticed, model loading in lowvram mode appears to be a possible issue with the new code. For context, Cedri4 is rocking only 3GB of VRAM, while verosment is rocking 6GB. Looking at Cedri4's logs, he also gets into lowvram mode when using the old code, but does not crash on subsequent generations. So, only the new code is unhappy. What is most weird is that Cedri4 crashes on his second run, but verosment does on his third.

I would like you guys to try out a couple things for me on the main branch. Prerequisites:

What to do:

TEST 1 (baseline)

1) Boot up comfy UI.
2) Load the basic txt2img workflow in the main branch.
3) Select an SD model to use. We'll call this SD model A.
4) Select a motion model to use (anything but 15_v2). We'll call this ADiff model A.
5) Run the workflow once.
6) Increment the seed.
7) Run it again.
8) EXPECTED BEHAVIOR: Cedri4 should now be crashed. verosment, run it once more so you crash too, and make note of whether the second image has any visible degradation (or just post em here so I can compare). Copy all command line output, and send it my way as TEST 1 results.
9) Shut down comfy.

TEST 2 (SD model change)

1) Do steps 1-5 from TEST 1.
2) Keep the seed the same, but now change to use your second SD model; we'll call this SD model B. DO NOT CHANGE YOUR MOTION MODEL.
3) Run it again.
4) EXPECTED BEHAVIOR: this is where things might change. Cedri4, note whether you have crashed now or not. verosment, run again and note whether you have crashed. In case you have not crashed yet, run it again with the same SD model B (DO NOT CHANGE THE MOTION MODEL) and note the results. Send the command line output as TEST 2 results.
5) Shut down comfy.

TEST 3 (ADiff model change)

1) Do everything in TEST 2, but instead of switching SD models from A to B, stick with SD model A the whole time, and instead switch from ADiff model A to ADiff model B after the first run. As before, note differences and try to run until you get the error. Results for this are TEST 3 results.
2) Remember to shut down comfy.

EXTRA TESTS

1) If TEST 2 or TEST 3 (or both) yielded results different from TEST 1, repeat the tests that yielded different results, but this time, before running the workflow the nth time (where n is the run that would cause you to crash in TEST 2 or TEST 3), switch the SD model back to A (TEST 2) or switch the ADiff model back to A (TEST 3), and run it. If you don't crash at this point, switch back to model B (based on which test you're in) and run it again. If you still haven't crashed, run it again without changing models until you crash. Note the results as TEST2ALT and/or TEST3ALT.
2) Shut down comfy.

After we get results for these tests, I will review results and we can do more tests if needed. This will help me track down the issue immensely, and I can't replicate this on my end so this is the only way for me to narrow things down.

Kosinkadink commented 1 year ago

Oh yeah, also, save your generated gifs! It would be good to know if the output changes on runs that don't crash.

Cedri4 commented 1 year ago

Test 1

The successful run (first): aaa_readme_00001_
The unsuccessful run (second):

error from ComfyUI / error from Terminal (both showed the same traceback):

```sh
!!! Exception during processing !!!
Traceback (most recent call last):
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1236, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1206, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 161, in animatediff_sample
    return wrap_function_to_inject_xformers_bug_info(orig_comfy_sample)(model, *args, **kwargs)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_utils.py", line 144, in wrapped_function
    return function_to_wrap(*args, **kwargs)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 93, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 742, in sample
    samples = getattr(k_diffusion_sampling, "sample_{}".format(self.sampler))(self.model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 137, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 323, in forward
    out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, cond_concat=cond_concat, model_options=model_options, seed=seed)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\external.py", line 125, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\external.py", line 151, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 311, in apply_model
    out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, cond_concat, model_options=model_options, seed=seed)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 527, in sliding_sampling_function
    cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat, model_options)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 427, in calc_cond_uncond_batch
    output = model_function(input_x, timestep_, **c).chunk(batch_chunks)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 63, in apply_model
    return self.diffusion_model(xc, t, context=context, y=c_adm, control=control, transformer_options=transformer_options).float()
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 659, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, output_shape)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 71, in forward_timestep_embed
    x = layer(x, context)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\motion_module.py", line 398, in forward
    return self.temporal_transformer(input_tensor, encoder_hidden_states, attention_mask)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\motion_module.py", line 461, in forward
    hidden_states = self.norm(hidden_states)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 160, in new_forward
    args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 104, in pre_forward
    args, kwargs = hook.pre_forward(module, *args, **kwargs)
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 286, in pre_forward
    set_module_tensor_to_device(
  File "F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\utils\modeling.py", line 298, in set_module_tensor_to_device
    new_value = value.to(device)
NotImplementedError: Cannot copy out of meta tensor; no data!
Prompt executed in 3.23 seconds
```
Cedri4 commented 1 year ago

Test 2 with model B

Successful run (first): aaa_readme_00002_
Successful run (second): it worked with the incremented seed, but it was instant; I doubt it changed anything, and the file size is the same as the first: aaa_readme_00004_
Unsuccessful run (third):

error from ComfyUI / error from Terminal: same traceback as in the TEST 1 results above, ending in NotImplementedError: Cannot copy out of meta tensor; no data! (Prompt executed in 3.46 seconds)
Kosinkadink commented 1 year ago

Thanks for running them! But I think I may have been misunderstood in terms of the terminal output - I want to see the full ComfyUI terminal output from the time comfy boots up, so that I can see all the [AnimateDiffEvo] messages and any of the print statements from comfy - those will let me see what the plugin (and comfy) is trying to do.

Cedri4 commented 1 year ago

Crap, the terminal is already closed

Kosinkadink commented 1 year ago

If it wouldn't be too much trouble, that terminal output from the 3 tests is what I'm most interested in - some of those statements only get printed when certain things happen.

Cedri4 commented 1 year ago

For test 3, do I have to use model A or B?

Kosinkadink commented 1 year ago

Technically, SD model A, but since you should be rebooting comfy at the beginning of the test, either works. I'd use SD model A to be consistent.

Kosinkadink commented 1 year ago

Hopefully with the terminal output from TEST 1, TEST 2, and TEST 3, I can figure out where things go wrong, and then I can make a branch with more detailed print statements for you guys to run and send me the output of.

Cedri4 commented 1 year ago

Test 3 with model A

Run 1 was successful. Run 2 was successful - basically, it was instant and the same as the first run, with an incremented seed. Run 3 was unsuccessful with model A.

error from terminal

```sh
F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
** ComfyUI start up time: 2023-09-24 19:21:42.260914

Prestartup times for custom nodes:
   0.0 seconds: F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager

Total VRAM 3072 MB, total RAM 32711 MB
Trying to enable lowvram mode because your GPU seems to have 4GB or less. If you don't want this use: --normalvram
xformers version: 0.0.21
Set vram state to: LOW_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1060 3GB : cudaMallocAsync
VAE dtype: torch.float32
Using xformers cross attention
Using xformers cross attention
### Loading: ComfyUI-Manager (V0.30.4)
### ComfyUI Revision: 1483 [76cdc809]

Import times for custom nodes:
   0.0 seconds: F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved
   0.5 seconds: F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager

Starting server

To see the GUI go to: http://127.0.0.1:8188
FETCH DATA from: F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
got prompt
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
model_type EPS
adm 0
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
missing {'cond_stage_model.logit_scale', 'cond_stage_model.text_projection'}
left over keys: dict_keys(['cond_stage_model.transformer.text_model.embeddings.position_ids'])
[AnimateDiffEvo] - INFO - Loading motion module mm-Stabilized_mid.pth
[AnimateDiffEvo] - INFO - Using fp16, converting motion module to fp16
loading new
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (8) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Injecting motion module mm-Stabilized_mid.pth version v1.
loading new
loading in lowvram mode 1115.5
100%|████████████████████████████████████████████████████████████████████████████████████| 5/5 [01:04<00:00, 12.99s/it]
[AnimateDiffEvo] - INFO - Ejecting motion module mm-Stabilized_mid.pth version v1.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py:47: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer('betas', torch.tensor(betas, dtype=torch.float32))
F:\Stable_diffusion\Test\ComfyUI_windows_portable_nvidia_cu118_or_cpu_28_08_2023_\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py:48: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer('alphas_cumprod', torch.tensor(alphas_cumprod, dtype=torch.float32))
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
Prompt executed in 80.29 seconds
got prompt
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
3
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
Prompt executed in 1.56 seconds
got prompt
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
3
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
Prompt executed in 1.80 seconds
got prompt
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
3
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (8) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Injecting motion module mm-Stabilized_mid.pth version v1.
loading new
loading in lowvram mode 838.5868768692017
  0%|                                                                                           | 0/5 [00:02
```

So this is the whole terminal: the first run, then runs 2/3 with the fixed seed (which were instant), then the 2 unsuccessful runs.

Kosinkadink commented 1 year ago

Did you change the motion module in test 3 between the first run and the second run as the test instructions said?

Kosinkadink commented 1 year ago

Because looking at the output, it was loading mm-Stabilized_mid.pth every single time - I don't see any change.

Kosinkadink commented 1 year ago

And I'd also need the terminal output from tests 1 and 2

Kosinkadink commented 1 year ago

Be sure to follow the instructions to a T, so that I can draw accurate conclusions from what we get printed in the terminal.

Kosinkadink commented 1 year ago

And if you need any clarification for any of them, let me know!

verosment commented 1 year ago

Hi, sorry for the late response, just woke up. I will get to testing these immediately.

Kosinkadink commented 1 year ago

@Cedri4 I'm adding you to this thread so that we can chat without me having to retype messages.

As verosment noticed, model loading in lowvram mode appears to be a possible issue with the new code. For context, Cedri4 is rocking only 3GB of VRAM, while verosment is rocking 6GB. Looking at Cedri4's logs, he also gets into lowvram mode when using the old code, but does not crash on subsequent generations. So, only the new code is unhappy. What is most weird is that Cedri4 crashes on his second run, but verosment does on his third.
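
Side note in case it helps anyone reading along: the "meta" device in that NotImplementedError is PyTorch's data-less placeholder - accelerate parks offloaded weights there in lowvram mode and re-materializes them in its pre_forward hooks (that's the hooks.py -> set_module_tensor_to_device chain in the tracebacks). If a module's weights are still on meta when something tries to move them, the copy has nothing to read. A minimal sketch, independent of comfy:

```python
import torch

# A "meta" tensor carries shape/dtype metadata but no storage; in lowvram
# mode, accelerate keeps offloaded weights like this until a pre_forward
# hook swaps the real data back onto the execution device.
t = torch.empty(3, device="meta")
print(t.shape, t.device)  # torch.Size([3]) meta

# There is no data to copy out of it, so moving it to a real device fails
# with exactly the error from the tracebacks above:
t.to("cpu")  # NotImplementedError: Cannot copy out of meta tensor; no data!
```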

I would like you guys to try out a couple things for me on the main branch. Prerequisites:

  • have at least 2 SD1.5-compatible checkpoints to use
  • have at least 2 motion modules to use

What to do:

TEST 1 (baseline)

  1. boot up comfy ui
  2. load the basic txt2img workflow in the main branch
  3. select an SD model to use. We'll call this SD model A.
  4. select a motion model to use (anything but 15_v2). We'll call this ADiff model A.
  5. run the workflow once.
  6. increment the seed
  7. run it again.
  8. EXPECTED BEHAVIOR: Cedri4 should now be crashed. verosment, run it once more so you crash too, and note whether the second image has any visible degradation (or just post them here so I can compare). Copy all command line output, and send it my way as TEST 1 Results.
  9. shut down comfy

TEST 2 (SD model change)

  1. do steps 1-5 from TEST 1.
  2. keep the seed the same, but now change to use your second SD model; we'll call this SD model B. DO NOT CHANGE YOUR MOTION MODEL.
  3. run it again.
  4. EXPECTED BEHAVIOR: this is where things might change. Cedri4, note whether you have crashed now or not. verosment, run again and note whether you have crashed. If you have not crashed yet, run it again with the same SD model B - DO NOT CHANGE THE MOTION MODEL - and note the results. Send the command line output as TEST 2 results.
  5. shut down comfy

TEST 3 (ADiff model change)

  1. do everything in TEST 2 but instead of switching SD models from A to B, stick with SD model A the whole time, and instead, switch from ADiff model A to ADiff model B after the first run. As before, note differences and try to run until you get the error. Results for this are Test 3 results.
  2. remember to shut down comfy

EXTRA TESTS

  1. If TEST 2 or TEST 3 (or both) yielded results different from TEST 1, repeat the tests that yielded different results, but this time, before running the workflow the nth time (where n is the run that would cause you to crash in TEST 2 or TEST 3), switch the SD model back to A (TEST 2) or the ADiff model back to A (TEST 3), and run it. If you don't crash at that point, switch back to model B (based on which test you're in) and run it again. If you still haven't crashed, run it again without changing models until you crash. Note the results as TEST2ALT and/or TEST3ALT.
  2. shut down comfy

After we get results for these tests, I will review results and we can do more tests if needed. This will help me track down the issue immensely, and I can't replicate this on my end so this is the only way for me to narrow things down.


Kosinkadink commented 1 year ago

No worries, I appreciate the help!

verosment commented 1 year ago

Is it fine to use animatediffMotion_v15 (not v2)?

Kosinkadink commented 1 year ago

Yep. Technically, the only reason I say not v2 is because it has an extra little portion to it vs the other models. It still works just fine in AD, but for debugging purposes I am trying to keep everything constant.
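
If you're curious, that difference shows up directly in the checkpoint's state dict - the v2 modules carry extra mid-block motion keys that the other models don't have. A rough sketch of how you could check (the filename and key layout here are illustrative assumptions, not guarantees):

```python
import torch

# Illustrative check (assumed key layout): v2 motion modules include extra
# mid-block motion layers, so their state dicts contain "mid_block" keys
# that v1-style modules lack.
sd = torch.load("mm_sd_v15_v2.ckpt", map_location="cpu")
print("v2-style" if any("mid_block" in key for key in sd) else "v1-style")
```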

verosment commented 1 year ago

Is it alright if I change the resolution and steps? I only ask because it is running at 182s/it at 512x512 with 20 steps and says it will take 57 minutes for one result; I wish to run it at something like 256x256 at 10 steps, which will still give an intelligible result.

verosment commented 1 year ago

I used the text2image example from the main branch, but the only thing I changed was the Load VAE node, as I don't have any external VAEs downloaded, so I switched it to use the VAE from the model. (I don't really know what VAEs exactly are, to be honest.) I also lowered the resolution to 256x256 and changed steps to 10 so that I could run this in a reasonable time, as I'm going out later today and 1 hour for a single result is a bit out of my time budget right now. If it is required, then I will run 512x512 20-step generations tomorrow.

TEST 1 (baseline):

  • Motion Model: animatediffMotion_v15
  • Diffusion Model: epicrealism_naturalSin

CMD:

D:\apps\AI\COMFYUI2\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --force-fp32 --preview-method auto --normalvram --use-pytorch-cross-attention --disable-xformers
Total VRAM 6144 MB, total RAM 16307 MB
Forcing FP32, if this improves things please report it.
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1660 : cudaMallocAsync
VAE dtype: torch.float32
Using pytorch cross attention
Using pytorch cross attention

Import times for custom nodes:
   0.1 seconds: D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved

Starting server

To see the GUI go to: http://127.0.0.1:8188
got prompt
model_type EPS
adm 0
making attention of type 'vanilla-pytorch' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-pytorch' with 512 in_channels
missing {'cond_stage_model.logit_scale', 'cond_stage_model.text_projection'}
left over keys: dict_keys(['cond_stage_model.transformer.text_model.embeddings.position_ids', 'model_ema.decay', 'model_ema.num_updates'])
[AnimateDiffEvo] - INFO - Loading motion module animatediffMotion_v15.ckpt
loading new
D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  return self.fget.__get__(instance, owner)()
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (16) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Injecting motion module animatediffMotion_v15.ckpt version v1.
loading new
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [10:49<00:00, 64.96s/it]
[AnimateDiffEvo] - INFO - Ejecting motion module animatediffMotion_v15.ckpt version v1.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py:47: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer('betas', torch.tensor(betas, dtype=torch.float32))
D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py:48: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer('alphas_cumprod', torch.tensor(alphas_cumprod, dtype=torch.float32))
Prompt executed in 782.55 seconds
got prompt
2
3
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (16) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Injecting motion module animatediffMotion_v15.ckpt version v1.
loading new
loading in lowvram mode 1763.1319274902344
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [01:53<00:00, 11.30s/it]
[AnimateDiffEvo] - INFO - Ejecting motion module animatediffMotion_v15.ckpt version v1.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
Prompt executed in 166.46 seconds
got prompt
2
3
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (16) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Injecting motion module animatediffMotion_v15.ckpt version v1.
loading new
loading in lowvram mode 1907.7473125457764
  0%|                                                                                           | 0/10 [00:01<?, ?it/s]
[AnimateDiffEvo] - INFO - Ejecting motion module animatediffMotion_v15.ckpt version v1.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
!!! Exception during processing !!!
Traceback (most recent call last):
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1236, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1206, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 161, in animatediff_sample
    return wrap_function_to_inject_xformers_bug_info(orig_comfy_sample)(model, *args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_utils.py", line 144, in wrapped_function
    return function_to_wrap(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 93, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 741, in sample
    samples = getattr(k_diffusion_sampling, "sample_{}".format(self.sampler))(self.model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 137, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 322, in forward
    out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, cond_concat=cond_concat, model_options=model_options, seed=seed)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\external.py", line 125, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\external.py", line 151, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 310, in apply_model
    out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, cond_concat, model_options=model_options, seed=seed)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 527, in sliding_sampling_function
    cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat, model_options)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 427, in calc_cond_uncond_batch
    output = model_function(input_x, timestep_, **c).chunk(batch_chunks)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 63, in apply_model
    return self.diffusion_model(xc, t, context=context, y=c_adm, control=control, transformer_options=transformer_options).float()
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 653, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, output_shape)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 71, in forward_timestep_embed
    x = layer(x, context)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\motion_module.py", line 398, in forward
    return self.temporal_transformer(input_tensor, encoder_hidden_states, attention_mask)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\motion_module.py", line 461, in forward
    hidden_states = self.norm(hidden_states)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 160, in new_forward
    args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 104, in pre_forward
    args, kwargs = hook.pre_forward(module, *args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 286, in pre_forward
    set_module_tensor_to_device(
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\utils\modeling.py", line 298, in set_module_tensor_to_device
    new_value = value.to(device)
NotImplementedError: Cannot copy out of meta tensor; no data!

Prompt executed in 31.14 seconds

Result 1: aaa_readme_00001_

Result 2: aaa_readme_00002_

Result 3: Throws the error.

Kosinkadink commented 1 year ago

Is it alright if I change the resolution and steps? I only ask because it is running at 182s/it at 512x512 with 20 steps and says it will take 57 minutes for one result; I wish to run it at something like 256x256 at 10 steps, which will still give an intelligible result.

I think you can make the size super tiny if you'd like - even something like 64x64 should work if you are really stretched on time

Kosinkadink commented 1 year ago

and you can have the steps be very low, too
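
For a rough sense of why tiny sizes are fine here: SD runs its sampling in a latent space downscaled 8x from pixel space, so sampling time and memory scale with the latent area times the frame count. A quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope: SD latents are 1/8 the pixel resolution with 4
# channels, so halving width and height cuts the latent area by 4x.
def latent_shape(frames: int, height: int, width: int) -> tuple:
    return (frames, 4, height // 8, width // 8)

print(latent_shape(16, 512, 512))  # (16, 4, 64, 64)
print(latent_shape(16, 256, 256))  # (16, 4, 32, 32) - 4x fewer latent values
```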

verosment commented 1 year ago

I'll put it to 128 or 192 for the time being, gonna run test 2 now.

verosment commented 1 year ago

I changed the resolution to 128 instead of the 256 resolution I used in Test 1, but everything else should be the same. (Besides the model, of course.)

Test 2 (SD Model Change):

  • Motion Model: animatediffMotion_v15
  • Diffusion Model: dreamshaper_8

CMD:

D:\apps\AI\COMFYUI2\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --force-fp32 --preview-method auto --normalvram --use-pytorch-cross-attention --disable-xformers
Total VRAM 6144 MB, total RAM 16307 MB
Forcing FP32, if this improves things please report it.
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1660 : cudaMallocAsync
VAE dtype: torch.float32
Using pytorch cross attention
Using pytorch cross attention

Import times for custom nodes:
   0.1 seconds: D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved

Starting server

To see the GUI go to: http://127.0.0.1:8188
got prompt
model_type EPS
adm 0
making attention of type 'vanilla-pytorch' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-pytorch' with 512 in_channels
missing {'cond_stage_model.logit_scale', 'cond_stage_model.text_projection'}
left over keys: dict_keys(['cond_stage_model.transformer.text_model.embeddings.position_ids'])
[AnimateDiffEvo] - INFO - Loading motion module animatediffMotion_v15.ckpt
loading new
D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  return self.fget.__get__(instance, owner)()
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (16) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Injecting motion module animatediffMotion_v15.ckpt version v1.
loading new
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [01:52<00:00, 11.24s/it]
[AnimateDiffEvo] - INFO - Ejecting motion module animatediffMotion_v15.ckpt version v1.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py:47: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer('betas', torch.tensor(betas, dtype=torch.float32))
D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py:48: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer('alphas_cumprod', torch.tensor(alphas_cumprod, dtype=torch.float32))
Prompt executed in 154.85 seconds
got prompt
2
3
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (16) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Injecting motion module animatediffMotion_v15.ckpt version v1.
loading new
loading in lowvram mode 1763.1319274902344
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:20<00:00,  2.02s/it]
[AnimateDiffEvo] - INFO - Ejecting motion module animatediffMotion_v15.ckpt version v1.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
Prompt executed in 23.97 seconds
got prompt
2
3
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (16) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Injecting motion module animatediffMotion_v15.ckpt version v1.
loading new
loading in lowvram mode 1907.7473125457764
  0%|                                                                                           | 0/10 [00:01<?, ?it/s]
[AnimateDiffEvo] - INFO - Ejecting motion module animatediffMotion_v15.ckpt version v1.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
!!! Exception during processing !!!
Traceback (most recent call last):
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1236, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1206, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 161, in animatediff_sample
    return wrap_function_to_inject_xformers_bug_info(orig_comfy_sample)(model, *args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_utils.py", line 144, in wrapped_function
    return function_to_wrap(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 93, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 741, in sample
    samples = getattr(k_diffusion_sampling, "sample_{}".format(self.sampler))(self.model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 137, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 322, in forward
    out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, cond_concat=cond_concat, model_options=model_options, seed=seed)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\external.py", line 125, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\external.py", line 151, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 310, in apply_model
    out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, cond_concat, model_options=model_options, seed=seed)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 527, in sliding_sampling_function
    cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat, model_options)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 427, in calc_cond_uncond_batch
    output = model_function(input_x, timestep_, **c).chunk(batch_chunks)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 63, in apply_model
    return self.diffusion_model(xc, t, context=context, y=c_adm, control=control, transformer_options=transformer_options).float()
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 653, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, output_shape)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 71, in forward_timestep_embed
    x = layer(x, context)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\motion_module.py", line 398, in forward
    return self.temporal_transformer(input_tensor, encoder_hidden_states, attention_mask)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\motion_module.py", line 461, in forward
    hidden_states = self.norm(hidden_states)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 160, in new_forward
    args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 104, in pre_forward
    args, kwargs = hook.pre_forward(module, *args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 286, in pre_forward
    set_module_tensor_to_device(
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\utils\modeling.py", line 298, in set_module_tensor_to_device
    new_value = value.to(device)
NotImplementedError: Cannot copy out of meta tensor; no data!

Prompt executed in 2.58 seconds

Result 1: aaa_readme_00003_

Result 2: aaa_readme_00004_

Result 3: Error.

Kosinkadink commented 1 year ago

And to be clear, did you change the SD model between your results 2 and 3 in this test? Not between tests 1 and 2, I mean, but between the results in test 2.

verosment commented 1 year ago

No, the model stayed as dreamshaper_8 the whole time in Test 2.

verosment commented 1 year ago
  1. do everything in TEST 2 but instead of switching SD models from A to B, stick with SD model A the whole time, and instead, switch from ADiff model A to ADiff model B after the first run. As before, note differences and try to run until you get the error. Results for this are Test 3 results.

Does this mean generating the first one with ADiff model A and then the second with ADiff model B? What about after the second generation, should I switch back to A or keep it on B? Or should I run it all on ADiff model B?

Kosinkadink commented 1 year ago

You'll need to repeat Test 2 before you go for Test 3, but change the SD model between Result 2 and Result 3. Basically, I am testing here to see if the SD model it has in memory gets corrupted due to my code not cleaning it properly for one reason or another.

And Test 3 will have the same SD model between Result 2 and 3, but a different motion model.
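
For context on what inject/eject means in the logs: the motion module's temporal layers get spliced into the loaded UNet before sampling, and the original modules get restored afterwards. A toy sketch of that pattern - hypothetical names, not the plugin's actual code - to show why a missed cleanup could leave stale state behind:

```python
import torch.nn as nn

# Toy sketch of an inject/eject pattern (hypothetical, not the real code).
# If eject ever misses a reference, later runs can touch stale modules -
# e.g. ones whose weights accelerate has since offloaded to "meta".
class ToyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.block = nn.Linear(4, 4)

def inject(unet: ToyUNet, motion: nn.Module) -> None:
    unet._orig_block = unet.block
    unet.block = nn.Sequential(unet.block, motion)  # splice motion layer in

def eject(unet: ToyUNet) -> None:
    unet.block = unet._orig_block  # restore the original module
    del unet._orig_block

unet = ToyUNet()
inject(unet, nn.Linear(4, 4))
eject(unet)
```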

verosment commented 1 year ago

So change the SD model on the third generation?

Kosinkadink commented 1 year ago

Yep, exactly, that's Test 2. And on Test 3, you will instead change the motion model.

verosment commented 1 year ago

Test 2 (SD Model Change (Properly)):

  • Motion Model: animatediffMotion_v15
  • Results 1 & 2 Diffusion Model: epicrealism_naturalSin
  • Result 3 Diffusion Model: dreamshaper_8

CMD:

D:\apps\AI\COMFYUI2\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --force-fp32 --preview-method auto --normalvram --use-pytorch-cross-attention --disable-xformers
Total VRAM 6144 MB, total RAM 16307 MB
Forcing FP32, if this improves things please report it.
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1660 : cudaMallocAsync
VAE dtype: torch.float32
Using pytorch cross attention
Using pytorch cross attention

Import times for custom nodes:
   0.1 seconds: D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved

Starting server

To see the GUI go to: http://127.0.0.1:8188
got prompt
model_type EPS
adm 0
making attention of type 'vanilla-pytorch' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-pytorch' with 512 in_channels
missing {'cond_stage_model.text_projection', 'cond_stage_model.logit_scale'}
left over keys: dict_keys(['cond_stage_model.transformer.text_model.embeddings.position_ids', 'model_ema.decay', 'model_ema.num_updates'])
[AnimateDiffEvo] - INFO - Loading motion module animatediffMotion_v15.ckpt
loading new
D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  return self.fget.__get__(instance, owner)()
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (16) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Injecting motion module animatediffMotion_v15.ckpt version v1.
loading new
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [01:54<00:00, 11.49s/it]
[AnimateDiffEvo] - INFO - Ejecting motion module animatediffMotion_v15.ckpt version v1.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py:47: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer('betas', torch.tensor(betas, dtype=torch.float32))
D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py:48: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer('alphas_cumprod', torch.tensor(alphas_cumprod, dtype=torch.float32))
Prompt executed in 152.51 seconds
got prompt
2
3
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (16) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Injecting motion module animatediffMotion_v15.ckpt version v1.
loading new
loading in lowvram mode 1763.1319274902344
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:20<00:00,  2.01s/it]
[AnimateDiffEvo] - INFO - Ejecting motion module animatediffMotion_v15.ckpt version v1.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
Prompt executed in 23.58 seconds
got prompt
2
2
model_type EPS
adm 0
making attention of type 'vanilla-pytorch' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-pytorch' with 512 in_channels
missing {'cond_stage_model.text_projection', 'cond_stage_model.logit_scale'}
left over keys: dict_keys(['cond_stage_model.transformer.text_model.embeddings.position_ids'])
loading new
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (16) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Injecting motion module animatediffMotion_v15.ckpt version v1.
loading new
loading in lowvram mode 1907.7473125457764
  0%|                                                                                           | 0/10 [00:01<?, ?it/s]
[AnimateDiffEvo] - INFO - Ejecting motion module animatediffMotion_v15.ckpt version v1.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
!!! Exception during processing !!!
Traceback (most recent call last):
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1236, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1206, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 161, in animatediff_sample
    return wrap_function_to_inject_xformers_bug_info(orig_comfy_sample)(model, *args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_utils.py", line 144, in wrapped_function
    return function_to_wrap(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 93, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 741, in sample
    samples = getattr(k_diffusion_sampling, "sample_{}".format(self.sampler))(self.model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 137, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 322, in forward
    out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, cond_concat=cond_concat, model_options=model_options, seed=seed)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\external.py", line 125, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\external.py", line 151, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 310, in apply_model
    out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, cond_concat, model_options=model_options, seed=seed)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 527, in sliding_sampling_function
    cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat, model_options)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 427, in calc_cond_uncond_batch
    output = model_function(input_x, timestep_, **c).chunk(batch_chunks)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 63, in apply_model
    return self.diffusion_model(xc, t, context=context, y=c_adm, control=control, transformer_options=transformer_options).float()
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 653, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, output_shape)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 71, in forward_timestep_embed
    x = layer(x, context)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\motion_module.py", line 398, in forward
    return self.temporal_transformer(input_tensor, encoder_hidden_states, attention_mask)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\motion_module.py", line 461, in forward
    hidden_states = self.norm(hidden_states)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 160, in new_forward
    args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 104, in pre_forward
    args, kwargs = hook.pre_forward(module, *args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 286, in pre_forward
    set_module_tensor_to_device(
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\utils\modeling.py", line 298, in set_module_tensor_to_device
    new_value = value.to(device)
NotImplementedError: Cannot copy out of meta tensor; no data!

Prompt executed in 79.64 seconds

Result 1: aaa_readme_00001_

Result 2: aaa_readme_00002_

Result 3: Error.

Kosinkadink commented 1 year ago

Btw, are results 1 and 2 in this test using the same seed?

Kosinkadink commented 1 year ago

Wait, that was a dumb question - I guess since the models didn't change here, that had to be a different seed.