ArtVentureX / comfyui-animatediff

AnimateDiff for ComfyUI

Can't free GPU memory after first gen #53

Open wtyisjoe opened 11 months ago

wtyisjoe commented 11 months ago

As the title says, my graphics card is a laptop RTX 3050, which has only 4GB of VRAM. After the first gen I get "index out of range" like #34. I can see in Task Manager that the VRAM is not released, but I don't know how to fix that, because in A1111 I don't have to do anything about it, even though its generation is much slower than ComfyUI (takes almost 3x as long).

[screenshot: Screenshot 2023-10-23 155650]

artventuredev commented 11 months ago

After performing some tests, I believe the issue isn't related to VRAM not being released.

Here's my setup:

  • I'm using an RTX 2060 6GB and started another process that consumes 2GB of VRAM.
  • I then loaded a simple workflow into ComfyUI and initiated generation.
  • After the first generation, VRAM usage dropped to 2.6GB (2GB from the other process and 0.6GB from Comfy).
  • The second generation was successful, and VRAM usage after generation was 3.6GB.
  • Subsequent runs were also successful, and VRAM usage remained stable at around 3.6-3.7GB after generation.

Comfy caches certain elements from previous generations to expedite subsequent runs, so not all VRAM is freed, but this shouldn't pose an issue.

To help replicate your exact VRAM requirements, could you please provide an exact copy of your workflow, including the VAE, checkpoint, and AnimateDiff module? It's best if you can include the download link for the VAE/checkpoint.
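In the meantime, here's a way to check whether the held VRAM is just PyTorch's allocator cache or genuinely live allocations. A minimal sketch (plain PyTorch, nothing ComfyUI-specific; run it in the same process, e.g. from a debugger or a small custom node):

```python
import torch

def report_vram(tag: str) -> None:
    # memory_allocated: tensors that are still alive.
    # memory_reserved: blocks the CUDA caching allocator holds on to,
    # which is what Task Manager reports as "used".
    alloc = torch.cuda.memory_allocated() / 2**20
    reserved = torch.cuda.memory_reserved() / 2**20
    print(f"[{tag}] allocated: {alloc:.0f} MiB, reserved: {reserved:.0f} MiB")

report_vram("after gen")
torch.cuda.empty_cache()  # hand cached blocks back to the driver
report_vram("after empty_cache")
```

If "allocated" stays high after empty_cache(), something is still holding tensor references (an actual leak); if only "reserved" was high, it was just the cache.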

wtyisjoe commented 11 months ago

> After performing some tests, I believe the issue isn't related to VRAM not being released.
>
> Here's my setup:
>
>   • I'm using an RTX 2060 6GB and started another process that consumes 2GB of VRAM.
>   • I then loaded a simple workflow into ComfyUI and initiated generation.
>   • After the first generation, VRAM usage dropped to 2.6GB (2GB from the other process and 0.6GB from Comfy).
>   • The second generation was successful, and VRAM usage after generation was 3.6GB.
>   • Subsequent runs were also successful, and VRAM usage remained stable at around 3.6-3.7GB after generation.
>
> Comfy caches certain elements from previous generations to expedite subsequent runs, so not all VRAM is freed, but this shouldn't pose an issue.
>
> To help replicate your exact VRAM requirements, could you please provide an exact copy of your workflow, including the VAE, checkpoint, and AnimateDiff module? It's best if you can include the download link for the VAE/checkpoint.

Here is my workflow: [screenshot: Screenshot 2023-10-23 232737]

The checkpoint I used: meinapastel_v5 https://civitai.com/models/11866?modelVersionId=76206

Motion module: mm_sd_v15_v2.ckpt

artventuredev commented 11 months ago

I suspect that the issue arises from attempting to run a 3.5GB model on a 4GB GPU. During the first run, when nothing is cached, the model loads and runs smoothly. However, after the first run, the model is unloaded (transferred to system RAM) to conserve VRAM. When ComfyUI tries to reload it for the second run, it's unable to do so.

I've pruned the model down to 2GB, which should perform well on your system. Please give it a try: https://drive.google.com/file/d/1WO6Gpy-gSe5uFL9dvvRTOIoJGDYqT4y_/view?usp=sharing
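For reference, the pruning itself is straightforward; a sketch of one way to do it with plain torch (filenames are placeholders, and this is not necessarily how the file above was produced), dropping EMA weights and casting float tensors to fp16:

```python
import torch

# Load the full checkpoint on the CPU; .ckpt files are pickled state dicts.
sd = torch.load("meinapastel_v5.ckpt", map_location="cpu")
sd = sd.get("state_dict", sd)  # some checkpoints nest weights under "state_dict"

# Drop EMA copies and cast floating-point tensors to fp16.
pruned = {
    k: (v.half() if isinstance(v, torch.Tensor) and v.is_floating_point() else v)
    for k, v in sd.items()
    if not k.startswith("model_ema.")
}
torch.save({"state_dict": pruned}, "meinapastel_v5-pruned-fp16.ckpt")
```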

wtyisjoe commented 11 months ago

> I suspect that the issue arises from attempting to run a 3.5GB model on a 4GB GPU. During the first run, when nothing is cached, the model loads and runs smoothly. However, after the first run, the model is unloaded (transferred to system RAM) to conserve VRAM. When ComfyUI tries to reload it for the second run, it's unable to do so.
>
> I've pruned the model down to 2GB, which should perform well on your system. Please give it a try: https://drive.google.com/file/d/1WO6Gpy-gSe5uFL9dvvRTOIoJGDYqT4y_/view?usp=sharing

It doesn't work for me. It always errors out when the sampler node is highlighted in ComfyUI, and I've tried other 2GB models that ended up with the same result.

Here's the output:

got prompt
[AnimateDiff] - INFO - Loading motion module mm_sd_v15_v2.ckpt
got prompt
[AnimateDiff] - INFO - Converting motion module to fp16.
model_type EPS
adm 0
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
missing {'cond_stage_model.text_projection', 'cond_stage_model.logit_scale'}
left over keys: dict_keys(['alphas_cumprod', 'alphas_cumprod_prev', 'betas', 'cond_stage_model.transformer.text_model.embeddings.position_ids', 'log_one_minus_alphas_cumprod', 'model_ema.decay', 'model_ema.num_updates', 'posterior_log_variance_clipped', 'posterior_mean_coef1', 'posterior_mean_coef2', 'posterior_variance', 'sqrt_alphas_cumprod', 'sqrt_one_minus_alphas_cumprod', 'sqrt_recip_alphas_cumprod', 'sqrt_recipm1_alphas_cumprod'])
Requested to load SD1ClipModel
Loading 1 new model
[AnimateDiff] - INFO - Injecting motion module with method default.
E:\comfyUI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  return self.fget.__get__(instance, owner)()
Requested to load BaseModel
Loading 1 new model
100% 20/20 [04:17<00:00, 12.85s/it]
E:\comfyUI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py:48: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer('betas', torch.tensor(betas, dtype=torch.float32))
E:\comfyUI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py:49: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer('alphas_cumprod', torch.tensor(alphas_cumprod, dtype=torch.float32))
[AnimateDiff] - INFO - Ejecting motion module with method default.
Prompt executed in 288.74 seconds
[AnimateDiff] - INFO - Injecting motion module with method default.
Requested to load BaseModel
Loading 1 new model
loading in lowvram mode 1041.7423362731934
[AnimateDiff] - INFO - Ejecting motion module with method default.
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "E:\comfyUI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "E:\comfyUI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "E:\comfyUI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "E:\comfyUI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-animatediff\animatediff\sampler.py", line 295, in animatediff_sample
    return super().sample(
  File "E:\comfyUI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1237, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "E:\comfyUI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1207, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "E:\comfyUI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 90, in sample
    real_model, positive_copy, negative_copy, noise_mask, models = prepare_sampling(model, noise.shape, positive, negative, noise_mask)
  File "E:\comfyUI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 81, in prepare_sampling
    comfy.model_management.load_models_gpu([model] + models, comfy.model_management.batch_area_memory(noise_shape[0] * noise_shape[2] * noise_shape[3]) + inference_memory)
  File "E:\comfyUI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 402, in load_models_gpu
    cur_loaded_model = loaded_model.model_load(lowvram_model_memory)
  File "E:\comfyUI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 293, in model_load
    device_map = accelerate.infer_auto_device_map(self.real_model, max_memory={0: "{}MiB".format(lowvram_model_memory // (1024 * 1024)), "cpu": "16GiB"})
  File "E:\comfyUI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\utils\modeling.py", line 958, in infer_auto_device_map
    tied_module_index = [i for i, (n, _) in enumerate(modules_to_treat) if n in tied_param][0]
IndexError: list index out of range

Prompt executed in 2.74 seconds
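For context, the last frame of that traceback is accelerate's device-map planner. A standalone mock of what the failing line does (illustrative values, not accelerate itself):

```python
# Mirror of the lookup in accelerate/utils/modeling.py, infer_auto_device_map:
# when placing a module whose weights are tied to another parameter, find the
# module that owns the tied copy among the modules not yet assigned a device.
modules_to_treat = [("diffusion_model", None)]  # tied module already consumed
tied_param = "cond_stage_model.transformer.text_model.embeddings.weight"

candidates = [i for i, (n, _) in enumerate(modules_to_treat) if n in tied_param]
# If the tied module has already been placed (as can happen on a low-VRAM
# split), candidates is empty and indexing it fails:
tied_module_index = candidates[0]  # IndexError: list index out of range
```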

artventuredev commented 11 months ago

Can you try running this batch-16 workflow (simple txt2img with a batch of 16): batch16.json

I want to make sure that a normal batch of 16 works across multiple runs.

wtyisjoe commented 11 months ago

> Can you try running this batch-16 workflow (simple txt2img with a batch of 16): batch16.json
>
> I want to make sure that a normal batch of 16 works across multiple runs.

Yes, I can run it. I can even change checkpoints in the queue and it runs smoothly.

[screenshot: Screenshot 2023-10-24 203144]

The first two queues used anythingV5; the rest used the pruned ckpt you provided.

artventuredev commented 11 months ago

Well, I'll do some deeper debugging to see if I can find anything.

wtyisjoe commented 11 months ago

> Well, I'll do some deeper debugging to see if I can find anything.

After some experiments, I found an interesting fact: if you run a plain KSampler between AnimateDiff samplers, it somehow works.

I put a simple txt2img workflow after the first AnimateDiff gen, and then I can run AnimateDiff again. However, I still can't run AnimateDiff twice in a row no matter how I adjust the workflow.

It also works well if I arrange the queue as "KS - animatediff - KS - animatediff - ..." and so on.

artventuredev commented 11 months ago

I suspect that switching workflows clears some cached data from the previous run, thereby reducing the VRAM usage. Could you monitor and compare the VRAM consumed after running the normal KS versus after an AnimateDiff run?
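If it helps, here's a small watcher you can run in a separate terminal while queueing the two workflows (a sketch; assumes the NVML bindings are installed, e.g. pip install nvidia-ml-py, and should match what Task Manager shows):

```python
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # GPU 0

# Print total VRAM in use once per second; compare the steady-state numbers
# after a plain KS run versus after an AnimateDiff run.
try:
    while True:
        info = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"used: {info.used / 2**20:.0f} MiB / {info.total / 2**20:.0f} MiB")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```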

wtyisjoe commented 11 months ago

> I suspect that switching workflows clears some cached data from the previous run, thereby reducing the VRAM usage. Could you monitor and compare the VRAM consumed after running the normal KS versus after an AnimateDiff run?

After the AnimateDiff gen, VRAM usage is 1.6GB; then after I run KS, it turns out to be 2.1GB.

If I try to run a 768x768 gif, I get out-of-memory errors, and if I press queue again with the 512x512 workflow I get "index out of range". I think this means that whether or not the AnimateDiff run succeeds, I will still get "index out of range" after the first queue.

I updated ComfyUI yesterday; here is the output now:

The first 768x768 workflow

got prompt
[AnimateDiff] - INFO - Loading motion module mm_sd_v15_v2.ckpt
[AnimateDiff] - INFO - Converting motion module to fp16.
model_type EPS
adm 0
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
missing {'cond_stage_model.logit_scale', 'cond_stage_model.text_projection'}
left over keys: dict_keys(['alphas_cumprod', 'alphas_cumprod_prev', 'betas', 'cond_stage_model.transformer.text_model.embeddings.position_ids', 'log_one_minus_alphas_cumprod', 'model_ema.decay', 'model_ema.num_updates', 'posterior_log_variance_clipped', 'posterior_mean_coef1', 'posterior_mean_coef2', 'posterior_variance', 'sqrt_alphas_cumprod', 'sqrt_one_minus_alphas_cumprod', 'sqrt_recip_alphas_cumprod', 'sqrt_recipm1_alphas_cumprod'])
Requested to load SD1ClipModel
Loading 1 new model
[AnimateDiff] - INFO - Injecting motion module with method default.
E:\comfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  return self.fget.__get__(instance, owner)()
Requested to load BaseModel
Loading 1 new model
| 0/20 [00:29<?, ?it/s]
E:\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py:49: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer('betas', torch.tensor(betas, dtype=torch.float32))
E:\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py:50: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer('alphas_cumprod', torch.tensor(alphas_cumprod, dtype=torch.float32))
[AnimateDiff] - INFO - Ejecting motion module with method default.
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-animatediff\animatediff\sampler.py", line 295, in animatediff_sample
    return super().sample(
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1237, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1207, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 100, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 728, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler(), sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 633, in sample
    samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 589, in sample
    samples = getattr(k_diffusion_sampling, "sample_{}".format(sampler_name))(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **extra_options)
  File "E:\comfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 140, in sample_euler
    callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigma_hat, 'denoised': denoised})
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 576, in <lambda>
    k_callback = lambda x: callback(x["i"], x["denoised"], x["x"], total_steps)
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\latent_preview.py", line 97, in callback
    preview_bytes = previewer.decode_latent_to_preview_image(preview_format, x0)
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\latent_preview.py", line 17, in decode_latent_to_preview_image
    preview_image = self.decode_latent_to_preview(x0)
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\latent_preview.py", line 25, in decode_latent_to_preview
    x_sample = self.taesd.decoder(x0)[0].detach()
  File "E:\comfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\comfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\comfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\container.py", line 215, in forward
    input = module(input)
  File "E:\comfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\comfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\taesd\taesd.py", line 25, in forward
    return self.fuse(self.conv(x) + self.skip(x))
  File "E:\comfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\comfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\comfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\container.py", line 215, in forward
    input = module(input)
  File "E:\comfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\comfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\comfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\activation.py", line 101, in forward
    return F.relu(input, inplace=self.inplace)
  File "E:\comfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\functional.py", line 1471, in relu
    result = torch.relu(input)
torch.cuda.OutOfMemoryError: Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 6.98 GiB
Requested : 2.25 GiB
Device limit : 4.00 GiB
Free (according to CUDA): 0 bytes
PyTorch limit (set by user-supplied memory fraction) : 17179869184.00 GiB

Prompt executed in 38.29 seconds

The second workflow (512x512), which ended with an error:

got prompt
[AnimateDiff] - INFO - Injecting motion module with method default.
Requested to load BaseModel
Loading 1 new model
loading in lowvram mode 1041.7423362731934
[AnimateDiff] - INFO - Ejecting motion module with method default.
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-animatediff\animatediff\sampler.py", line 295, in animatediff_sample
    return super().sample(
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1237, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1207, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 93, in sample
    real_model, positive_copy, negative_copy, noise_mask, models = prepare_sampling(model, noise.shape, positive, negative, noise_mask)
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 86, in prepare_sampling
    comfy.model_management.load_models_gpu([model] + models, comfy.model_management.batch_area_memory(noise_shape[0] * noise_shape[2] * noise_shape[3]) + inference_memory)
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 406, in load_models_gpu
    cur_loaded_model = loaded_model.model_load(lowvram_model_memory)
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 293, in model_load
    device_map = accelerate.infer_auto_device_map(self.real_model, max_memory={0: "{}MiB".format(lowvram_model_memory // (1024 * 1024)), "cpu": "16GiB"})
  File "E:\comfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\accelerate\utils\modeling.py", line 1033, in infer_auto_device_map
    tied_module_index = [i for i, (n, _) in enumerate(modules_to_treat) if n in tied_param][0]
IndexError: list index out of range

Prompt executed in 1.18 seconds

#the txt2img workflow

got prompt
Requested to load BaseModel
Loading 1 new model
loading in lowvram mode 1041.7423362731934
10/10 [00:04<00:00,  2.45it/s]
Prompt executed in 6.80 seconds

#the 512x512 animatediff workflow (worked successfully)
got prompt
[AnimateDiff] - INFO - Injecting motion module with method default.
Requested to load BaseModel
Loading 1 new model
unload clone 1
WARNING:accelerate.big_modeling:You shouldn't move a model when it is dispatched on multiple devices.
WARNING:accelerate.big_modeling:You shouldn't move a model when it is dispatched on multiple devices.
 20/20 [04:34<00:00, 13.70s/it]
[AnimateDiff] - INFO - Ejecting motion module with method default.
Prompt executed in 284.92 seconds

#the same 512x512 workflow (error)

got prompt
WARNING:accelerate.big_modeling:You shouldn't move a model when it is dispatched on multiple devices.
[AnimateDiff] - INFO - Injecting motion module with method default.
Requested to load BaseModel
Loading 1 new model
loading in lowvram mode 1035.5884895324707
[AnimateDiff] - INFO - Ejecting motion module with method default.
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-animatediff\animatediff\sampler.py", line 295, in animatediff_sample
    return super().sample(
           ^^^^^^^^^^^^^^^
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1237, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1207, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 93, in sample
    real_model, positive_copy, negative_copy, noise_mask, models = prepare_sampling(model, noise.shape, positive, negative, noise_mask)
                                                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 86, in prepare_sampling
    comfy.model_management.load_models_gpu([model] + models, comfy.model_management.batch_area_memory(noise_shape[0] * noise_shape[2] * noise_shape[3]) + inference_memory)
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 406, in load_models_gpu
    cur_loaded_model = loaded_model.model_load(lowvram_model_memory)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\comfyUI\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 293, in model_load
    device_map = accelerate.infer_auto_device_map(self.real_model, max_memory={0: "{}MiB".format(lowvram_model_memory // (1024 * 1024)), "cpu": "16GiB"})
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\comfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\accelerate\utils\modeling.py", line 1033, in infer_auto_device_map
    tied_module_index = [i for i, (n, _) in enumerate(modules_to_treat) if n in tied_param][0]
                        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^
IndexError: list index out of range

Prompt executed in 1.20 seconds
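
Given the KSampler-in-between workaround above, it may be worth testing whether an explicit flush between queues has the same effect. A sketch to run inside ComfyUI's process (e.g. from a tiny custom node; soft_empty_cache exists in comfy.model_management, though its exact behavior may vary by version):

```python
import gc
import torch
import comfy.model_management as model_management

def flush_between_queues() -> None:
    # Drop dangling Python references, then release the CUDA allocator's
    # cached blocks, roughly the side effect that switching to a different
    # workflow appears to trigger.
    gc.collect()
    model_management.soft_empty_cache()
    reserved = torch.cuda.memory_reserved() / 2**20
    print(f"reserved after flush: {reserved:.0f} MiB")
```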