Kosinkadink / ComfyUI-AnimateDiff-Evolved

Improved AnimateDiff for ComfyUI and Advanced Sampling Support
Apache License 2.0

Mac Os bug - black images [current workaround included in comment - Ctrl+F 'unlimited_area_hack' to find it] #48

Open dimmm7 opened 11 months ago

dimmm7 commented 11 months ago

Now that I have installed Python 3.10 in a new venv, everything looks normal and the ComfyUI UI loads the nodes correctly. The default preset (the simplest one) works... up to a batch of 8 frames max. That is already great! But as soon as the batch number is above 10, I can see the KSampler start to calculate the first frame, but it immediately goes black and the generated frames are all black. If I go back to 8 frames, everything returns to normal. At 10 frames and above, the frames come out black. I spent all day trying to make it work: clean-installed ComfyUI and its dependencies, new venv with Python 3.10, uninstalled and reinstalled AnimateDiff-Evolved many times. Nothing works. What can I try now? Nothing special in the console, no error, just a warning that I am missing FFmpeg. I guess it has nothing to do with it, as that only concerns the last node. I tried all the different presets provided. Another bug: on a more complex preset (with two KSamplers), as soon as it goes to the second sampler, ComfyUI breaks and I have to relaunch the terminal.

Here are the terminal messages with only one custom node, ComfyUI-AnimateDiff-Evolved:

Last login: Sat Sep 23 19:19:16 on ttys000
michaelroulier@Mac-Studio-de-Michael ~ % cd Comfyui
michaelroulier@Mac-Studio-de-Michael Comfyui % source venv/bin/activate
(venv) michaelroulier@Mac-Studio-de-Michael Comfyui % ./venv/bin/python main.py
Total VRAM 131072 MB, total RAM 131072 MB
xformers version: 0.0.20
Set vram state to: SHARED
Device: mps
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention

Import times for custom nodes: 0.0 seconds: /Users/michaelroulier/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved

Starting server

To see the GUI go to: http://127.0.0.1:8188
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
got prompt
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
model_type EPS
adm 0
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
missing {'cond_stage_model.text_projection', 'cond_stage_model.logit_scale'}
left over keys: dict_keys(['alphas_cumprod', 'alphas_cumprod_prev', 'betas', 'log_one_minus_alphas_cumprod', 'posterior_log_variance_clipped', 'posterior_mean_coef1', 'posterior_mean_coef2', 'posterior_variance', 'sqrt_alphas_cumprod', 'sqrt_one_minus_alphas_cumprod', 'sqrt_recip_alphas_cumprod', 'sqrt_recipm1_alphas_cumprod'])
[AnimateDiffEvo] - INFO - Loading motion module mm-Stabilized_high.pth
loading new
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (12) less or equal to context_length 16.
[AnimateDiffEvo] - INFO - Injecting motion module mm-Stabilized_high.pth version v1.
loading new
100%|██████████████████████████████████████████████████████████| 20/20 [01:31<00:00, 4.59s/it]
[AnimateDiffEvo] - INFO - Ejecting motion module mm-Stabilized_high.pth version v1.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
/Users/michaelroulier/ComfyUI/comfy/model_base.py:47: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer('betas', torch.tensor(betas, dtype=torch.float32))
/Users/michaelroulier/ComfyUI/comfy/model_base.py:48: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer('alphas_cumprod', torch.tensor(alphas_cumprod, dtype=torch.float32))
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
Prompt executed in 98.54 seconds

Kosinkadink commented 11 months ago

Can you send me a picture of your workflow, as well as the list of custom_nodes you have loaded? Thanks!

Kosinkadink commented 11 months ago

Random hail mary - try disabling the ComfyUI Manager, and git clone the AnimateDiff-Evolved repo from scratch.

dimmm7 commented 11 months ago

Thank you. Unfortunately I got the same black frames after I did what you asked: re-installed the custom node with git clone and deactivated all the other custom nodes (so ComfyUI Manager too). Same symptom: at 8 frames it works, at 16 frames I get black images.

(Two screenshots attached, taken 2023-09-24 at 04:28 and 04:30.)

Here is the whole process of the two animations in the terminal:

Last login: Sun Sep 24 04:21:04 on ttys000
michaelroulier@Mac-Studio-de-Michael ~ % cd Comfyui
michaelroulier@Mac-Studio-de-Michael Comfyui % source venv/bin/activate
(venv) michaelroulier@Mac-Studio-de-Michael Comfyui % ./venv/bin/python main.py
Total VRAM 131072 MB, total RAM 131072 MB
xformers version: 0.0.20
Set vram state to: SHARED
Device: mps
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention

Import times for custom nodes: 0.0 seconds: /Users/michaelroulier/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved

Starting server

To see the GUI go to: http://127.0.0.1:8188
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
got prompt
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
model_type EPS
adm 0
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
missing {'cond_stage_model.logit_scale', 'cond_stage_model.text_projection'}
left over keys: dict_keys(['alphas_cumprod', 'alphas_cumprod_prev', 'betas', 'log_one_minus_alphas_cumprod', 'posterior_log_variance_clipped', 'posterior_mean_coef1', 'posterior_mean_coef2', 'posterior_variance', 'sqrt_alphas_cumprod', 'sqrt_one_minus_alphas_cumprod', 'sqrt_recip_alphas_cumprod', 'sqrt_recipm1_alphas_cumprod'])
[AnimateDiffEvo] - INFO - Loading motion module mm_sd_v14.ckpt
loading new
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (16) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Injecting motion module mm_sd_v14.ckpt version v1.
loading new
100%|███████████████████████████████████████████| 20/20 [02:08<00:00, 6.44s/it]
[AnimateDiffEvo] - INFO - Ejecting motion module mm_sd_v14.ckpt version v1.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
/Users/michaelroulier/ComfyUI/comfy/model_base.py:47: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer('betas', torch.tensor(betas, dtype=torch.float32))
/Users/michaelroulier/ComfyUI/comfy/model_base.py:48: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer('alphas_cumprod', torch.tensor(alphas_cumprod, dtype=torch.float32))
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
Prompt executed in 137.00 seconds
got prompt
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
2
3
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (8) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Injecting motion module mm_sd_v14.ckpt version v1.
loading new
100%|███████████████████████████████████████████| 20/20 [01:02<00:00, 3.12s/it]
[AnimateDiffEvo] - INFO - Ejecting motion module mm_sd_v14.ckpt version v1.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
Prompt executed in 65.38 seconds

dimmm7 commented 11 months ago

I think akak-pixel (three days ago) has the same thing with black frames, and from his list he looks to be on a Mac. I am working with Python 3.10 in the venv, by the way.

dimmm7 commented 11 months ago

Here again is all the terminal output from trying to produce 3 sequences. The 1st and 3rd generations (16 frames) didn't work; the one in the middle (8 frames) went fine.

Last login: Sun Sep 24 04:21:04 on ttys000
michaelroulier@Mac-Studio-de-Michael ~ % cd Comfyui
michaelroulier@Mac-Studio-de-Michael Comfyui % source venv/bin/activate
(venv) michaelroulier@Mac-Studio-de-Michael Comfyui % ./venv/bin/python main.py
Total VRAM 131072 MB, total RAM 131072 MB
xformers version: 0.0.20
Set vram state to: SHARED
Device: mps
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention

Import times for custom nodes: 0.0 seconds: /Users/michaelroulier/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved

Starting server

To see the GUI go to: http://127.0.0.1:8188
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
got prompt
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
model_type EPS
adm 0
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
missing {'cond_stage_model.logit_scale', 'cond_stage_model.text_projection'}
left over keys: dict_keys(['alphas_cumprod', 'alphas_cumprod_prev', 'betas', 'log_one_minus_alphas_cumprod', 'posterior_log_variance_clipped', 'posterior_mean_coef1', 'posterior_mean_coef2', 'posterior_variance', 'sqrt_alphas_cumprod', 'sqrt_one_minus_alphas_cumprod', 'sqrt_recip_alphas_cumprod', 'sqrt_recipm1_alphas_cumprod'])

[AnimateDiffEvo] - INFO - Loading motion module mm_sd_v14.ckpt
loading new
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (16) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Injecting motion module mm_sd_v14.ckpt version v1.
loading new
100%|███████████████████████████████████████████| 20/20 [02:08<00:00, 6.44s/it]
[AnimateDiffEvo] - INFO - Ejecting motion module mm_sd_v14.ckpt version v1.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
/Users/michaelroulier/ComfyUI/comfy/model_base.py:47: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer('betas', torch.tensor(betas, dtype=torch.float32))
/Users/michaelroulier/ComfyUI/comfy/model_base.py:48: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer('alphas_cumprod', torch.tensor(alphas_cumprod, dtype=torch.float32))
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
Prompt executed in 137.00 seconds

got prompt
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
2
3
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (8) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Injecting motion module mm_sd_v14.ckpt version v1.
loading new
100%|███████████████████████████████████████████| 20/20 [01:02<00:00, 3.12s/it]
[AnimateDiffEvo] - INFO - Ejecting motion module mm_sd_v14.ckpt version v1.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
Prompt executed in 65.38 seconds

got prompt
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
2
3
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (16) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Injecting motion module mm_sd_v14.ckpt version v1.
loading new
100%|███████████████████████████████████████████| 20/20 [02:10<00:00, 6.52s/it]
[AnimateDiffEvo] - INFO - Ejecting motion module mm_sd_v14.ckpt version v1.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
Prompt executed in 134.97 seconds

Kosinkadink commented 11 months ago

Hey, yep, looks like you both have the same issue. His issue predates my big refactor on Friday, so it looks like this has always been present. With black images being output, it makes me think that either some tensors are becoming NaNs, or there is an issue with VAE decoding. Not sure if this helps since you might have used this guide yourself, but I would make sure the comfy venv is using the latest pytorch nightly. Image is a screenshot from the ComfyUI readme: https://github.com/comfyanonymous/ComfyUI#apple-mac-silicon
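A quick way to confirm which PyTorch build the venv is actually using, and that MPS is active, is a generic check like this (not something from the repo, just standard PyTorch introspection):

```python
import torch

# Nightly builds carry a ".dev" date tag in the version string.
print("torch version:", torch.__version__)
print("MPS available:", torch.backends.mps.is_available())
print("MPS built into this build:", torch.backends.mps.is_built())
```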


It might also be good to test how many latents you can batch before the black frame issue starts happening. And when you find that limit, go 1 batch size below it, and increase the resolution to see if the issue is resolution-related as well as batch-related.
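For that testing, one way to tell programmatically whether a decoded batch came out black is to check the pixel range. A minimal sketch with dummy tensors standing in for decoded frames (the `is_black` helper is hypothetical, not part of AnimateDiff-Evolved):

```python
import torch

def is_black(frames: torch.Tensor, eps: float = 1e-3) -> bool:
    """True if every pixel is (near) zero, i.e. the batch decoded to black."""
    return frames.abs().max().item() < eps

# Dummy data in place of decoded frames, shaped (N, H, W, C):
black = torch.zeros(16, 64, 64, 3)
normal = torch.rand(8, 64, 64, 3)
print(is_black(black), is_black(normal))  # → True False
```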

dimmm7 commented 11 months ago

I have not tried the latest nightly PyTorch yet; I'll do that today. Anything to make things work. All the rest you suggested I have done. If I lower the resolution (300px, for example) I can run a batch of 16. At 512 I can do a batch of 8; more than that, I get black frames. At 300px the batch of 16 works, but that resolution is too small. So it looks like the issue is resolution-related. I'll try the nightly PyTorch with the argument you suggest, and see how things go. Thank you for your help.

dimmm7 commented 11 months ago

OK, I installed the nightly PyTorch (--force-fp16 breaks ComfyUI, though). So I tried again without fp16 and the results are the same: at 512, black frames. At 300 I can even get 48 frames, but the resolution is so low that it looks quite unusable, unfortunately. All this is strange because I use a lot of other configs with no problem and very fast iterations (1.20 per frame). Probably a Mac thing, but so frustrating.

dimmm7 commented 11 months ago

Kosinkadink, sorry to bother you again. Do you think it's a macOS problem? I have an M1 Ultra, super boosted. Do other Mac users have the same problem? I can't do more testing myself, but I can under your guidance. It's a pity Mac users can't use it, and the Mac community is the most creative. I am using Deforum-like nodes, but I am a bit fed up with the inconsistency between frames. I think I did everything that was possible: clean installs, new nightly PyTorch, no other custom nodes activated, etc. It's definitely a resolution problem. But why?

akak-pixel commented 11 months ago

I have an M1 Mac too and I'm facing a similar issue. I just get black frames no matter what resolution or samples.

dimmm7 commented 11 months ago

I have a M1 mac too and I'm facing a similar issue. I just get black frames no matter what resolution or samples.

I wonder if all Macs face the same issue?

Kosinkadink commented 11 months ago

I'm going to talk to some folks to see if people with Macs have been able to make it work, and get back to you guys.

dimmm7 commented 11 months ago

Thank you!

dimmm7 commented 11 months ago

I guess I'll have to find a PC with Windows... I am sure you are super busy, but is there any hope we could make it work on a Mac M1?

simonjaq commented 11 months ago

Same here. Anything more than 4 frames produces black output. Most recent Pytorch nightly and M1 Ultra with 64G.

Kosinkadink commented 11 months ago

I do not own a Mac, but it sounds like this is some issue with the PyTorch code on Macs. I do not really see any other reason why it would crap out like this specifically on M1 Macs at a certain threshold of pixels in a batch. Or maybe it is a VAE issue that is related in some way.

We're going to need to play a bit of a game of telephone to get to the core of the issue. PyTorch on Mac can have some really wacky bugs, like sometimes the *= operator for tensors just straight up does nothing. But that's going too deep. Here is the game plan:

1) Everyone on this thread: post your exact Mac specs - anything you think is relevant, include it so that we can get that in one shot.
2) Without AD, try to generate at the biggest batch size you can at 512x512/other resolutions, to confirm that it's not a problem without AD / whether there is a limit without AD.
3) With the simple txt2img workflow in the README, report the exact batch size limit at 512x512 that it can generate without running into the black image issue. Then repeat for the limits at 256x256 and 128x128.

The goal is to see if there are any differences in the limit given the various setups of your Macs. If your hardware has some differences but still the limits are exactly the same, then there must be some Mac pytorch bug that we can hopefully report and find a workaround for while they officially fix it.

Once I see a few of your results to confirm, I can make a separate mac-testing branch (or a few different mac-testing branches) with subtly different code and print statements that still does the same thing to see where exactly things go wrong on macs. Your help would be greatly appreciated!
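To make the limit-finding in step 3 less tedious, it can be scripted as a binary search. A sketch where `try_batch` is a hypothetical callback you would wire to your own workflow runner; note this assumes the failure is monotonic in batch size, which is worth verifying first:

```python
def find_black_frame_limit(try_batch, lo=1, hi=64):
    """Binary-search the largest batch size n in [lo, hi] for which
    try_batch(n) succeeds. try_batch(n) should run a generation at batch
    size n and return True if frames come out normal, False if black
    (hypothetical callback - wire it to your own generation code)."""
    best = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        if try_batch(mid):
            best, lo = mid, mid + 1  # works: search higher
        else:
            hi = mid - 1             # black: search lower
    return best

# Dummy stand-in: pretend batches above 8 go black (as reported at 512x512).
print(find_black_frame_limit(lambda n: n <= 8))  # → 8
```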

simonjaq commented 11 months ago

Hi. I will do the tests as suggested. Just in the meantime: ComfyUI also sometimes crashes completely, with the following error in the terminal:

loading new

  5%|5         | 1/20 [00:03<01:01,  3.24s/it]/AppleInternal/Library/BuildRoots/9941690d-bcf7-11ed-a645-863efbbaf80d/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Types/MPSNDArray.mm:706: failed assertion `[MPSTemporaryNDArray initWithDevice:descriptor:] Error: NDArray dimension length > INT_MAX'
Abort trap: 6
(ComfyUI) Simons-Retina-MacBook-Pro:ComfyUI simon$ /opt/homebrew/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '

Specs: Apple M1 Max, 64 GB, macOS 13.3.1. PyTorch freshly installed in a new env. 512x512 batch limit: 4.
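The `NDArray dimension length > INT_MAX` assertion hints that some flattened dimension overflows a 32-bit integer. As a purely speculative back-of-the-envelope check (my own arithmetic, not a confirmed diagnosis): if attention were computed over the spatial tokens of all frames at once, 512x512 at 16 frames would cross that limit while 256x256 would not, which lines up with the crashes reported at 512 but not at 256:

```python
INT_MAX = 2**31 - 1

def attention_elements(px: int, frames: int) -> int:
    # SD latents are 1/8 of the pixel resolution; one token per latent pixel.
    tokens = (px // 8) ** 2 * frames
    return tokens * tokens  # a full attention matrix is tokens x tokens

print(attention_elements(512, 16) > INT_MAX)  # 65536^2 ≈ 4.29e9 → True
print(attention_elements(256, 16) > INT_MAX)  # 16384^2 ≈ 2.68e8 → False
print(attention_elements(512, 8) > INT_MAX)   # 32768^2 ≈ 1.07e9 → False
```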

nathanshipley commented 11 months ago

I tried installing this on an M1 Ultra 128 GB with the same OS as @simonjaq in a fresh environment, nightly pytorch.

I can successfully generate single images and I can run the basic txt2img workflow at 256x256 and get a 16 frame GIF, but at 512x512, it finishes the progress bar then I get the same error.

I notice the python process jumps to about 56 GB of RAM used at 512x512. It's about 8 GB for the 256x256 image. Batch size 16.

(Screenshot attached, 2023-10-05.)

The full output from session launch to crash is like this:

(comfy) nathan % python main.py --force-fp16
** ComfyUI start up time: 2023-10-05 17:52:42.539168

Prestartup times for custom nodes:
   0.0 seconds: /Users/nathan/code/09_ComfyUI/ComfyUI/custom_nodes/ComfyUI-Manager

Total VRAM 131072 MB, total RAM 131072 MB
Forcing FP16.
Set vram state to: SHARED
Device: mps
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
### Loading: ComfyUI-Manager (V0.33)
### ComfyUI Revision: 1525 [48242be5] | Released on '2023-10-05'
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention

Import times for custom nodes:
   0.0 seconds: /Users/nathan/code/09_ComfyUI/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved
   0.1 seconds: /Users/nathan/code/09_ComfyUI/ComfyUI/custom_nodes/ComfyUI-Manager
   6.0 seconds: /Users/nathan/code/09_ComfyUI/ComfyUI/custom_nodes/ComfyUI-VideoHelperSuite

Starting server

To see the GUI go to: http://127.0.0.1:8188
FETCH DATA from: /Users/nathan/code/09_ComfyUI/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json
got prompt
model_type EPS
adm 0
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
missing {'cond_stage_model.logit_scale', 'cond_stage_model.text_projection'}
left over keys: dict_keys(['cond_stage_model.transformer.text_model.embeddings.position_ids', 'model_ema.decay', 'model_ema.num_updates'])
[AnimateDiffEvo] - INFO - Loading motion module mm_sd_v15_v2.ckpt
[AnimateDiffEvo] - INFO - Using fp16, converting motion module to fp16
loading new
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (16) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Injecting motion module mm_sd_v15_v2.ckpt version v2.
loading new
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [02:39<00:00,  8.00s/it]
[AnimateDiffEvo] - INFO - Ejecting motion module mm_sd_v15_v2.ckpt version v2.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
[AnimateDiffEvo] - INFO - Removing motion module mm_sd_v15_v2.ckpt from cache
/Users/nathan/code/09_ComfyUI/ComfyUI/comfy/model_base.py:47: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer('betas', torch.tensor(betas, dtype=torch.float32))
/Users/nathan/code/09_ComfyUI/ComfyUI/comfy/model_base.py:48: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer('alphas_cumprod', torch.tensor(alphas_cumprod, dtype=torch.float32))
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
/AppleInternal/Library/BuildRoots/9941690d-bcf7-11ed-a645-863efbbaf80d/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Types/MPSNDArray.mm:706: failed assertion `[MPSTemporaryNDArray initWithDevice:descriptor:] Error: NDArray dimension length > INT_MAX'
zsh: abort      python main.py --force-fp16
(comfy) nathan@Woohoo-Studio ComfyUI % /Users/nathan/miniconda3/envs/comfy/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
Kosinkadink commented 11 months ago

Good news, I've diagnosed the issue and why it happens - there is indeed a bug with Mac pytorch, which I will try to report soon so hopefully we can get an official fix in the nightly build.

It does not happen where I thought it did, though, and only happens if a particular ComfyUI optimization kicks in. It's not the optimization's fault, but the TL;DR is that Mac pytorch for some reason goes haywire with the shape of a tensor it uses for a specific operation (group_norm). The ComfyUI optimization DOES kick in much sooner than it should - it should hypothetically only kick in when you are expected to run out of memory, but some of you have like 128GB of RAM (which on the M1 is VRAM too) and it kicks in at 512x512 resolution, batch size 16, which should only take 8GB of VRAM. I will ask comfy if he can improve the memory calculation on M1 Macs (if possible), so that the optimization kicks in at the proper time and we can kick the can down the road on the actual Mac pytorch bug.
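For anyone who wants to poke at this directly, here is a minimal probe of the operation named above. The shapes are illustrative (roughly UNet-activation-like), not the exact tensors ComfyUI builds, and it falls back to CPU when MPS is unavailable:

```python
import torch
import torch.nn.functional as F

device = "mps" if torch.backends.mps.is_available() else "cpu"

# Illustrative activation: 16 frames, 320 channels, 64x64 latent resolution.
x = torch.randn(16, 320, 64, 64, device=device)
y = F.group_norm(x, num_groups=32)

print("output shape:", tuple(y.shape))
print("NaNs in group_norm output:", bool(torch.isnan(y).any()))
```

On CPU the NaN check should report False; if it reports True on MPS, that would point at the group_norm path discussed here.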

Either way, I will soon expose a config file to enable a workaround that prevents that specific ComfyUI optimization from ever kicking in. It will mean that in cases where you actually are low on VRAM, it won't optimize to use only about half the VRAM (the optimization is really good), but it does mean you can get AnimateDiff working no problem.

If you want to test it for yourself, all you have to do is go to the ComfyUI-AnimateDiff-Evolved/animatediff/nodes.py file, find AnimateDiffLoaderWithContext, look for unlimited_area_hack, and set it to True. This is what my config file and/or workaround will do in a soon-to-be-released update, so this lets you use it right now.

nathanshipley commented 11 months ago

Tried setting unlimited_area_hack to True in the AnimateDiffLoaderWithContext class and I still get the same crash at 512x512 on the 128 GB M1 Ultra. Same ~56 GB of RAM usage.

Perhaps this is a different issue than the original?

Here's that crash:

100%|██████████████████████████| 20/20 [02:37<00:00,  7.89s/it]
[AnimateDiffEvo] - INFO - Ejecting motion module mm_sd_v15_v2.ckpt version v2.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
[AnimateDiffEvo] - INFO - Removing motion module mm_sd_v15_v2.ckpt from cache
/AppleInternal/Library/BuildRoots/9941690d-bcf7-11ed-a645-863efbbaf80d/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Types/MPSNDArray.mm:706: failed assertion `[MPSTemporaryNDArray initWithDevice:descriptor:] Error: NDArray dimension length > INT_MAX'
zsh: abort      python main.py --force-fp16
(comfy) nathan@Woohoo-Studio ComfyUI % /Users/nathan/miniconda3/envs/comfy/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '

Not sure if it's relevant or related, but there was also a CUDA / nvcc error I saw when I first installed AnimateDiff-Evolved. Pasting the terminal output here:

To see the GUI go to: http://127.0.0.1:8188
FETCH DATA from: /Users/nathan/code/09_ComfyUI/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
Install custom node 'AnimateDiff (Kosinkadink version)'
install: ['https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved']
Download: git clone 'https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved'
Installation was successful.

## ComfyUI-Manager: EXECUTE => ['/Users/nathan/miniconda3/envs/comfy/bin/python', '-m', 'pip', 'install', 'flash_attn']
 Collecting flash_attn
   Downloading flash_attn-2.3.1.post1.tar.gz (2.3 MB)
      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.3/2.3 MB 21.5 MB/s eta 0:00:00
   Preparing metadata (setup.py): started
   Preparing metadata (setup.py): finished with status 'error'
[!]   error: subprocess-exited-with-error
[!]   
[!]   × python setup.py egg_info did not run successfully.
[!]   │ exit code: 1
[!]   ╰─> [20 lines of output]
[!]       fatal: not a git repository (or any of the parent directories): .git
[!]       /private/var/folders/sr/25b3dd2s08lc6j0yqxyttb3w0000gn/T/pip-install-axrmvhk6/flash-attn_ef4069aff0344e6eb6ac9a013f539817/setup.py:79: UserWarning: flash_attn was requested, but nvcc was not found.  Are you sure your environment has nvcc available?  If you're installing within a container from https://hub.docker.com/r/pytorch/pytorch, only images whose names contain 'devel' will provide nvcc.
[!]         warnings.warn(
[!]       Traceback (most recent call last):
[!]         File "<string>", line 2, in <module>
[!]         File "<pip-setuptools-caller>", line 34, in <module>
[!]         File "/private/var/folders/sr/25b3dd2s08lc6j0yqxyttb3w0000gn/T/pip-install-axrmvhk6/flash-attn_ef4069aff0344e6eb6ac9a013f539817/setup.py", line 136, in <module>
[!]           CUDAExtension(
[!]         File "/Users/nathan/miniconda3/envs/comfy/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1076, in CUDAExtension
[!]           library_dirs += library_paths(cuda=True)
[!]         File "/Users/nathan/miniconda3/envs/comfy/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1203, in library_paths
[!]           if (not os.path.exists(_join_cuda_home(lib_dir)) and
[!]         File "/Users/nathan/miniconda3/envs/comfy/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 2415, in _join_cuda_home
[!]           raise OSError('CUDA_HOME environment variable is not set. '
[!]       OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.
[!]       
[!]       
[!]       torch.__version__  = 2.2.0.dev20231005
[!]       
[!]       
[!]       [end of output]
[!]   
[!]   note: This error originates from a subprocess, and is likely not a problem with pip.
[!] error: metadata-generation-failed
[!] 
[!] × Encountered error while generating package metadata.
[!] ╰─> See above for output.
[!] 
[!] note: This is an issue with the package mentioned above, not pip.
[!] hint: See above for details.
install script failed: https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved
After restarting ComfyUI, please refresh the browser.
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
^C
Stopped server
Kosinkadink commented 11 months ago

@nathanshipley I think your issue may be different from the ones the others had. AnimateDiff-Evolved has no external dependencies aside from ComfyUI, so I'm not sure why ComfyUI Manager is trying to install things.

simonjaq commented 11 months ago

I managed to generate 12 frames at 512x512. Memory consumption is insane, almost maxing out my 64GB M1 Ultra and going up to 60GB for Comfy, but it works.

tonytitani commented 11 months ago

I've tried reinstalling ComfyUI and AnimateDiff multiple times, but when I try to load the sample set-up workflow, I always get this error: When loading the graph, the following node types were not found: ADE_AnimateDiffLoaderWithContext

I tried 'installing missing nodes' and it shows that it is already installed, but I just am not able to use the AnimateDiff Loader for some reason. On a Mac Studio M2. Otherwise ComfyUI seems to be working (through Pinokio).

(Screenshot attached, 2023-10-07.)
Kosinkadink commented 11 months ago

Look in the console; it will let you know what went wrong when it tried to initialize the nodes.

tonytitani commented 11 months ago

Appreciate your response. These were the errors I was getting in the console; could they be related to the issue?

rSaita commented 11 months ago

Hello, I have an M2 Pro Mac and I only get black image animations, although ComfyUI generates still images just fine. Here is the Terminal output after trying to generate a 256x256 animation that resulted in a black clip (hope that helps):

got prompt 2
model_type EPS
adm 0
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
missing {'cond_stage_model.text_projection', 'cond_stage_model.logit_scale'}
left over keys: dict_keys(['alphas_cumprod', 'alphas_cumprod_prev', 'betas', 'cond_stage_model.transformer.text_model.embeddings.position_ids', 'log_one_minus_alphas_cumprod', 'model_ema.decay', 'model_ema.num_updates', 'posterior_log_variance_clipped', 'posterior_mean_coef1', 'posterior_mean_coef2', 'posterior_variance', 'sqrt_alphas_cumprod', 'sqrt_one_minus_alphas_cumprod', 'sqrt_recip_alphas_cumprod', 'sqrt_recipm1_alphas_cumprod'])
[AnimateDiffEvo] - INFO - Loading motion module mm_sd_v15_v2.ckpt
[AnimateDiffEvo] - INFO - Using fp16, converting motion module to fp16
loading new
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (16) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Injecting motion module mm_sd_v15_v2.ckpt version v2.
loading new
 90%|███████████████████████████████████████████████████████████████████████████▌        | 18/20 [01:47<00:11,  5.92s/it]
/opt/homebrew/lib/python3.11/site-packages/torchsde/_brownian/brownian_interval.py:599: UserWarning: Should have ta>=t0 but got ta=0.029167519882321358 and t0=0.029168.
  warnings.warn(f"Should have ta>=t0 but got ta={ta} and t0={self._start}.")
100%|████████████████████████████████████████████████████████████████████████████████████| 20/20 [01:56<00:00,  5.84s/it]
[AnimateDiffEvo] - INFO - Ejecting motion module mm_sd_v15_v2.ckpt version v2.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
[AnimateDiffEvo] - INFO - Removing motion module mm_sd_v15_v2.ckpt from cache
Prompt executed in 127.48 seconds

Please advise. Thanks

Kosinkadink commented 11 months ago

@rSaita Have you attempted using the current workaround of setting unlimited_area_hack to True in the code mentioned in this thread above? https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/issues/48#issuecomment-1750156332

Looks like the bug affects both M1 and M2 Macs, since they both use the same build of Mac PyTorch. Let me know if that workaround works for you!
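For anyone looking for where the workaround lives: it boils down to hand-editing one boolean in the extension's sampling code. A sketch of the idea only; the actual file and surrounding code in ComfyUI-AnimateDiff-Evolved differ between versions, so treat everything except the flag name as illustrative:

```python
# Illustrative sketch -- not the actual ComfyUI-AnimateDiff-Evolved source.
# The workaround is a one-line hand edit: wherever the extension sets this
# flag, force it to True. That makes ComfyUI treat the whole latent as one
# area and skip its area-splitting VRAM optimization, which is the code
# path where the MPS black-frame bug appears to be triggered.
unlimited_area_hack = True  # was False (or a computed value)

print(unlimited_area_hack)  # True
```

The trade-off, as discussed below, is that with the hack enabled ComfyUI's VRAM optimizations can't run, so memory usage goes up.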

rSaita commented 11 months ago

@Kosinkadink I just tested setting unlimited_area_hack to True, and it generated a correct animation at 256x256, but when I then tried to generate a 512x512 clip, it gave me the Terminal and on-screen messages I've attached.

Terminal.pdf OnScreen Error.pdf

Kosinkadink commented 11 months ago

@Kosinkadink I just tested setting unlimited_area_hack to True, and it generated a correct animation at 256x256, but when I then tried to generate a 512x512 clip, it gave me the Terminal and on-screen messages I've attached.

Terminal.pdf OnScreen Error.pdf

Looks like you may not have enough VRAM/RAM to run AnimateDiff while the PyTorch bug prevents the use of Comfy's optimizations (Comfy's VRAM optimizations are not allowed to run when unlimited_area_hack is true). M1/M2 Macs use the same unified memory for VRAM and RAM, and the RAM requirements are likely not leaving enough free space for the VRAM requirements. With no VRAM optimizations, 512x512 at batch size 16 in fp16 takes ~8GB of VRAM, so your 18GB Mac would need everything else running on it, Comfy included, to use less than 10GB of RAM/VRAM.
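Putting rough numbers on that budget (the figures below are the estimates quoted above, not measurements):

```python
# Back-of-the-envelope unified-memory budget for an 18 GB Apple Silicon Mac,
# using the rough figure quoted above (~8 GB for 512x512, batch size 16,
# fp16, with Comfy's VRAM optimizations disabled by unlimited_area_hack).
total_unified_gb = 18
animatediff_sampling_gb = 8

headroom_gb = total_unified_gb - animatediff_sampling_gb
print(headroom_gb)  # 10 -- macOS, ComfyUI itself, the loaded checkpoint and
                    # everything else must fit in this, or sampling fails
```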

rSaita commented 11 months ago

@Kosinkadink Ok, I see, thank you for the explanation. So, will you be contacting the PyTorch developers so they can fix that bug? Or is there another solution?

Mackay031 commented 10 months ago

@Kosinkadink I forgot to thank you for the unlimited_area_hack=True workaround above. Worked for me. Also, running python main.py --use-split-cross-attention seems kinder on my Mac's GPU, although I couldn't say why :)

dimmm7 commented 10 months ago

@Mackay031 for how many frames and at what resolution? Because it really didn't work for me, even after doing all the optimizations Kosinkadink was asking for. And I tried on three different M1/M2 Macs. It worked great, though, on a friend's Windows machine.

dimmm7 commented 10 months ago

Oh, and what is a Mac GPU? Do you mean an M1/M2 or an older Mac? If so, what GPU was it? Can only very old Macs use NVIDIA? Frankly, I don't understand.

aianimation55 commented 10 months ago

Just wanted to comment to say the temporary fix did work for me on my Macbook (M1 Max - 32Gb) and stopped the black frame issue.

On my new M2 Ultra (128gb) however, the black frames issue didn't happen. Or at least infrequently.

However, on both machines, which can generate high-quality stills without issue in ComfyUI, when I connect AnimateDiff the first attempt at generating is generally OK. Then after 2-3 generations it gets steadily worse and much more blocky, as if a 'cubist art style' prompt were being applied. I think it's a case of VRAM filling up, or a memory leak somewhere.

Restarting ComfyUI doesn't solve the issue. The only way seems to be to restart the computer.

*Also, and this may have been a coincidence (testing takes a while, particularly with a possible memory issue), I seemed to be getting less blocky generations out of mm_sd_v14 compared to mm_sd_v15 and mm_sd_v15_v2 as the model applied to the AnimateDiff node.

Massive thanks to the developer (developers?) working on this. It's awesome.

Though I do wonder if I should pause learning much more and jump back to Automatic1111 in the meantime, as I don't think an Nvidia PC is arriving here any time soon :-D

Kosinkadink commented 10 months ago

@aianimation55 It sounds like the motion modules are not getting ejected properly, which should never happen. Can you give me a list of the folders in your custom_nodes folder in ComfyUI?

Also, please send a screenshot of the workflow you are trying to run.

aianimation55 commented 10 months ago

Thanks Kosinkadink.

Screenshot 2023-10-13 at 10 26 58

Folders in my custom_nodes folder:

ComfyUI-AnimateDiff-Evolved ComfyUI-Manager ComfyUI-VideoHelperSuite

Kosinkadink commented 10 months ago

Your context_length needs to be around 16, and you need to pass in at least around 16 latents. The motion/images produced by the motion module also depend on the number of frames being processed at a time. The sweet spot for AnimateDiff (not HotshotXL) is 16 frames at a time; 5 frames will make everything deep-fried. Also, for AnimateDiff it's recommended to use the sqrt_linear beta_schedule. linear will produce washed-out results for AnimateDiff motion modules, which could be artsy, but just a heads up.

I think that's the issue you have, and the code is working fine.
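The recommendations above can be summarized as data. The key names here are descriptive labels, not the exact ComfyUI node/widget names:

```python
# Recommended AnimateDiff (SD1.5, not HotshotXL) settings from the comment
# above. Key names are descriptive, not exact ComfyUI widget names.
recommended = {
    "context_length": 16,            # sweet spot for AnimateDiff motion modules
    "min_latents": 16,               # pass in at least ~16 latents
    "beta_schedule": "sqrt_linear",  # "linear" gives washed-out results
}

print(recommended["beta_schedule"])  # sqrt_linear
```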

aianimation55 commented 10 months ago

Thanks @Kosinkadink. That's great, thank you for the extra direction. I'd tried a LOT of variables. And yep, I saw your note about the sqrt_linear beta_schedule; I'd just been trying out the others in case one helped fix my issue.

I'll jump back on this over the weekend. Cheers.

aianimation55 commented 10 months ago

@Kosinkadink Thanks again, that worked well. It crashes if I try to go above 640x360, but it successfully generates a high-quality series of frames without issue at that size. I need to explore some upscaling approaches that keep the Mac happy, or rely on Topaz AI. Cheers.

Kosinkadink commented 10 months ago

For anyone experiencing that crash, try the --use-split-cross-attention startup argument for ComfyUI.

hellovincentlee commented 10 months ago

Kosinkadink, sorry to bother you again. Do you think it's a macOS problem? I have a maxed-out M1 Ultra. Do other Mac users have the same problem? I can't do more testing myself, but I can under your guidance. It's a pity Mac users can't use it, as the Mac community is among the most creative. I am using Deforum-like nodes, but I am a bit fed up with the inconsistency between frames. I think I did everything possible: clean installs, new nightly PyTorch, no other custom nodes activated, etc. It's definitely a resolution problem. But why?

I don't think it's a problem with the M1/M2. When I use AnimateDiff with the WebUI, it works and there are no errors at all. I don't know whether the logic of AnimateDiff differs between the WebUI and ComfyUI, but I can confirm that an M1 is capable of producing good results with AnimateDiff.

jiangdi0924 commented 10 months ago

M1 Max 64GB device, with --use-split-cross-attention: it only works at 256x256 resolution; at other sizes, for example 512x512, the images are pure black. There are also the following messages on the command line.

image

/Users/XXX/OpenSoucre/ComfyUI/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/nodes.py:106: RuntimeWarning: invalid value encountered in cast
  img = Image.fromarray(np.clip(img, 0, 255).astype(np.uint8))
/Users/XXX/OpenSoucre/ComfyUI/nodes.py:1300: RuntimeWarning: invalid value encountered in cast
  img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))
Prompt executed in 233.84 seconds
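That RuntimeWarning is the tell: it is what NumPy prints when the decoded frames contain NaNs. NaNs pass through np.clip untouched and then hit the uint8 cast, producing undefined pixel values (typically 0, i.e. black). A minimal reproduction of the mechanism:

```python
import numpy as np

# Simulate a decoded frame where MPS sampling produced NaNs.
frame = np.full((2, 2), np.nan, dtype=np.float32)

# np.clip does not remove NaNs -- they propagate straight through...
clipped = np.clip(frame, 0, 255)
print(bool(np.isnan(clipped).all()))  # True

# ...so the subsequent .astype(np.uint8) in nodes.py casts undefined values.
# On recent NumPy this is exactly the "invalid value encountered in cast"
# RuntimeWarning seen in the log; the resulting pixels are typically 0 (black).
```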

jiangdi0924 commented 10 months ago


UPDATE: with unlimited_area_hack=True, 512x512 then worked

Lichtfabrik commented 10 months ago

M2 ultra 128GB device // Had the same issue with black images.

After setting unlimited_area_hack=True, it works (most of the time)

Omhet commented 9 months ago

M2 Pro, 16GB, Sonoma; had this issue with black images even at 256x256.

I confirm that unlimited_area_hack=True works. However, it takes ~30 minutes to generate a 512x512, 16-frame animation from the basic txt2img workflow.

Anyway, @Kosinkadink thank you for this beautiful tool and this workaround!

Omhet commented 9 months ago

Btw, I tried the lcm-lora from this tutorial, and 512x512 16-frame generation improved from 30 minutes to 12 minutes.

foglerek commented 9 months ago

Hey all,

Just wanted to add that the fix described above (unlimited_area_hack=True) works well for 512x512 and SD1.5.

However, with SDXL I continue to get black images (in the output and the UI). I have tried 512x512, 1024x1024, and various SDXL models (base, turbo, custom checkpoints) along with the mm_sdxl_v10_beta and hsxl motion models.

Always with 16 context_length and 16 batch_size. Any ideas?

I'm on an M1 Max 64GB, using torch nightly, etc. SDXL works fine at batch size 16 if I bypass AnimateDiff.

iammikomaestro commented 8 months ago

@Kosinkadink please help me too. I have been reading the threads and have followed the solutions, but I am still getting black images. I'm also on macOS.

Here is my terminal:

Last login: Wed Dec 20 00:46:34 on ttys001
(base) miko@Mikes-MacBook-Air stable-diffusion-webui % git pull
remote: Enumerating objects: 7, done.
remote: Counting objects: 100% (7/7), done.
remote: Compressing objects: 100% (5/5), done.
remote: Total 7 (delta 2), reused 6 (delta 2), pack-reused 0
Unpacking objects: 100% (7/7), 3.47 KiB | 508.00 KiB/s, done.
From https://github.com/AUTOMATIC1111/stable-diffusion-webui
 * [new branch]        reorder-post-processing-modules -> origin/reorder-post-processing-modules
Already up to date.
(base) miko@Mikes-MacBook-Air stable-diffusion-webui % ./webui.sh

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################

################################################################
Running on miko user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
Create and activate python venv
################################################################

################################################################
Launching launch.py...
################################################################ Python 3.10.6 (v3.10.6:9c7b4bd164, Aug  1 2022, 17:13:48) [Clang 13.0.0 (clang-1300.0.29.30)] Version: v1.7.0 Commit hash: cf2772fab0af5573da775e7437e6acdca424f26e Installing sd-webui-controlnet requirement: changing opencv-python version from 4.7.0.72 to 4.8.0 Requirement already satisfied: insightface==0.7.3 in ./venv/lib/python3.10/site-packages (from -r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (0.7.3) Requirement already satisfied: onnx==1.14.0 in ./venv/lib/python3.10/site-packages (from -r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 2)) (1.14.0) Requirement already satisfied: onnxruntime==1.15.0 in ./venv/lib/python3.10/site-packages (from -r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 3)) (1.15.0) Collecting opencv-python==4.7.0.72   Using cached opencv_python-4.7.0.72-cp37-abi3-macosx_11_0_arm64.whl (32.6 MB) Requirement already satisfied: ifnude in ./venv/lib/python3.10/site-packages (from -r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 5)) (0.0.3) Requirement already satisfied: cython in ./venv/lib/python3.10/site-packages (from -r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 6)) (3.0.2) Requirement already satisfied: tqdm in ./venv/lib/python3.10/site-packages (from insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (4.66.1) Requirement already satisfied: matplotlib in ./venv/lib/python3.10/site-packages (from insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (3.7.3) Requirement already satisfied: Pillow in ./venv/lib/python3.10/site-packages (from insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (9.5.0) Requirement 
already satisfied: requests in ./venv/lib/python3.10/site-packages (from insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (2.31.0) Requirement already satisfied: easydict in ./venv/lib/python3.10/site-packages (from insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (1.10) Requirement already satisfied: albumentations in ./venv/lib/python3.10/site-packages (from insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (1.3.1) Requirement already satisfied: prettytable in ./venv/lib/python3.10/site-packages (from insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (3.9.0) Requirement already satisfied: scikit-learn in ./venv/lib/python3.10/site-packages (from insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (1.3.1) Requirement already satisfied: numpy in ./venv/lib/python3.10/site-packages (from insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (1.23.5) Requirement already satisfied: scipy in ./venv/lib/python3.10/site-packages (from insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (1.11.2) Requirement already satisfied: scikit-image in ./venv/lib/python3.10/site-packages (from insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (0.21.0) Requirement already satisfied: protobuf>=3.20.2 in ./venv/lib/python3.10/site-packages (from onnx==1.14.0->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 2)) (3.20.3) Requirement already satisfied: typing-extensions>=3.6.2.1 in ./venv/lib/python3.10/site-packages (from onnx==1.14.0->-r 
/Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 2)) (4.7.1) Requirement already satisfied: packaging in ./venv/lib/python3.10/site-packages (from onnxruntime==1.15.0->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 3)) (23.1) Requirement already satisfied: sympy in ./venv/lib/python3.10/site-packages (from onnxruntime==1.15.0->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 3)) (1.12) Requirement already satisfied: coloredlogs in ./venv/lib/python3.10/site-packages (from onnxruntime==1.15.0->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 3)) (15.0.1) Requirement already satisfied: flatbuffers in ./venv/lib/python3.10/site-packages (from onnxruntime==1.15.0->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 3)) (23.5.26) Requirement already satisfied: opencv-python-headless>=4.5.1.48 in ./venv/lib/python3.10/site-packages (from ifnude->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 5)) (4.8.1.78) Requirement already satisfied: qudida>=0.0.4 in ./venv/lib/python3.10/site-packages (from albumentations->insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (0.0.4) Requirement already satisfied: PyYAML in ./venv/lib/python3.10/site-packages (from albumentations->insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (6.0.1) Requirement already satisfied: lazy_loader>=0.2 in ./venv/lib/python3.10/site-packages (from scikit-image->insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (0.3) Requirement already satisfied: PyWavelets>=1.1.1 in ./venv/lib/python3.10/site-packages (from scikit-image->insightface==0.7.3->-r 
/Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (1.4.1) Requirement already satisfied: tifffile>=2022.8.12 in ./venv/lib/python3.10/site-packages (from scikit-image->insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (2023.8.30) Requirement already satisfied: imageio>=2.27 in ./venv/lib/python3.10/site-packages (from scikit-image->insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (2.31.3) Requirement already satisfied: networkx>=2.8 in ./venv/lib/python3.10/site-packages (from scikit-image->insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (3.1) Requirement already satisfied: humanfriendly>=9.1 in ./venv/lib/python3.10/site-packages (from coloredlogs->onnxruntime==1.15.0->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 3)) (10.0) Requirement already satisfied: kiwisolver>=1.0.1 in ./venv/lib/python3.10/site-packages (from matplotlib->insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (1.4.5) Requirement already satisfied: fonttools>=4.22.0 in ./venv/lib/python3.10/site-packages (from matplotlib->insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (4.42.1) Requirement already satisfied: contourpy>=1.0.1 in ./venv/lib/python3.10/site-packages (from matplotlib->insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (1.1.0) Requirement already satisfied: cycler>=0.10 in ./venv/lib/python3.10/site-packages (from matplotlib->insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (0.11.0) Requirement already satisfied: pyparsing>=2.3.1 in ./venv/lib/python3.10/site-packages (from 
matplotlib->insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (3.1.1) Requirement already satisfied: python-dateutil>=2.7 in ./venv/lib/python3.10/site-packages (from matplotlib->insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (2.8.2) Requirement already satisfied: wcwidth in ./venv/lib/python3.10/site-packages (from prettytable->insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (0.2.6) Requirement already satisfied: idna<4,>=2.5 in ./venv/lib/python3.10/site-packages (from requests->insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (3.4) Requirement already satisfied: charset-normalizer<4,>=2 in ./venv/lib/python3.10/site-packages (from requests->insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (3.2.0) Requirement already satisfied: urllib3<3,>=1.21.1 in ./venv/lib/python3.10/site-packages (from requests->insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (1.26.16) Requirement already satisfied: certifi>=2017.4.17 in ./venv/lib/python3.10/site-packages (from requests->insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (2023.7.22) Requirement already satisfied: joblib>=1.1.1 in ./venv/lib/python3.10/site-packages (from scikit-learn->insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (1.3.2) Requirement already satisfied: threadpoolctl>=2.0.0 in ./venv/lib/python3.10/site-packages (from scikit-learn->insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (3.2.0) Requirement already satisfied: mpmath>=0.19 in 
./venv/lib/python3.10/site-packages (from sympy->onnxruntime==1.15.0->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 3)) (1.3.0) Requirement already satisfied: six>=1.5 in ./venv/lib/python3.10/site-packages (from python-dateutil>=2.7->matplotlib->insightface==0.7.3->-r /Users/miko/stable-diffusion-webui/extensions/sd-webui-roop/requirements.txt (line 1)) (1.16.0) Installing collected packages: opencv-python   Attempting uninstall: opencv-python     Found existing installation: opencv-python 4.8.1.78     Uninstalling opencv-python-4.8.1.78:       Successfully uninstalled opencv-python-4.8.1.78 Successfully installed opencv-python-4.7.0.72 Launching Web UI with arguments: --no-half --skip-torch-cuda-test --upcast-sampling --no-half-vae --medvram --opt-split-attention-v1 no module 'xformers'. Processing without... no module 'xformers'. Processing without... No module 'xformers'. Proceeding without it. Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled ControlNet preprocessor location: /Users/miko/stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/downloads 2023-12-20 01:04:06,524 - ControlNet - INFO - ControlNet v1.1.423 2023-12-20 01:04:06,577 - ControlNet - INFO - ControlNet v1.1.423 2023-12-20 01:04:06,873 - roop - INFO - roop v0.0.2 2023-12-20 01:04:06,897 - roop - INFO - roop v0.0.2 Loading weights [ec41bd2a82] from /Users/miko/stable-diffusion-webui/models/Stable-diffusion/photon_v1.safetensors 2023-12-20 01:04:06,922 - AnimateDiff - INFO - Injecting LCM to UI. 2023-12-20 01:04:07,177 - AnimateDiff - INFO - Hacking i2i-batch. Creating model from config: /Users/miko/stable-diffusion-webui/configs/v1-inference.yaml Deforum ControlNet support: enabled Running on local URL:  http://127.0.0.1:7860   To create a public link, set share=True in launch(). [sd-webui-comfyui] Started callback listeners for process webui [sd-webui-comfyui] Starting subprocess for comfyui... 
Startup time: 13.0s (prepare environment: 3.7s, import torch: 2.1s, import gradio: 0.5s, setup paths: 0.6s, initialize shared: 0.2s, other imports: 3.7s, load scripts: 1.0s, create ui: 0.6s, gradio launch: 0.4s). [ComfyUI] [sd-webui-comfyui] Setting up IPC... [ComfyUI] [sd-webui-comfyui] Using inter-process communication strategy: Shared memory [ComfyUI] [sd-webui-comfyui] Started callback listeners for process comfyui [ComfyUI] [sd-webui-comfyui] Patching ComfyUI... [ComfyUI] Total VRAM 8192 MB, total RAM 8192 MB [ComfyUI] Set vram state to: SHARED [ComfyUI] Device: mps [ComfyUI] VAE dtype: torch.float32 [ComfyUI] Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention [ComfyUI] [sd-webui-comfyui] Launching ComfyUI with arguments: --listen 127.0.0.1 --port 8189 [ComfyUI] ComfyUI startup time: 2023-12-20 01:04:10.781451 [ComfyUI] Platform: Darwin [ComfyUI] Python version: 3.10.6 (v3.10.6:9c7b4bd164, Aug  1 2022, 17:13:48) [Clang 13.0.0 (clang-1300.0.29.30)] [ComfyUI] Python executable: /Users/miko/stable-diffusion-webui/venv/bin/python3.10 [ComfyUI] ** Log path: /Users/miko/stable-diffusion-webui/extensions/sd-webui-comfyui/ComfyUI/comfyui.log [ComfyUI] ### Loading: ComfyUI-Manager (V1.15) [ComfyUI] ### ComfyUI Revision: 1840 [9a7619b7] | Released on '2023-12-19' [ComfyUI] FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json [ComfyUI] FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json [ComfyUI] FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json [ComfyUI] FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json [ComfyUI] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json [ComfyUI] [ComfyUI-Manager] default cache updated: 
https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json [ComfyUI] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json [ComfyUI] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json [ComfyUI] FizzleDorf Custom Nodes: Loaded [ComfyUI] Import times for custom nodes: [ComfyUI]    0.0 seconds: /Users/miko/stable-diffusion-webui/extensions/sd-webui-comfyui/comfyui_custom_nodes/webui_save_image.py [ComfyUI]    0.0 seconds: /Users/miko/stable-diffusion-webui/extensions/sd-webui-comfyui/comfyui_custom_nodes/webui_io.py [ComfyUI]    0.0 seconds: /Users/miko/stable-diffusion-webui/extensions/sd-webui-comfyui/comfyui_custom_nodes/webui_proxy_nodes.py [ComfyUI]    0.0 seconds: /Users/miko/stable-diffusion-webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved [ComfyUI]    0.1 seconds: /Users/miko/stable-diffusion-webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-VideoHelperSuite [ComfyUI]    0.2 seconds: /Users/miko/stable-diffusion-webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager [ComfyUI]    0.4 seconds: /Users/miko/stable-diffusion-webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI_FizzNodes [ComfyUI] [ComfyUI] Starting server   [ComfyUI] To see the GUI go to: http://127.0.0.1:8189 Loading VAE weights specified in settings: /Users/miko/stable-diffusion-webui/models/VAE/vae-ft-ema-560000-ema-pruned.ckpt Applying attention optimization: V1... done. Model loaded in 8.6s (load weights from disk: 0.3s, create model: 0.8s, apply weights to model: 4.3s, apply float(): 1.5s, load VAE: 0.5s, calculate empty prompt: 1.0s). 
[ComfyUI] FETCH DATA from: /Users/miko/stable-diffusion-webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json
[ComfyUI] got prompt
[ComfyUI] model_type EPS
[ComfyUI] adm 0
[ComfyUI] Using split attention in VAE
[ComfyUI] Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
[ComfyUI] Using split attention in VAE
[ComfyUI] missing {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
[ComfyUI] left over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
[ComfyUI] Prompt executed in 7.84 seconds
[ComfyUI] got prompt
[ComfyUI] Requested to load SD1ClipModel
[ComfyUI] Loading 1 new model
[ComfyUI] Requested to load SD1ClipModel
[ComfyUI] Loading 1 new model
[ComfyUI] unload clone 0
[ComfyUI] Requested to load BaseModel
[ComfyUI] Loading 1 new model
 25%|███████████                                 | 2/8 [06:57<20:51, 208.61s/it]
[ComfyUI] Prompt executed in 437.41 seconds
[ComfyUI] got prompt
[AnimateDiffEvo] - INFO - Loading motion module mm_sd_v14.ckpt
/Users/miko/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  return self.fget.__get__(instance, owner)()
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (16) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Using motion module mm_sd_v14.ckpt version v1.
[ComfyUI] Requested to load BaseModel
[ComfyUI] Requested to load AnimateDiffModel
[ComfyUI] Loading 2 new models
[ComfyUI] unload clone 0
 75%|█████████████████████████████████           | 6/8 [10:17<03:32, 106.36s/it]
Python(8493) MallocStackLogging: can't turn off malloc stack logging because it was not enabled.
Python(8496) MallocStackLogging: can't turn off malloc stack logging because it was not enabled.
Python(8497) MallocStackLogging: can't turn off malloc stack logging because it was not enabled.
Python(8498) MallocStackLogging: can't turn off malloc stack logging because it was not enabled.
Python(8499) MallocStackLogging: can't turn off malloc stack logging because it was not enabled.
Python(8500) MallocStackLogging: can't turn off malloc stack logging because it was not enabled.
[ComfyUI] FETCH DATA from: /Users/miko/stable-diffusion-webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json
[ComfyUI] FETCH DATA from: /Users/miko/stable-diffusion-webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json
[ComfyUI] FETCH DATA from: /Users/miko/stable-diffusion-webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json
[ComfyUI] FETCH DATA from: /Users/miko/stable-diffusion-webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json
[ComfyUI] FETCH DATA from: /Users/miko/stable-diffusion-webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json
[ComfyUI] FETCH DATA from: /Users/miko/stable-diffusion-webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json
[ComfyUI] FETCH DATA from: /Users/miko/stable-diffusion-webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json
[ComfyUI] FETCH DATA from: /Users/miko/stable-diffusion-webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json
[ComfyUI] FETCH DATA from: /Users/miko/stable-diffusion-webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json
[ComfyUI] FETCH DATA from: /Users/miko/stable-diffusion-webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json
[ComfyUI] FETCH DATA from: /Users/miko/stable-diffusion-webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json
[ComfyUI] registered ws - sandbox_tab - d2427e95d3b4496e82e2369f556f3b1e
[ComfyUI] registered ws - before_save_image_txt2img - 97ad0044efd14c42944fd9a03223ff7b
[ComfyUI] registered ws - postprocess_image_txt2img - 21d4d910cef640afa917c4091951decc
[ComfyUI] registered ws - preprocess_latent_img2img - 3626d8367e474e6e88ace4ed6372b7c0
[ComfyUI] registered ws - postprocess_latent_img2img - 9fb706587aac4d33b7c6cb1355406444
[ComfyUI] registered ws - postprocess_latent_txt2img - 9f5b0c91016346d2950a254e7de0441d
[ComfyUI] registered ws - postprocess_image_img2img - b7b0c578bdd849adb0cbec14e952bf7a
[ComfyUI] registered ws - postprocess_txt2img - c5a3307d263b4874b28296da39503e43
[ComfyUI] registered ws - postprocess_img2img - b4c3a74ed9864b45968252ec1fa6a1d6
[ComfyUI] registered ws - preprocess_img2img - 154e150535174f8bb14c3bf8590a9abc
[ComfyUI] registered ws - before_save_image_img2img - 0fc4dd29f75b4df6be4829cab135f013
Python(8509) MallocStackLogging: can't turn off malloc stack logging because it was not enabled.
Python(8510) MallocStackLogging: can't turn off malloc stack logging because it was not enabled.
Python(8511) MallocStackLogging: can't turn off malloc stack logging because it was not enabled.
Python(8512) MallocStackLogging: can't turn off malloc stack logging because it was not enabled.
Python(8513) MallocStackLogging: can't turn off malloc stack logging because it was not enabled.
 88%|██████████████████████████████████████▌     | 7/8 [12:32<01:55, 115.77s/it]

Screenshot 2023-12-20 at 1 34 17 AM

I have tried setting `unlimited_area_hack=True` as well; same black image.

I'm using a Mac with an M1 chip and 8 GB of memory, running macOS Sonoma 14.2.

Kosinkadink commented 7 months ago

With the latest AnimateDiff-Evolved update as of an hour or so ago, v1, v2, and v3 AnimateDiff models should now work on Mac M1/M2/M3, based on tests done with one person who owns an Apple Silicon Mac. v2 and v3 models don't require the hack at all, but v1 models will automatically trigger the unlimited area hack, which should prevent black images. The underlying issue that causes the black images is somewhere inside PyTorch and extremely hard to reproduce outside of ComfyUI: I could not come up with a way to reproduce it in isolation, so I could not report it properly to the PyTorch team.
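For anyone checking whether the fix took on their machine, here is a minimal sketch (not part of AnimateDiff-Evolved; the `is_black_frame` helper and the assumption that pixel values are normalized to [0, 1] are mine) that flags all-black output frames. On MPS, the failure typically shows up as zeros or NaNs in the decoded images, and NaN-poisoned frames also render black:

```python
import math

def is_black_frame(frame, tol=1e-3):
    """frame: iterable of pixel values in [0, 1]. Returns True if every
    pixel is NaN or below `tol`, i.e. the frame is effectively black."""
    return all(math.isnan(p) or abs(p) < tol for p in frame)

frames = [
    [0.0, 0.0, 0.0004],   # effectively black
    [0.21, 0.53, 0.48],   # normal content
    [float("nan")] * 3,   # NaN-poisoned frame, also renders black
]
flags = [is_black_frame(f) for f in frames]
print(flags)  # [True, False, True]
```

Running this over the pixels of each frame in a batch makes it easy to tell whether only frames past a certain index (e.g. beyond 8–10 on the affected setups) went black.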