Error on a 4090 while using the fp16 safetensors: I get an OOM and/or black frames. Any ideas?
black frames:
got prompt
'🔥 - 4 Nodes not included in prompt but is activated'
LatentVisualDiffusion: Running in v-prediction mode
AE working on z of shape (1, 4, 32, 32) = 4096 dimensions.
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
making attention of type 'memory-efficient-cross-attn-fusion' with 512 in_channels
making attention of type 'memory-efficient-cross-attn-fusion' with 512 in_channels
Loaded ViT-H-14 model config.
Loading pretrained ViT-H-14 weights (laion2b_s32b_b79k).
Loaded ViT-H-14 model config.
Loading pretrained ViT-H-14 weights (laion2b_s32b_b79k).
D:\COMFYUI_BETA\ComfyUI\custom_nodes\ComfyUI-ToonCrafter\ToonCrafter\checkpoints\tooncrafter_512_interp_v1\model_512_interp-fp16.ckpt
model checkpoint loaded.
Global seed set to 129
start: a anime blinking 2024-06-02 13:29:38
!!! Exception during processing!!! Allocation on device
Traceback (most recent call last):
File "D:\COMFYUI_BETA\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\COMFYUI_BETA\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\COMFYUI_BETA\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\COMFYUI_BETA\ComfyUI\custom_nodes\ComfyUI-ToonCrafter__init__.py", line 176, in get_image
batch_samples = batch_ddim_sampling(model, cond, noise_shape, n_samples=1, ddim_steps=steps, ddim_eta=eta, cfg_scale=cfg_scale, hs=hs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\COMFYUI_BETA/ComfyUI/custom_nodes/ComfyUI-ToonCrafter\ToonCrafter\scripts\evaluation\funcs.py", line 79, in batch_ddim_sampling
batch_images = model.decode_first_stage(samples, **additional_decode_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\COMFYUI_BETA\python_embeded\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\COMFYUI_BETA/ComfyUI/custom_nodes/ComfyUI-ToonCrafter/ToonCrafter\lvdm\models\ddpm3d.py", line 683, in decode_first_stage
return self.decode_core(z, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\COMFYUI_BETA/ComfyUI/custom_nodes/ComfyUI-ToonCrafter/ToonCrafter\lvdm\models\ddpm3d.py", line 671, in decode_core
out = self.first_stage_model.decode(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\COMFYUI_BETA/ComfyUI/custom_nodes/ComfyUI-ToonCrafter/ToonCrafter\lvdm\models\autoencoder.py", line 119, in decode
dec = self.decoder(z, **kwargs) # change for SVD decoder by adding **kwargs
^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\COMFYUI_BETA\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\COMFYUI_BETA\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\COMFYUI_BETA/ComfyUI/custom_nodes/ComfyUI-ToonCrafter/ToonCrafter\lvdm\models\autoencoder_dualref.py", line 584, in forward
h = self.up[i_level].block[i_block](h, temb, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\COMFYUI_BETA\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\COMFYUI_BETA\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\COMFYUI_BETA/ComfyUI/custom_nodes/ComfyUI-ToonCrafter/ToonCrafter\lvdm\models\autoencoder_dualref.py", line 975, in forward
x = super().forward(x, temb)
^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\COMFYUI_BETA/ComfyUI/custom_nodes/ComfyUI-ToonCrafter/ToonCrafter\lvdm\models\autoencoder_dualref.py", line 74, in forward
h = nonlinearity(h)
^^^^^^^^^^^^^^^
File "D:\COMFYUI_BETA/ComfyUI/custom_nodes/ComfyUI-ToonCrafter/ToonCrafter\lvdm\models\autoencoder_dualref.py", line 25, in nonlinearity
return x * torch.sigmoid(x)
~~^~~~~~~~~~~~~~~~~~
torch.cuda.OutOfMemoryError: Allocation on device
Prompt executed in 24.62 seconds
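
The traceback points at the swish activation in the VAE decoder (autoencoder_dualref.py, line 25): return x * torch.sigmoid(x). The multiply allocates a second tensor the same size as its input, and that is the allocation that fails inside decode_first_stage. Below is a minimal sketch of a lower-memory drop-in I'm considering, assuming nothing downstream still needs the pre-activation tensor; this is my own guess at a workaround, not a fix from the ToonCrafter repo.

```python
# Hypothetical patch for ToonCrafter/lvdm/models/autoencoder_dualref.py -- a sketch,
# not the project's own fix. Assumes the caller only uses the returned tensor and
# never reads the pre-activation values of x again.
import torch
import torch.nn.functional as F

def nonlinearity(x: torch.Tensor) -> torch.Tensor:
    # Original: return x * torch.sigmoid(x)  (swish / SiLU). The multiply allocates
    # a new full-size output tensor on top of the sigmoid temporary.
    # F.silu(..., inplace=True) computes the same values but writes the result back
    # into x's storage, so the extra full-size output allocation goes away.
    return F.silu(x, inplace=True)
```

Since decode_first_stage already runs through a no-grad context (the torch\utils\_contextlib.py decorate_context frame in the traceback), an in-place activation shouldn't upset anything here, but I haven't verified it against this node.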