AIGODLIKE / ComfyUI-ToonCrafter

This project enables ToonCrafter to be used in ComfyUI.
Apache License 2.0

A 4090 still runs out of memory with the 512 fp16 weights. #13

Open LianTianNo1 opened 5 months ago

LianTianNo1 commented 5 months ago

Error occurred when executing ToonCrafterNode:

Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 22.16 GiB
Requested : 1.25 GiB
Device limit : 23.99 GiB
Free (according to CUDA): 0 bytes
PyTorch limit (set by user-supplied memory fraction) : 17179869184.00 GiB

File "E:\soft\ComfyUI-aki-v1.2\execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "E:\soft\ComfyUI-aki-v1.2\execution.py", line 81, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) File "E:\soft\ComfyUI-aki-v1.2\execution.py", line 74, in map_node_over_list results.append(getattr(obj, func)(slice_dict(input_data_all, i))) File "E:\soft\ComfyUI-aki-v1.2\custom_nodes\ComfyUI-ToonCrafter__init__.py", line 161, in get_image z, hs = self.get_latent_z_with_hidden_states(model, videos) File "E:\soft\ComfyUI-aki-v1.2\custom_nodes\ComfyUI-ToonCrafter__init__.py", line 218, in get_latent_z_with_hidden_states encoder_posterior, hidden_states = model.first_stage_model.encode(x, return_hidden_states=True) File "E:\soft/ComfyUI-aki-v1.2/custom_nodes/ComfyUI-ToonCrafter/ToonCrafter\lvdm\models\autoencoder.py", line 106, in encode h, hidden = self.encoder(x, return_hidden_states) File "E:\soft\ComfyUI-aki-v1.2\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, *kwargs) File "E:\soft\ComfyUI-aki-v1.2\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(args, kwargs) File "E:\soft/ComfyUI-aki-v1.2/custom_nodes/ComfyUI-ToonCrafter/ToonCrafter\lvdm\modules\networks\ae_modules.py", line 454, in forward h = self.down[i_level].block[i_block](hs[-1], temb) File "E:\soft\ComfyUI-aki-v1.2\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, *kwargs) File "E:\soft\ComfyUI-aki-v1.2\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(args, *kwargs) File "E:\soft/ComfyUI-aki-v1.2/custom_nodes/ComfyUI-ToonCrafter/ToonCrafter\lvdm\modules\networks\ae_modules.py", line 200, in forward h = nonlinearity(h) File "E:\soft/ComfyUI-aki-v1.2/custom_nodes/ComfyUI-ToonCrafter/ToonCrafter\lvdm\modules\networks\ae_modules.py", line 15, in nonlinearity return x torch.sigmoid(x)

donatienLef commented 5 months ago

Same here, any fix or idea?

Yorha4D commented 5 months ago

@LianTianNo1 @donatienLef There's been some discussion here about hardcoding half precision in ToonCrafter's code to make it fit in under 24 GB of VRAM.
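In case it is useful in the meantime, here is a rough sketch of that idea: casting the loaded model (or at least its first-stage VAE) to fp16 before running the node. `model` and `videos` are placeholders for the objects the node already handles, not its actual variable names:

```python
import torch

# Assumption: `model` is the loaded ToonCrafter model and `videos` the input
# frames. Casting weights to fp16 roughly halves the VRAM needed for the
# UNet/VAE weights and their activations.
model = model.half()
model.first_stage_model = model.first_stage_model.half()

# Inputs must match the weight dtype, or the matmuls will raise a dtype error.
videos = videos.to(device="cuda", dtype=torch.float16)
```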

KarryCharon commented 5 months ago

@donatienLef @LianTianNo1 We have implemented a low-VRAM version. Please try the latest branch and test it.
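For anyone wondering what a low-VRAM mode typically does: a common pattern is to keep the large submodules in system RAM and move each one onto the GPU only for its forward pass. This is a generic sketch of that pattern, not necessarily how this branch implements it:

```python
import torch

def run_on_gpu(module, fn, *args, **kwargs):
    """Move a submodule to the GPU for one call, then offload it again."""
    module.to("cuda")
    try:
        with torch.no_grad():
            return fn(*args, **kwargs)
    finally:
        module.to("cpu")          # push the weights back to system RAM
        torch.cuda.empty_cache()  # release the cached blocks it used
```

With offloading like this, the VAE encode, the diffusion UNet, and the decode each fit on the card separately, at the cost of extra host-to-device transfer time per step.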