kijai / ComfyUI-ELLA-wrapper

Simple wrapper to try out ELLA in ComfyUI using diffusers
Apache License 2.0

Update your code to use the model in this location #8

Closed kakachiex2 closed 6 months ago

kakachiex2 commented 6 months ago

I'm using the model from the link below with ExponentialML's node, and it works, but your node doesn't work with it. I get this error:

https://huggingface.co/Kijai/flan-t5-xl-encoder-only-bf16/tree/main

Error occurred when executing ella_t5_embeds:

Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory K:\ComfyUI\ComfyUI\models\t5_model\flan-t5-xl-encoder-only-bf16.

File "K:\ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "K:\ComfyUI\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "K:\ComfyUI\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "K:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-ELLA-wrapper\nodes.py", line 335, in process
    t5_encoder = T5TextEmbedder(pretrained_path=t5_path).to(device, dtype=dtype)
File "K:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-ELLA-wrapper\model.py", line 133, in __init__
    self.model = T5EncoderModel.from_pretrained(pretrained_path)
File "K:\ComfyUI\ComfyUI\venv\Lib\site-packages\transformers\modeling_utils.py", line 3144, in from_pretrained
    raise EnvironmentError(
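The error above means `from_pretrained` found none of the weight filenames it recognizes in that folder; recent transformers versions also accept `model.safetensors`. A minimal sanity check you could run before loading, assuming only that the folder path is the one from the traceback (the helper name is hypothetical):

```python
from pathlib import Path

# Weight filenames that transformers' from_pretrained looks for,
# plus the safetensors variant supported by recent versions.
WEIGHT_FILES = (
    "model.safetensors",
    "pytorch_model.bin",
    "tf_model.h5",
    "model.ckpt.index",
    "flax_model.msgpack",
)

def has_model_weights(model_dir: str) -> bool:
    """Return True if the directory contains a weights file that
    T5EncoderModel.from_pretrained can load."""
    p = Path(model_dir)
    return any((p / name).exists() for name in WEIGHT_FILES)
```

If this returns False for `K:\ComfyUI\ComfyUI\models\t5_model\flan-t5-xl-encoder-only-bf16`, the download is incomplete or the files landed in a nested subfolder.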

kakachiex2 commented 6 months ago

OK, now it works, but I get this new error:

Error occurred when executing ella_sampler:

Allocation on device 0 would exceed allowed memory. (out of memory) Currently allocated : 3.27 GiB Requested : 2.25 GiB Device limit : 6.00 GiB Free (according to CUDA): 0 bytes PyTorch limit (set by user-supplied memory fraction) : 17179869184.00 GiB

File "K:\ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "K:\ComfyUI\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "K:\ComfyUI\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "K:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-ELLA-wrapper\nodes.py", line 281, in process
    images = pipe(
File "K:\ComfyUI\ComfyUI\venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
File "K:\ComfyUI\ComfyUI\venv\Lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 1011, in __call__
    image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False, generator=generator)[
File "K:\ComfyUI\ComfyUI\venv\Lib\site-packages\diffusers\utils\accelerate_utils.py", line 46, in wrapper
    return method(self, *args, **kwargs)
File "K:\ComfyUI\ComfyUI\venv\Lib\site-packages\diffusers\models\autoencoders\autoencoder_kl.py", line 304, in decode
    decoded = self._decode(z).sample
File "K:\ComfyUI\ComfyUI\venv\Lib\site-packages\diffusers\models\autoencoders\autoencoder_kl.py", line 275, in _decode
    dec = self.decoder(z)
File "K:\ComfyUI\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "K:\ComfyUI\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
File "K:\ComfyUI\ComfyUI\venv\Lib\site-packages\diffusers\models\autoencoders\vae.py", line 338, in forward
    sample = up_block(sample, latent_embeds)
File "K:\ComfyUI\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "K:\ComfyUI\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
File "K:\ComfyUI\ComfyUI\venv\Lib\site-packages\diffusers\models\unets\unet_2d_blocks.py", line 2737, in forward
    hidden_states = resnet(hidden_states, temb=temb)
File "K:\ComfyUI\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "K:\ComfyUI\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
File "K:\ComfyUI\ComfyUI\venv\Lib\site-packages\diffusers\models\resnet.py", line 332, in forward
    hidden_states = self.norm1(hidden_states)
File "K:\ComfyUI\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "K:\ComfyUI\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
File "K:\ComfyUI\ComfyUI\venv\Lib\site-packages\torch\nn\modules\normalization.py", line 287, in forward
    return F.group_norm(
File "K:\ComfyUI\ComfyUI\venv\Lib\site-packages\torch\nn\functional.py", line 2561, in group_norm
    return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)

kijai commented 6 months ago

This repo uses the original diffusers code for sampling, which isn't as optimized as Comfy's sampling, so I don't know if it can be made to fit into 6GB. I've now added some offloading that may help; if not, I'll look into optimizing it later.
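For reference, diffusers exposes public memory-saving hooks on `StableDiffusionPipeline` that target exactly the VAE-decode OOM seen above. A minimal sketch (these are real pipeline methods, but whether the wrapper's offloading uses exactly these is an assumption; `enable_model_cpu_offload` additionally requires `accelerate` to be installed):

```python
def apply_low_vram_settings(pipe):
    """Apply diffusers' built-in memory savers to a StableDiffusionPipeline.

    These trade some speed for a much smaller peak-VRAM footprint,
    especially during the VAE decode that failed in the traceback above.
    """
    pipe.enable_model_cpu_offload()  # keep submodules on CPU until each is needed
    pipe.enable_vae_slicing()        # decode a batch one image at a time
    pipe.enable_vae_tiling()         # decode each latent in tiles
    return pipe
```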

Erwin11 commented 6 months ago

@kijai

It works on a GPU with 6GB VRAM.

Just download the models:

- put ella-sd1.5-tsc-t5xl.safetensors in "ComfyUI_windows_portable\ComfyUI\models\ella\ella-sd1.5-tsc-t5xl.safetensors"
- put the 9 files in "ComfyUI_windows_portable\ComfyUI\models\t5_model\flan-t5-xl-encoder-only-bf16\"
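The layout described above can be written down as a small helper, assuming only the standard ComfyUI folder structure (the root path and function name are illustrative):

```python
from pathlib import Path

# Assumed ComfyUI root; the portable build uses
# ComfyUI_windows_portable\ComfyUI instead.
COMFYUI_ROOT = Path("ComfyUI")

def model_paths(root: Path = COMFYUI_ROOT) -> dict:
    """Target locations from the comment above: the ELLA checkpoint file
    and the folder holding the T5 encoder files."""
    return {
        "ella": root / "models" / "ella" / "ella-sd1.5-tsc-t5xl.safetensors",
        "t5": root / "models" / "t5_model" / "flan-t5-xl-encoder-only-bf16",
    }
```

The encoder folder can be populated with `huggingface_hub.snapshot_download("Kijai/flan-t5-xl-encoder-only-bf16", local_dir=...)`, taking the repo id from the link earlier in this thread.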