comfyanonymous / ComfyUI_bitsandbytes_NF4

GNU Affero General Public License v3.0

how to support flux lora, no work. #18

Open xueqing0622 opened 1 month ago

xueqing0622 commented 1 month ago

How to support Flux LoRA? It doesn't work. Is there an easy way to use a normal Flux LoRA with an NF4 checkpoint? I get this error:

```
Error occurred when executing SamplerCustomAdvanced:

.to() does not accept copy argument

File "F:\ComfyUI\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "F:\ComfyUI\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "F:\ComfyUI\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(*slice_dict(input_data_all, i)))
File "F:\ComfyUI\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 612, in sample
    samples = guider.sample(noise.generate_noise(latent), latent_image, sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise.seed)
File "F:\ComfyUI\ComfyUI\comfy\samplers.py", line 706, in sample
    self.inner_model, self.conds, self.loaded_models = comfy.sampler_helpers.prepare_sampling(self.model_patcher, noise.shape, self.conds)
File "F:\ComfyUI\ComfyUI\comfy\sampler_helpers.py", line 66, in prepare_sampling
    comfy.model_management.load_models_gpu([model] + models, memory_required=memory_required, minimum_memory_required=minimum_memory_required)
File "F:\ComfyUI\ComfyUI\comfy\model_management.py", line 526, in load_models_gpu
    cur_loaded_model = loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights)
File "F:\ComfyUI\ComfyUI\comfy\model_management.py", line 325, in model_load
    raise e
File "F:\ComfyUI\ComfyUI\comfy\model_management.py", line 321, in model_load
    self.real_model = self.model.patch_model(device_to=patch_model_to, patch_weights=load_weights)
File "F:\ComfyUI\ComfyUI\comfy\model_patcher.py", line 349, in patch_model
    self.patch_weight_to_device(key, device_to)
File "F:\ComfyUI\ComfyUI\comfy\model_patcher.py", line 324, in patch_weight_to_device
    self.backup[key] = collections.namedtuple('Dimension', ['weight', 'inplace_update'])(weight.to(device=self.offload_device, copy=inplace_update), inplace_update)
File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI_bitsandbytes_NF4\__init__.py", line 53, in to
    device, dtype, non_blocking, convert_to_format = torch._C._nn._parse_to(*args, **kwargs)
```
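The last frame points at the cause: when a LoRA is patched in, ComfyUI's `model_patcher` backs up each weight with `weight.to(device=..., copy=inplace_update)`, but `torch._C._nn._parse_to` (the parser used by `nn.Module.to`, which only understands device/dtype/non_blocking/memory_format) rejects a `copy` argument. A minimal sketch of the failure and one possible workaround; the class names here are illustrative, not the node's actual code:

```python
import torch

class QuantParamSketch(torch.nn.Parameter):
    """Illustrative stand-in for a quantized parameter that overrides .to()."""

    def to(self, *args, **kwargs):
        # _parse_to raises on copy=... -- the error shown in the traceback.
        device, dtype, non_blocking, _fmt = torch._C._nn._parse_to(*args, **kwargs)
        return super().to(device=device, dtype=dtype, non_blocking=non_blocking)

class PatchedParamSketch(torch.nn.Parameter):
    """Same override, but strips the unsupported kwarg before parsing."""

    def to(self, *args, **kwargs):
        # ComfyUI passes copy=inplace_update; drop it here. A fuller fix
        # would honor copy=True by cloning the result.
        kwargs.pop("copy", None)
        device, dtype, non_blocking, _fmt = torch._C._nn._parse_to(*args, **kwargs)
        return super().to(device=device, dtype=dtype, non_blocking=non_blocking)

broken = QuantParamSketch(torch.zeros(4))
fixed = PatchedParamSketch(torch.zeros(4))

try:
    broken.to(device="cpu", copy=True)   # mimics model_patcher's backup call
except Exception as e:
    print(type(e).__name__, e)

print(fixed.to(device="cpu", copy=True).shape)
```

Note that simply ignoring `copy` changes semantics slightly (the backup may alias the live tensor when no conversion happens), which is why this is only a sketch of the direction a fix could take.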

Danamir commented 1 month ago

FYI, Forge got a commit yesterday that tests an experimental way to load LoRAs with NF4: https://github.com/lllyasviel/stable-diffusion-webui-forge/commit/cb889470ba33722a89c3f625f972a795504abdc6

Could this node be updated to do the same, perhaps by adding a new node that loads LoRAs in combination with CheckpointLoaderNF4?

[edit]: After some tests, the Forge method seems far from perfect: the LoRA has a much weaker effect on the NF4 model than on the FP8 version. For reference, the current discussion on Forge: https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/1001
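One plausible explanation for the weaker effect: merging a small LoRA delta directly into NF4 weights requires re-quantizing, and 4-bit rounding can swallow most of the delta, whereas applying the LoRA at forward time (QLoRA-style) keeps it in full precision. A pure-PyTorch sketch, with uniform rounding standing in for real NF4 (which is non-uniform and block-wise) and hypothetical rank-4 LoRA factors:

```python
import torch

torch.manual_seed(0)

def fake_nf4(w, levels=16):
    # Uniform symmetric rounding as a crude stand-in for NF4 quantization
    # (real NF4 uses non-uniform levels and block-wise absmax scaling).
    s = w.abs().max() / (levels // 2 - 1)
    return (w / s).round().clamp(-(levels // 2), levels // 2 - 1) * s

w = torch.randn(32, 32)          # base weight
A = 0.1 * torch.randn(4, 32)     # hypothetical rank-4 LoRA factors
B = 0.1 * torch.randn(32, 4)
delta = B @ A                    # the LoRA's weight update
x = torch.randn(32)

# Option 1: merge the delta into the weight, then re-quantize.
merged = fake_nf4(w + delta)
recovered = merged - fake_nf4(w)            # what survives of the delta
merge_loss = (recovered - delta).norm() / delta.norm()

# Option 2: keep the base quantized, apply the LoRA at forward time.
y_runtime = fake_nf4(w) @ x + B @ (A @ x)   # delta applied exactly

print(f"relative delta distortion after merge+requantize: {merge_loss:.2f}")
```

In this toy setup the merge-then-requantize path distorts the delta badly, while the forward-time path applies it exactly; whether this is what the Forge commit is hitting is speculation on my part.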