Closed: bananasss00 closed this issue 2 weeks ago
I came here to request the same. This looks like a good leap forward for users.
Aaaw, I came here for the same reason.
https://github.com/comfyanonymous/ComfyUI_bitsandbytes_NF4
Let me know if it works.
It works correctly, thanks
Error occurred when executing SamplerCustomAdvanced: 'ForgeParams4bit' object has no attribute 'quant_storage'
Thank you so much
> Error occurred when executing SamplerCustomAdvanced: 'ForgeParams4bit' object has no attribute 'quant_storage'
After `pip install -U bitsandbytes` it started working properly. Thank you very much indeed for your work.
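For anyone on the portable Windows build: the upgrade has to target ComfyUI's embedded interpreter rather than the system Python. Judging by the install paths in the tracebacks in this thread, that would be something like `python_embeded\python.exe -m pip install -U bitsandbytes` run from the ComfyUI_windows_portable folder (your path will differ).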
Works until you try to change the prompt; after that you get an out-of-memory error (8 GB VRAM).
It works, thanks! I couldn't get LoRA to work with it.
(removed this post, I created my own problem)
I'm getting this error even though I have bitsandbytes installed. Error occurred when executing SamplerCustomAdvanced:
'ForgeParams4bit' object has no attribute 'quant_storage'
```
  File "K:\Graphics\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "K:\Graphics\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "K:\Graphics\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "K:\Graphics\ComfyUI_windows_portable\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 612, in sample
    samples = guider.sample(noise.generate_noise(latent), latent_image, sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise.seed)
  File "K:\Graphics\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 706, in sample
    self.inner_model, self.conds, self.loaded_models = comfy.sampler_helpers.prepare_sampling(self.model_patcher, noise.shape, self.conds)
  File "K:\Graphics\ComfyUI_windows_portable\ComfyUI\comfy\sampler_helpers.py", line 66, in prepare_sampling
    comfy.model_management.load_models_gpu([model] + models, memory_required=memory_required, minimum_memory_required=minimum_memory_required)
  File "K:\Graphics\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 526, in load_models_gpu
    cur_loaded_model = loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights)
  File "K:\Graphics\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 323, in model_load
    self.model.unpatch_model(self.model.offload_device)
  File "K:\Graphics\ComfyUI_windows_portable\ComfyUI\comfy\model_patcher.py", line 614, in unpatch_model
    self.model.to(device_to)
  File "K:\Graphics\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1152, in to
    return self._apply(convert)
  File "K:\Graphics\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 802, in _apply
    module._apply(fn)
  File "K:\Graphics\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 802, in _apply
    module._apply(fn)
  File "K:\Graphics\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 825, in _apply
    param_applied = fn(param)
  File "K:\Graphics\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1150, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "K:\Graphics\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_bitsandbytes_NF4\__init__.py", line 64, in to
    quant_storage=self.quant_storage,
```
I had the same issue; reinstalling bitsandbytes fixed it for me.
That fixed it; it's working now. I had an old version of bitsandbytes installed, and the reinstall command for some reason also tried to upgrade torch to 2.4, which I don't want. I just deleted the bitsandbytes folder and pip-installed it again, and it works fine now.
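For context on why reinstalling helps: `ForgeParams4bit` extends bitsandbytes' `Params4bit`, and (as far as I can tell) `quant_storage` was only added to `Params4bit` in bitsandbytes 0.43, so parameters built against an older install never receive the attribute and the node's custom `to()` crashes when it forwards it. A minimal sketch of the failure mode; `OldStyleParam` is a hypothetical stand-in, not the node's real class:

```python
import torch

class OldStyleParam(torch.nn.Parameter):
    # Hypothetical stand-in for a 4-bit parameter created by an old
    # bitsandbytes release that never sets .quant_storage on itself.

    def to(self, *args, **kwargs):
        # This is the pattern that blows up: unconditionally reading an
        # attribute that older bitsandbytes versions never defined.
        storage = self.quant_storage  # AttributeError on bitsandbytes < 0.43
        return super().to(*args, **kwargs)

# A defensive variant would fall back to the 0.43 default instead:
def safe_quant_storage(p: torch.nn.Parameter) -> torch.dtype:
    return getattr(p, "quant_storage", torch.uint8)
```

Upgrading works simply because the newer `Params4bit` actually defines the attribute.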
> Works until you try to change the prompt; after that you get an out-of-memory error (8 GB VRAM).
That should be fixed now if you update the node.
Ahhhh, I think I know why it's broken for me: I'm on a Mac, and it seems the latest version of bitsandbytes there is 0.42; 0.43 isn't released for Mac :S
I have the same problem, and I'm on Apple Silicon...
LoRAs do not work with NF4. I tested with the Boring Realism Flux LoRA (400 steps): https://www.reddit.com/r/StableDiffusion/comments/1eq5400/lora_training_progress_on_improving_scene/
ComfyUI workflow: https://files.catbox.moe/bpcw81.png
LoRA files: https://huggingface.co/kudzueye/Boreal

The SamplerCustomAdvanced node in ComfyUI returns this error:
```
Requested to load FluxClipModel_
Loading 1 new model
Requested to load Flux
Loading 1 new model
!!! Exception during processing!!! .to() does not accept copy argument
Traceback (most recent call last):
  File "F:\ComfyUI\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "F:\ComfyUI\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "F:\ComfyUI\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "F:\ComfyUI\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 612, in sample
    samples = guider.sample(noise.generate_noise(latent), latent_image, sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise.seed)
  File "F:\ComfyUI\ComfyUI\comfy\samplers.py", line 706, in sample
    self.inner_model, self.conds, self.loaded_models = comfy.sampler_helpers.prepare_sampling(self.model_patcher, noise.shape, self.conds)
  File "F:\ComfyUI\ComfyUI\comfy\sampler_helpers.py", line 66, in prepare_sampling
    comfy.model_management.load_models_gpu([model] + models, memory_required=memory_required, minimum_memory_required=minimum_memory_required)
  File "F:\ComfyUI\ComfyUI\comfy\model_management.py", line 526, in load_models_gpu
    cur_loaded_model = loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights)
  File "F:\ComfyUI\ComfyUI\comfy\model_management.py", line 325, in model_load
    raise e
  File "F:\ComfyUI\ComfyUI\comfy\model_management.py", line 321, in model_load
    self.real_model = self.model.patch_model(device_to=patch_model_to, patch_weights=load_weights)
  File "F:\ComfyUI\ComfyUI\comfy\model_patcher.py", line 349, in patch_model
    self.patch_weight_to_device(key, device_to)
  File "F:\ComfyUI\ComfyUI\comfy\model_patcher.py", line 324, in patch_weight_to_device
    self.backup[key] = collections.namedtuple('Dimension', ['weight', 'inplace_update'])(weight.to(device=self.offload_device, copy=inplace_update), inplace_update)
  File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI_bitsandbytes_NF4\__init__.py", line 53, in to
    device, dtype, non_blocking, convert_to_format = torch._C._nn._parse_to(*args, **kwargs)
RuntimeError: .to() does not accept copy argument
```
The currently implemented NF4 support is a straight copy of the Forge functionality and does not support LoRA.
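For reference, the `copy` failure above is mechanical: ComfyUI's weight patcher backs weights up with `weight.to(device=..., copy=...)` (visible in the `patch_weight_to_device` frame), while the node's custom `to()` pushes every kwarg through `torch._C._nn._parse_to()`, which does not know `copy`. A rough sketch of the mismatch, assuming a simplified override; this is not the node's actual code:

```python
import torch

class ForgeStyleParam(torch.nn.Parameter):
    # Hypothetical minimal reproduction of the override pattern in the trace.
    def to(self, *args, **kwargs):
        # _parse_to() only understands device/dtype/non_blocking/memory_format,
        # so the copy=... kwarg coming from ComfyUI's backup path is rejected.
        device, dtype, non_blocking, _ = torch._C._nn._parse_to(*args, **kwargs)
        return super().to(device=device, dtype=dtype, non_blocking=non_blocking)

p = ForgeStyleParam(torch.zeros(1))
p.to(device="cpu", copy=True)  # RuntimeError: .to() does not accept copy argument
```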
> I have the same problem, and I'm on Apple Silicon...
Currently, it doesn't work on Macs with Apple M1/M2/M3 chips because bitsandbytes on macOS only goes up to version 0.42, which produces the following error:
```
UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
'NoneType' object has no attribute 'cadam32bit_grad_fp32'
```
This has been fixed in later versions of bitsandbytes, but those are only available for Windows and Linux with CUDA. I hope they find a solution for Apple Silicon soon.
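If you're not sure which version you actually ended up with, a quick check from the same Python environment ComfyUI uses (assuming the package imports at all on your platform):

```python
import bitsandbytes as bnb

# macOS wheels currently stop at 0.42.x; the quant_storage attribute the
# node needs only exists from 0.43 onward.
print(bnb.__version__)
```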
Works fine for me: MoonRide FLUX.1-dev-bnb workflow v1.json
On Apple Silicon?
Windows, RTX 4080: about 22 seconds per image. Peak VRAM usage is ~14 GB (can be reduced to ~12 GB using the tiled VAE decode node).
Ahhhh, yeah, figures!! Still waiting for Apple Silicon updates 🥲🥲
Hardware: Intel Core i7-6700, 32 GB RAM, GTX 1660 Ti 6 GB

I keep getting OOM errors, specifically:
```
!!! Exception during processing!!! Allocation on device
Traceback (most recent call last):
  File "/home/td/Coding/ComfyUI/execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/home/td/Coding/ComfyUI/execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/home/td/Coding/ComfyUI/execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "/home/td/Coding/ComfyUI/comfy_extras/nodes_custom_sampler.py", line 612, in sample
    samples = guider.sample(noise.generate_noise(latent), latent_image, sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise.seed)
  File "/home/td/Coding/ComfyUI/comfy/samplers.py", line 706, in sample
    self.inner_model, self.conds, self.loaded_models = comfy.sampler_helpers.prepare_sampling(self.model_patcher, noise.shape, self.conds)
  File "/home/td/Coding/ComfyUI/comfy/sampler_helpers.py", line 66, in prepare_sampling
    comfy.model_management.load_models_gpu([model] + models, memory_required=memory_required, minimum_memory_required=minimum_memory_required)
  File "/home/td/Coding/ComfyUI/comfy/model_management.py", line 527, in load_models_gpu
    cur_loaded_model = loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights)
  File "/home/td/Coding/ComfyUI/comfy/model_management.py", line 325, in model_load
    raise e
  File "/home/td/Coding/ComfyUI/comfy/model_management.py", line 319, in model_load
    self.real_model = self.model.patch_model_lowvram(device_to=patch_model_to, lowvram_model_memory=lowvram_model_memory, force_patch_weights=force_patch_weights)
  File "/home/td/Coding/ComfyUI/comfy/model_patcher.py", line 426, in patch_model_lowvram
    self.lowvram_load(device_to, lowvram_model_memory=lowvram_model_memory, force_patch_weights=force_patch_weights)
  File "/home/td/Coding/ComfyUI/comfy/model_patcher.py", line 410, in lowvram_load
    m.to(device_to)
  File "/home/td/Coding/ComfyUI/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1174, in to
    return self._apply(convert)
  File "/home/td/Coding/ComfyUI/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 805, in _apply
    param_applied = fn(param)
  File "/home/td/Coding/ComfyUI/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1160, in convert
    return t.to(
  File "/home/td/Coding/ComfyUI/custom_nodes/ComfyUI_bitsandbytes_NF4/__init__.py", line 58, in to
    torch.nn.Parameter.to(self, device=device, dtype=dtype, non_blocking=non_blocking),
torch.OutOfMemoryError: Allocation on device
```
Does this custom node somehow not work with the smart memory system that splits models between VRAM and RAM?
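Judging from the frames in that trace, the low-VRAM path does split the model, but it still moves each selected module with a plain `.to()` call (`patch_model_lowvram` -> `lowvram_load` -> `m.to(device_to)`), and every parameter move then runs through the node's custom `to()`. A simplified sketch of that call pattern, with names taken from the traceback; this is not ComfyUI's actual implementation:

```python
import torch

def lowvram_load(gpu_modules: list[torch.nn.Module], device_to: torch.device) -> None:
    # Simplified: walk the modules chosen to live on the GPU and move each
    # one whole; with the NF4 node the 4-bit parameters are moved here too,
    # and this allocation is where the OOM above is raised.
    for m in gpu_modules:
        m.to(device_to)
```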
There is also GGUF support now, but it also does not support LoRA or ControlNet:
https://github.com/city96/ComfyUI-GGUF
https://www.reddit.com/r/StableDiffusion/comments/1eslcg0/excuse_me_gguf_quants_are_possible_on_flux_now/
Forge now supports LoRA for NF4.
NF4 seems broken now; it's not possible to make it work no matter what.
> LoRAs do not work with NF4. [...]

> The currently implemented NF4 support is a straight copy of the Forge functionality and does not support LoRA.
Hello, any update on this, please? Is there any possibility of making LoRA work with it?
Use these custom nodes for NF4: https://github.com/silveroxides/ComfyUI_bnb_nf4_fp4_Loaders
The original one is deprecated.
LoRAs only work with GGUF, though.
Feature Idea
Reference: https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/981
Existing Solutions
No response
Other
No response