I downloaded all of the models (21+ GB), and when I ran the nodes I got the error below. Any idea how to solve it? Is there a way to use a GGUF Llama 3 instead of the 17 GB model?
Loading CLIP
Warning torch.load doesn't support weights_only on this pytorch version, loading unsafely.
Fetching 12 files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 12/12 [00:00<00:00, 756.06it/s]
Loading tokenizer and text model
We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set max_memory in to a higher value to use more memory (at your own risk).
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:01<00:00, 2.03it/s]
Some parameters are on the meta device because they were offloaded to the cpu.
We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set max_memory in to a higher value to use more memory (at your own risk).
You shouldn't move a model that is dispatched using accelerate hooks.
!!! Exception during processing !!! You can't move a model that has some modules offloaded to cpu or disk.
Traceback (most recent call last):
File "L:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 289, in execute
obj = class_def()
^^^^^^^^^^^
File "L:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\Comfyui_joy-caption-alpha-two\joy_captioner_alpha_two.py", line 189, in init
self.text_model.load_adapter(os.path.join(CHECKPOINT_PATH, "text_model"))
File "L:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\integrations\peft.py", line 230, in load_adapter
self._dispatch_accelerate_model(
File "L:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\integrations\peft.py", line 477, in _dispatch_accelerate_model
dispatch_model(
File "L:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\accelerate\big_modeling.py", line 494, in dispatch_model
model.to(device)
File "L:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\accelerate\big_modeling.py", line 456, in wrapper
raise RuntimeError("You can't move a model that has some modules offloaded to cpu or disk.")
RuntimeError: You can't move a model that has some modules offloaded to cpu or disk.
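For context, here is a minimal sketch of what I assume is happening when the text model gets loaded, based only on the log above (this is not the node's actual code; the path and the memory numbers are placeholders). The log says max_memory can be raised, and passing a larger per-device budget to from_pretrained should keep the whole model on GPU 0 so no modules end up offloaded to cpu/disk, which is what the later load_adapter() -> dispatch_model() -> model.to(device) call trips over:

# Assumed sketch, not the node's real loader. Illustrates giving device 0 enough
# headroom via max_memory so device_map="auto" doesn't offload any modules to CPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "path/to/llama-3-text-model"  # placeholder path, not the real checkpoint location

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
text_model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    # Illustrative limits only; adjust to your actual VRAM/RAM.
    max_memory={0: "22GiB", "cpu": "48GiB"},
)

# This mirrors the call that fails in joy_captioner_alpha_two.py; with nothing
# offloaded it shouldn't hit the "can't move a model" RuntimeError.
# text_model.load_adapter("path/to/text_model_adapter")

If I'm reading the traceback right, the adapter loading fails only because some layers were already pushed to CPU by the 90%/10% split, so avoiding the offload (or fitting a smaller/quantized model) seems to be the thing to fix, which is also why I'm asking about a GGUF Llama 3.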