pzc163 / Comfyui-HunyuanDiT

A ComfyUI node for running the HunyuanDiT model


RTX 4060 8 GB out of VRAM — even with ComfyUI's optimizations, can this large Hunyuan model really not run in 8 GB? #3

Open lixida123 opened 3 months ago

lixida123 commented 3 months ago

RTX 4060 with 8 GB runs out of VRAM. Even with ComfyUI's optimizations, can this large Hunyuan model really not run in 8 GB of VRAM?

Error occurred when executing DiffusersCLIPLoader:

Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 7.22 GiB
Requested : 183.67 MiB
Device limit : 8.00 GiB
Free (according to CUDA) : 0 bytes
PyTorch limit (set by user-supplied memory fraction) : 17179869184.00 GiB

File "D:\comfyui\ComfyUI-aki-v1.3\execution.py", line 152, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "D:\comfyui\ComfyUI-aki-v1.3\execution.py", line 82, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) File "D:\comfyui\ComfyUI-aki-v1.3\execution.py", line 75, in map_node_over_list results.append(getattr(obj, func)(*slice_dict(input_data_all, i))) File "D:\comfyui\ComfyUI-aki-v1.3\custom_nodes\Comfyui-HunyuanDiT\nodes.py", line 163, in load_clip out = CLIP(False, root, CLIP_PATH, t5_file) File "D:\comfyui\ComfyUI-aki-v1.3\custom_nodes\Comfyui-HunyuanDiT\clip.py", line 41, in init clip_text_encoder.eval().to(self.device) File "D:\comfyui\ComfyUI-aki-v1.3\python\lib\site-packages\transformers\modeling_utils.py", line 2556, in to return super().to(args, **kwargs) File "D:\comfyui\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 1145, in to return self._apply(convert) File "D:\comfyui\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply module._apply(fn) File "D:\comfyui\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply module._apply(fn) File "D:\comfyui\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply param_applied = fn(param) File "D:\comfyui\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)

pzc163 commented 3 months ago

With 8 GB of VRAM you can only use the quantized Hunyuan model and the quantized mT5 model.
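For context on that suggestion: a common way to fit a large mT5 text encoder on an 8 GB card is 8-bit quantization with bitsandbytes through `transformers`. The sketch below is an assumption about how that could look, not code from this repository; the checkpoint name is a placeholder and the extra packages (`bitsandbytes`, `accelerate`) are required.

```python
# Illustrative sketch: loading an mT5 encoder in 8-bit so it fits next to the DiT.
# The model name is a placeholder; use whatever checkpoint the node expects.
from transformers import T5EncoderModel, BitsAndBytesConfig

quant_cfg = BitsAndBytesConfig(load_in_8bit=True)   # int8 weights, roughly half of fp16 memory

mt5_encoder = T5EncoderModel.from_pretrained(
    "google/mt5-xl",                 # placeholder checkpoint name
    quantization_config=quant_cfg,
    device_map="auto",               # let accelerate spill layers to CPU if VRAM runs out
)
mt5_encoder.eval()
```

The HunyuanDiT model itself would need a similarly quantized checkpoint, which is outside the scope of this sketch.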