jokero3answer opened 2 months ago
I downloaded the model files manually — are they still incomplete? The automatic download also seems to have problems. @zhongpei, please help!
Error occurred when executing LoadImage2TextModel:

```
You can't pass `load_in_4bit` or `load_in_8bit` as a kwarg when passing `quantization_config` argument at the same time.

File "D:\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\ComfyUI\custom_nodes\Comfyui_image2prompt\src\image2text.py", line 54, in get_model
    return (Llama3vModel(device=device, low_memory=low_memory),)
File "D:\ComfyUI\custom_nodes\Comfyui_image2prompt\src\llama3_model.py", line 67, in __init__
    self.model = AutoModelForCausalLM.from_pretrained(
File "D:\ComfyUI\venv\lib\site-packages\transformers\models\auto\auto_factory.py", line 556, in from_pretrained
    return model_class.from_pretrained(
File "D:\ComfyUI\venv\lib\site-packages\transformers\modeling_utils.py", line 2952, in from_pretrained
    raise ValueError(
```
The model loads fine now, but I'm getting this error instead.
Same here — this error seems to come from an incompatible version of transformers.
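For context, the check that raises this error lives in transformers' `from_pretrained`: it rejects the standalone `load_in_4bit`/`load_in_8bit` kwargs whenever a `quantization_config` object is also supplied. Below is a minimal sketch of that validation logic — `validate_quant_kwargs` is a hypothetical stand-in, not the actual transformers source — showing both the failing pattern and the usual fix (put the 4/8-bit flag inside `BitsAndBytesConfig` and drop the standalone kwarg):

```python
def validate_quant_kwargs(quantization_config=None,
                          load_in_4bit=False, load_in_8bit=False):
    """Hypothetical stand-in mimicking the check in transformers'
    modeling_utils.from_pretrained (not the real source)."""
    if quantization_config is not None and (load_in_4bit or load_in_8bit):
        raise ValueError(
            "You can't pass `load_in_4bit` or `load_in_8bit` as a kwarg "
            "when passing `quantization_config` argument at the same time."
        )
    return "ok"

# The failing pattern (presumably what llama3_model.py does when
# low_memory is on): both a config object and the standalone kwarg.
try:
    validate_quant_kwargs(quantization_config=object(), load_in_4bit=True)
except ValueError as e:
    print("rejected:", e)

# The fix: express quantization only through the config object, e.g.
#   quantization_config = BitsAndBytesConfig(load_in_4bit=True)
# and do not pass load_in_4bit= to from_pretrained at all.
print(validate_quant_kwargs(quantization_config=object()))  # prints "ok"
```

In other words, the node's loading code (or the pinned transformers version it was written against) passes both arguments, and newer transformers releases treat that as an error rather than a warning.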
I clicked execute and it never responded.
It was the `low_memory` option, as it turns out.
![image](https://github.com/zhongpei/Comfyui_image2prompt/assets/130854108/c6a6c2ee-4e44-4603-8c6f-2eef4fedecb7)
Diagnostics-1714320959.log