zhongpei / Comfyui_image2prompt

image to prompt by vikhyatk/moondream1
GNU General Public License v3.0

The model internlm-xcomposer2-vl-7b stopped working #12

Closed BenderBlender closed 7 months ago

BenderBlender commented 8 months ago

[screenshot of the error]

Error occurred when executing Image2Text:

function takes at most 14 arguments (17 given)

File "Q:_ComfyUI_windows_portable_3\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "Q:_ComfyUI_windows_portable_3\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "Q:_ComfyUI_windows_portable_3\ComfyUI\custom_nodes\ComfyUI-0246\utils.py", line 372, in new_func
    res_value = old_func(*final_args, **kwargs)
File "Q:_ComfyUI_windows_portable_3\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "Q:_ComfyUI_windows_portable_3\ComfyUI\custom_nodes\Comfyui_image2prompt\image2text.py", line 67, in get_value
    return (model.answer_question(image, query),)
File "Q:_ComfyUI_windows_portable_3\ComfyUI\custom_nodes\Comfyui_image2prompt\internlm_model.py", line 71, in answer_question
    image.save(image_path)
File "Q:_ComfyUI_windows_portable_3\python_embeded\Lib\site-packages\PIL\Image.py", line 2432, in save
    # Open also for reading ("+"), because TIFF save_all
File "Q:_ComfyUI_windows_portable_3\python_embeded\Lib\site-packages\PIL\JpegImagePlugin.py", line 824, in _save
    ImageFile._save(im, fp, [("jpeg", (0, 0) + im.size, 0, rawmode)], bufsize)
File "Q:_ComfyUI_windows_portable_3\python_embeded\Lib\site-packages\PIL\ImageFile.py", line 517, in _save
    def _save(im, fp, tile, bufsize=0):
File "Q:_ComfyUI_windows_portable_3\python_embeded\Lib\site-packages\PIL\ImageFile.py", line 528, in _encode_tile
    im.encoderconfig = ()
File "Q:_ComfyUI_windows_portable_3\python_embeded\Lib\site-packages\PIL\Image.py", line 437, in _getencoder
    return _E(self.scale + other.scale, self.offset + other.offset

BenderBlender commented 8 months ago

Previously it worked fine (in CUDA mode).

BenderBlender commented 8 months ago

I restarted ComfyUI and it seems to work again (in CUDA mode). Apparently there is some kind of conflict between different nodes. But CPU mode still does not work.

[screenshots]

ThisModernDay commented 7 months ago

@zhongpei Unfortunately I don't have enough time right now to open a PR, but this issue and #4

> But the CPU mode does not work

can be resolved by changing line 57 in internlm_model.py

from:

model = model.cpu().float().eval()

to:

model = AutoModelForCausalLM.from_pretrained(
                model_path, 
                torch_dtype=dtype, 
                trust_remote_code=True,
                device_map="cpu"
            ).cpu().float().eval()
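For context, the idea behind this change can be sketched in isolation: half-precision (float16) weights are fine on CUDA, but most CPUs cannot run float16 inference, so the model has to be (re)loaded in float32 with a CPU device map. The helper below is a hypothetical illustration of that split (`from_pretrained_kwargs` is not part of this repo); it builds the keyword arguments one would pass to `AutoModelForCausalLM.from_pretrained` for each device.

```python
# Hypothetical sketch of the CPU/CUDA loading split behind the fix above.
def from_pretrained_kwargs(device: str) -> dict:
    """Return model-loading kwargs that avoid half precision on CPU."""
    if device == "cpu":
        # CPU path: force float32, mirroring the .float() in the fix,
        # since float16 ops are unsupported or very slow on most CPUs.
        return {
            "torch_dtype": "float32",
            "device_map": "cpu",
            "trust_remote_code": True,
        }
    # CUDA path: half precision keeps VRAM use manageable.
    return {
        "torch_dtype": "float16",
        "device_map": device,
        "trust_remote_code": True,
    }


if __name__ == "__main__":
    print(from_pretrained_kwargs("cpu")["torch_dtype"])
```

The key point of the fix is that simply calling `.cpu().float()` on a model that was loaded with a half-precision dtype is not always enough; passing the dtype and device map at load time avoids the bad state entirely.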

zhongpei commented 7 months ago

Fixed now.