Closed scaruslooner closed 1 month ago
I still need to figure out how to make this integrate better with the ComfyUI memory management so it can unload as needed, but I'm not sure that's the issue here... Did you mean to send a batch of 217 images?
Yeah, I tried to load a video
Well, just tried a single image and it works. My bad, I thought this took video
It's capable of taking multiple images in the input, but that would use a lot of tokens, which explains why it ran out of memory. Makes me wonder if it could work on a tiny gif or something. But I don't think it was trained on video data anyway
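If you do want to feed frames from a video in, one workaround would be to pick a handful of evenly spaced frames before the node, since a ComfyUI IMAGE batch is just a `[frames, height, width, channels]` tensor you can index into. A minimal sketch of the frame-picking logic (the `max_frames` knob is hypothetical, not an option the node exposes):

```python
def evenly_spaced_indices(n_frames: int, max_frames: int) -> list[int]:
    """Indices of at most max_frames evenly spaced frames out of n_frames."""
    if n_frames <= max_frames:
        return list(range(n_frames))
    # Spread max_frames picks across [0, n_frames - 1], endpoints included.
    return [round(i * (n_frames - 1) / (max_frames - 1)) for i in range(max_frames)]

# For the 217-frame batch from the log, keeping 4 frames:
indices = evenly_spaced_indices(217, 4)
print(indices)  # [0, 72, 144, 216]
```

Applying `images[indices]` to the batch tensor would then hand the model 4 frames instead of 217, which keeps the token count (and the attention mask) manageable.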
Gotcha, I thought it was like the Qwen2-VL model or MiniCPM-V, which can take video input
I keep getting "Allocation on device". I have tried removing my Comfy arguments '--fast --normalram' and that didn't work. I'm on a 4060 Ti 16 GB with 32 GB of RAM.
```
Loading Pixtral model: pixtral-12b-nf4
Batch of torch.Size([217, 852, 480, 3]) images
Prompt tokens: 8
!!! Exception during processing !!! Allocation on device
Traceback (most recent call last):
  File "Z:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "Z:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "Z:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "Z:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "Z:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-PixtralLlamaVision\nodes.py", line 139, in generate_text
    generate_ids = pixtral_model['model'].generate(
  File "Z:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "Z:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\generation\utils.py", line 2048, in generate
    result = self._sample(
  File "Z:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\generation\utils.py", line 3008, in _sample
    outputs = self(**model_inputs, return_dict=True)
  File "Z:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "Z:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "Z:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\accelerate\hooks.py", line 169, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "Z:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\models\llava\modeling_llava.py", line 453, in forward
    image_outputs = self.vision_tower(pixel_values, output_hidden_states=True)
  File "Z:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "Z:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "Z:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\accelerate\hooks.py", line 169, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "Z:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\models\pixtral\modeling_pixtral.py", line 504, in forward
    attention_mask = generate_block_attention_mask(
  File "Z:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\models\pixtral\modeling_pixtral.py", line 444, in generate_block_attention_mask
    causal_mask = torch.full((seq_len, seq_len), fill_value=d_min, dtype=dtype, device=device)
torch.OutOfMemoryError: Allocation on device
```
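The failing line allocates a dense `(seq_len, seq_len)` attention mask, and `seq_len` here grows with the total number of image tokens across the whole batch. A rough back-of-the-envelope estimate for why 217 frames can't possibly fit (assuming Pixtral's 16×16 patches and a 2-byte fp16 mask entry; both are assumptions, not values read from the code):

```python
# Rough estimate of the block attention mask size for the failing batch.
frames, height, width = 217, 852, 480      # from torch.Size([217, 852, 480, 3])
patch = 16                                 # assumed Pixtral patch size

tokens_per_frame = (height // patch) * (width // patch)  # 53 * 30 = 1590
seq_len = frames * tokens_per_frame                      # ~345k image tokens
mask_bytes = seq_len ** 2 * 2                            # (seq_len, seq_len) at fp16

print(f"seq_len = {seq_len:,}, mask ~ {mask_bytes / 2**30:.0f} GiB")
# seq_len = 345,030, mask ~ 222 GiB
```

Roughly 222 GiB for the mask alone, versus 16 GB of VRAM, which is consistent with a single image (seq_len around 1,590, mask in the megabytes) working fine.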