
ComfyUI_MiniCPM-V-2_6-int4

This is a ComfyUI implementation of MiniCPM-V-2_6-int4, with support for text-based, video, single-image, and multi-image queries to generate captions or responses.
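For orientation, the upstream MiniCPM-V-2_6 model card documents a chat interface along the following lines. This is only a sketch of how a single-image or multi-image query is typically sent to the underlying model; the node in this repository wraps this kind of call, and its exact code may differ.

from PIL import Image
from transformers import AutoModel, AutoTokenizer

# Load the int4 model and tokenizer; trust_remote_code is required for MiniCPM-V.
model = AutoModel.from_pretrained("openbmb/MiniCPM-V-2_6-int4", trust_remote_code=True).eval()
tokenizer = AutoTokenizer.from_pretrained("openbmb/MiniCPM-V-2_6-int4", trust_remote_code=True)

# A multi-image query: the images and the question share one user message.
images = [Image.open("a.jpg").convert("RGB"), Image.open("b.jpg").convert("RGB")]
msgs = [{"role": "user", "content": images + ["Describe the differences between these images."]}]

answer = model.chat(image=None, msgs=msgs, tokenizer=tokenizer)
print(answer)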


Recent Updates

Added a parameter that controls whether the model stays loaded in GPU memory between runs. By default it is set to False, which means the model is unloaded from GPU memory after each prediction.

When set to True, the model remains loaded in GPU memory. This is particularly useful when making multiple predictions with the same model, since it avoids reloading the model between uses.
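A rough sketch of the behaviour described above (the parameter name keep_model_loaded, the module-level cache, and the loading call are assumptions for illustration, not necessarily this node's actual code):

import torch
from transformers import AutoModel

_cached_model = None  # survives between node executions while the process is alive

def generate(prompt, keep_model_loaded=False):
    global _cached_model
    if _cached_model is None:
        # Load the int4 model only if it is not already resident in memory.
        _cached_model = AutoModel.from_pretrained(
            "openbmb/MiniCPM-V-2_6-int4", trust_remote_code=True
        ).eval()
    model = _cached_model
    result = ...  # run the actual MiniCPM-V generation with `model` and `prompt` here
    if not keep_model_loaded:
        # Default (False): drop the model and reclaim GPU memory after this prediction.
        _cached_model = None
        torch.cuda.empty_cache()
    return result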

Added a seed parameter that sets a random seed to make results reproducible.
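In practice, such a seed parameter usually feeds the relevant random number generators before generation. A minimal sketch (the helper name set_seed is hypothetical):

import random
import numpy as np
import torch

def set_seed(seed):
    # Seed the Python, NumPy, and PyTorch RNGs so repeated runs produce the same output.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)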


Basic Workflow

Chat_with_text_workflow (legacy and polished previews)
Chat_with_video_workflow (legacy and polished previews)
Chat_with_single_image_workflow (legacy and polished previews)
Chat_with_multiple_images_workflow (legacy and polished previews)

Installation

pip install -r requirements.txt

Download Models

All models are downloaded automatically when a workflow runs if they are not found in the ComfyUI\models\prompt_generator\ directory.
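Automatic downloads of this kind are commonly implemented with huggingface_hub. The following is only a sketch of the idea; the exact repository id, subdirectory name, and download logic used by this node may differ.

from pathlib import Path
from huggingface_hub import snapshot_download

# Assumed local target, following the directory mentioned above.
model_dir = Path("ComfyUI/models/prompt_generator/MiniCPM-V-2_6-int4")

if not model_dir.exists():
    # Fetch the weights from the Hugging Face Hub the first time a workflow runs.
    snapshot_download(repo_id="openbmb/MiniCPM-V-2_6-int4", local_dir=str(model_dir))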