ceruleandeep / ComfyUI-LLaVA-Captioner

A ComfyUI extension for chatting with your images with LLaVA. Runs locally, no external services, no filter.
GNU General Public License v3.0

Copied models to ComfyUI\models\llama but they are not found #3

Open patefonas opened 7 months ago

patefonas commented 7 months ago

Copied models to the indicated folder `models\llama`, but when ComfyUI loads, the models are not found in the nodes. Error message:

```
Prompt outputs failed validation:
LlavaCaptioner:
- Required input is missing: model
- Required input is missing: mm_proj
LlavaCaptioner:
- Required input is missing: model
- Required input is missing: mm_proj
LlavaCaptioner:
- Required input is missing: model
- Required input is missing: mm_proj
```
Gerkinfeltser commented 6 months ago

Yep, I'm seeing this as well on Windows 10 with a standalone (non-portable) ComfyUI. My models are on a different drive, but even with a `models\llama` folder in both potential locations the files don't show up.

Sickelmo83 commented 6 months ago

Try moving the files to `custom_nodes\ComfyUI-LLaVA-Captioner\models`. It seems the script looks in this folder instead of the documented `custom_nodes\ComfyUI-LLaVA-Captioner\models\llama`.
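A quick way to check which candidate folder actually contains your model files is a small diagnostic script. This is a minimal sketch, not part of the extension: the `COMFY_ROOT` path and the `find_models` helper are hypothetical, and the three candidate paths are simply the locations discussed in this thread.

```python
import os

def find_models(comfy_root, exts=(".gguf", ".ggml", ".bin")):
    """Return {folder: [model files]} for each candidate location that exists."""
    candidates = [
        os.path.join(comfy_root, "models", "llama"),
        os.path.join(comfy_root, "custom_nodes", "ComfyUI-LLaVA-Captioner", "models"),
        os.path.join(comfy_root, "custom_nodes", "ComfyUI-LLaVA-Captioner", "models", "llama"),
    ]
    found = {}
    for folder in candidates:
        if os.path.isdir(folder):
            # Collect files matching the usual llama.cpp model extensions
            found[folder] = [f for f in os.listdir(folder) if f.lower().endswith(exts)]
    return found

if __name__ == "__main__":
    COMFY_ROOT = r"C:\ComfyUI"  # adjust to your install
    for folder, files in find_models(COMFY_ROOT).items():
        print(folder, "->", files or "no model files")
```

If the script shows your `.gguf` files only under `models\llama` while the nodes stay empty, moving (or symlinking) them to the folder the extension actually scans should make them appear in the dropdowns.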

ruoxuer commented 2 months ago

Put the models in: `custom_nodes\ComfyUI-LLaVA-Captioner\models\llama`