ceruleandeep/ComfyUI-LLaVA-Captioner
A ComfyUI extension for chatting with your images with LLaVA. Runs locally, no external services, no filter.
GNU General Public License v3.0 · 82 stars · 10 forks
Issues
#16 fix installation on macos (martenbiehl, closed 1 day ago, 1 comment)
#15 Add Github Action for Publishing to Comfy Registry (haohaocreates, opened 2 months ago, 0 comments)
#14 Add pyproject.toml for Custom Node Registry (haohaocreates, opened 2 months ago, 2 comments)
#13 Feature request: Output list of strings (Battleshack, opened 2 months ago, 1 comment)
#12 Llava Next (jjohare, opened 4 months ago, 0 comments)
#11 [Bug] LLaVA Captioner appears to leak VRAM (curiousjp, opened 4 months ago, 5 comments)
#10 Feature Request (filliptm, opened 5 months ago, 0 comments)
#9 bugfix: failed to install on MacBook with M2 CPU (ytfei, opened 5 months ago, 0 comments)
#8 Fix ./models/llama path (julien-blanchon, opened 5 months ago, 0 comments)
#7 error, failed to create llama_context (altruios, opened 5 months ago, 0 comments)
#6 Error when using load img list (suede299, opened 5 months ago, 1 comment)
#5 Error occurred when executing LlavaCaptioner: (philip-shen, closed 5 months ago, 1 comment)
#4 failing to install llama (alenknight, opened 5 months ago, 4 comments)
#3 Copied models to ComfyUI\models\llama but they are not found (patefonas, opened 5 months ago, 3 comments)
#2 model reading question (soldivelot, closed 5 months ago, 0 comments)
#1 No module named 'llama_cpp' (theonetwoone, opened 7 months ago, 8 comments)