withcatai / node-llama-cpp

Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level
https://node-llama-cpp.withcat.ai
MIT License
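For context, the schema enforcement mentioned in the description above looks roughly like this. This is a sketch based on the library's documented JSON-schema grammar support; exact class and function names differ between library versions, and the schema and prompt here are illustrative:

```typescript
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "models/model.gguf"});
const context = await model.createContext();
const session = new LlamaChatSession({contextSequence: context.getSequence()});

// The grammar constrains token sampling during generation,
// so the output always conforms to the schema
const grammar = await llama.createGrammarForJsonSchema({
    type: "object",
    properties: {
        answer: {type: "string"},
        confidence: {type: "number"}
    }
});

const response = await session.prompt("Is the sky blue?", {grammar});
const parsed = grammar.parse(response); // typed object matching the schema
console.log(parsed.answer, parsed.confidence);
```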

feat: pass an image as part of the evaluation #88

Open · giladgd opened this issue 10 months ago

giladgd commented 10 months ago

This will be implemented once llama.cpp's support for it is stable.

samlhuillier commented 9 months ago

Interested in this kind of multimodal support. Any update on progress?

fozziethebeat commented 9 months ago

Does this encompass adding support for running LLaVA models, or should that be a separate feature request? I noticed that llama-cpp-python already includes LLaVA support from llama.cpp, so setting up the bindings shouldn't be too hard.

giladgd commented 9 months ago

I haven't started working on this yet, but it is planned as part of the roadmap. The plan is to add support for llama.cpp's ability to pass an image to a model, which right now only supports LLaVA.

I'll start working on this once llama.cpp's API for it is finalized, to avoid frequent breaking API changes (unlike what happens in some other libraries).
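To make the request concrete, here is a purely hypothetical sketch of what such an API could look like. None of these multimodal options exist in node-llama-cpp yet; the names marked below are illustrative only, and the final design will depend on what llama.cpp's API ends up exposing:

```typescript
// Hypothetical sketch only; node-llama-cpp does not expose any of this yet.
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({
    modelPath: "models/llava-v1.5-7b.gguf",
    // hypothetical option: llama.cpp's LLaVA support also requires
    // a separate CLIP projector model alongside the main model
    multimodalProjectorPath: "models/llava-mmproj.gguf"
});
const context = await model.createContext();
const session = new LlamaChatSession({contextSequence: context.getSequence()});

// hypothetical option: attach images to a prompt for evaluation
const answer = await session.prompt("What is in this picture?", {
    images: ["photos/cat.jpg"]
});
console.log(answer);
```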

fozziethebeat commented 9 months ago

Makes sense. Hopefully llama.cpp finalizes that API.