withcatai / node-llama-cpp

Run AI models locally on your machine with node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level
https://node-llama-cpp.withcat.ai
MIT License
829 stars · 80 forks

feat: hide llama.cpp logs #106

Closed · ExposedCat closed this 7 months ago

ExposedCat commented 9 months ago

Feature Description

Hide all the logs that llama.cpp produces about running the model, its parameters, etc.

The Solution

Either discard the output entirely, or capture it and expose it to the caller through a callback when the model is run internally.
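
A hypothetical sketch of what that could look like from the caller's side, using the v2-style `new LlamaModel(...)` constructor; the `llamaLogs` and `onLlamaLog` option names are invented here for illustration and are not part of the library's API:

```typescript
import {LlamaModel} from "node-llama-cpp";

// Option A: drop everything llama.cpp would normally print.
const quietModel = new LlamaModel({
    modelPath: "path/to/model.gguf",
    llamaLogs: false // hypothetical option, not in the current API
});

// Option B: receive each llama.cpp log line through a callback
// and route it into the application's own logging.
const observedModel = new LlamaModel({
    modelPath: "path/to/model.gguf",
    onLlamaLog: (line: string) => console.debug(`[llama.cpp] ${line}`) // hypothetical callback
});
```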

Considered Alternatives

In general, non-optional logging in (any) library is bad practice.

Additional Context

Thanks for your work ❤️

Related Features to This Feature Request

Are you willing to resolve this issue by submitting a Pull Request?

Yes, I have the time, and I know how to start.

giladgd commented 9 months ago

I already plan to try to do it as part of v3 (#105)

This library depends on llama.cpp's support for disabling logs. As far as I've seen, llama.cpp's logs can be disabled with a specific build flag, so I plan to add a flag to the download and build commands that enables logs, with logging disabled by default.

github-actions[bot] commented 7 months ago

:tada: This issue has been resolved in version 3.0.0-beta.6 :tada:

The release is available on:

Your semantic-release bot :package::rocket:
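
For reference, a minimal sketch of how log output can be controlled in the 3.0.0 betas, assuming `getLlama()` accepts `logLevel` and `logger` options as described in the v3 documentation (check the current docs for the exact names and signatures):

```typescript
import {getLlama, LlamaLogLevel} from "node-llama-cpp";

// Only let warnings and errors from llama.cpp through.
const llama = await getLlama({
    logLevel: LlamaLogLevel.warn
});

// Or take over log handling entirely and route every line yourself.
const llamaWithCustomLogger = await getLlama({
    logLevel: LlamaLogLevel.debug,
    logger: (level, message) => console.error(`[llama.cpp:${level}] ${message}`)
});
```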