withcatai / node-llama-cpp

Run AI models locally on your machine with node.js bindings for llama.cpp. Force a JSON schema on the model output on the generation level
https://node-llama-cpp.withcat.ai
MIT License

Set repeat-penalty request #57

Closed: v4lentin1879 closed this issue 11 months ago

v4lentin1879 commented 11 months ago

Feature Description

It would be nice to be able to pass a repeat-penalty parameter to the model as well.

The Solution

Add a new parameter that passes the repeat-penalty through to the model.
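For background, the classic repetition penalty in llama.cpp rescales the logits of tokens that appeared in a recent window before sampling: a positive logit is divided by the penalty and a negative one is multiplied by it, so repeated tokens become less likely to be picked again. A minimal TypeScript sketch of that behavior (the function name and signature here are illustrative only, not node-llama-cpp's API):

```typescript
// Sketch of llama.cpp-style repetition penalty (illustrative, not this library's API).
// `penalty` > 1 discourages repeats; `penalty` === 1 leaves logits unchanged.
function applyRepeatPenalty(
  logits: number[],       // one logit per vocabulary token id
  recentTokens: number[], // token ids inside the penalty window (e.g. the last 64)
  penalty: number
): number[] {
  const out = logits.slice();
  for (const id of new Set(recentTokens)) {
    // Shrink positive logits, push negative logits further down.
    out[id] = out[id] > 0 ? out[id] / penalty : out[id] * penalty;
  }
  return out;
}
```

With `penalty = 2`, a recently seen token with logit `2.0` drops to `1.0`, while one with logit `-1.0` drops to `-2.0`; tokens outside the window are untouched.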

Considered Alternatives

I tried implementing it myself, but the repo wouldn't compile properly for me.

Additional Context

No response

Related Features to This Feature Request

Are you willing to resolve this issue by submitting a Pull Request?

Yes, I have the time, but I don't know how to start. I would need guidance.

github-actions[bot] commented 11 months ago

:tada: This issue has been resolved in version 2.6.0 :tada:

The release is available on:

Your semantic-release bot :package::rocket: