withcatai / node-llama-cpp

Run AI models locally on your machine with node.js bindings for llama.cpp. Force a JSON schema on the model output at the generation level.
https://node-llama-cpp.withcat.ai
MIT License

Set temperature request. #14

Closed nigel-daniels closed 1 year ago

nigel-daniels commented 1 year ago

Enhancement request: I've based a langchain module on this repo and wondered if it would be possible to support setting the temperature?
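For context on what the requested option does: temperature scales the model's logits before they are turned into sampling probabilities, so lower values make the output more deterministic and higher values make it more varied. A minimal sketch in plain JavaScript (illustrating the math only, not the node-llama-cpp API; the function name is made up for this example):

```javascript
// Temperature scaling: divide each logit by T before applying softmax.
// T < 1 sharpens the distribution (more greedy), T > 1 flattens it (more random).
function softmaxWithTemperature(logits, temperature) {
  const scaled = logits.map((l) => l / temperature);
  const max = Math.max(...scaled); // subtract the max for numerical stability
  const exps = scaled.map((s) => Math.exp(s - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

const logits = [2.0, 1.0, 0.5];
console.log(softmaxWithTemperature(logits, 1.0)); // baseline distribution
console.log(softmaxWithTemperature(logits, 0.5)); // top token gets more probability mass
```

A sampler then draws the next token from the resulting distribution instead of always taking the argmax.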

github-actions[bot] commented 1 year ago

:tada: This issue has been resolved in version 2.1.0 :tada:

The release is available on:

Your semantic-release bot :package::rocket: