withcatai / node-llama-cpp

Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.
https://node-llama-cpp.withcat.ai
MIT License

feat(minor): add `--printTimings` option to the `chat` CLI command #138

Closed · JonHolman closed this pull request 8 months ago

JonHolman commented 8 months ago

Description of change

Add a `--printTimings` option to the `chat` CLI command.
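For context, a minimal usage sketch of the new flag; the model file path is a placeholder, `--model` is the `chat` command's existing model-path option, and the flag would presumably make the command print llama.cpp's timing summary (model load, prompt evaluation, and token generation times) after the session:

```bash
# Start an interactive chat session with a local GGUF model and
# print llama.cpp's timings when generation finishes.
npx node-llama-cpp chat --model ./models/model.gguf --printTimings
```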


giladgd commented 8 months ago

@JonHolman Thanks for the PR! Can you please add screenshots of what it looks like after your change?

JonHolman commented 8 months ago

> @JonHolman Thanks for the PR! Can you please add screenshots of what it looks like after your change?

(Screenshot, 2024-01-20: terminal output of the `chat` command after the change.)
github-actions[bot] commented 8 months ago

:tada: This PR is included in version 2.8.5 :tada:

The release is available on:

- GitHub release
- npm package (@latest dist-tag)

Your semantic-release bot :package::rocket:

github-actions[bot] commented 8 months ago

:tada: This PR is included in version 3.0.0-beta.4 :tada:

The release is available on:

- GitHub release
- npm package (@beta dist-tag)

Your semantic-release bot :package::rocket: