### Description of change
* `TemplateChatWrapperOptions` changes
* Detect `cmake` binary issues and suggest fixes on detection
* Fix the "Failed to detect a default CUDA architecture" CUDA compilation error
* Pull `llama.cpp` changes to make embedding work again
* Pull `llama.cpp` changes to support mamba models
* Print `llama.cpp` logs before exit
* Rename `.buildMetadata.json` to not start with a dot, to make using this library together with bundlers easier
* Fix: `DisposedError` was thrown when calling `.dispose()`
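As a hypothetical illustration of the `.dispose()` fix (the exact objects involved are an assumption, not taken from this PR), disposal is expected to complete cleanly rather than throw:

```typescript
import {getLlama} from "node-llama-cpp";

const llama = await getLlama();
// "model.gguf" is a placeholder path to a local model file
const model = await llama.loadModel({modelPath: "model.gguf"});
const context = await model.createContext();

// Before this change, a DisposedError could be thrown while disposing;
// after it, these calls are expected to resolve without throwing.
await context.dispose();
await model.dispose();
```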
### How to use node-llama-cpp after this change
#### Regular context
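A sketch of regular-context usage with the v3 beta API (`getLlama`, `LlamaChatSession`); the model path is a placeholder and the prompt text is illustrative:

```typescript
import {getLlama, LlamaChatSession} from "node-llama-cpp";

// Load the llama.cpp binding best suited for this machine
const llama = await getLlama();

// "model.gguf" is a placeholder path to a local model file
const model = await llama.loadModel({
    modelPath: "model.gguf"
});

// Create a context and attach a chat session to one of its sequences
const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});

const answer = await session.prompt("Hi there, how are you?");
console.log("AI: " + answer);
```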
#### Embedding
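A sketch of embedding usage after this change, assuming the v3 beta API (`createEmbeddingContext`, `getEmbeddingFor`); the model path is a placeholder:

```typescript
import {getLlama} from "node-llama-cpp";

const llama = await getLlama();

// "model.gguf" is a placeholder path to a local embedding-capable model
const model = await llama.loadModel({
    modelPath: "model.gguf"
});

// Use a dedicated embedding context instead of a regular context
const embeddingContext = await model.createEmbeddingContext();

const embedding = await embeddingContext.getEmbeddingFor("Hello world");
console.log(embedding.vector);
```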
### Pull-Request Checklist
- [x] Code is up-to-date with the `master` branch
- [x] `npm run format` to apply eslint formatting
- [x] `npm run test` passes with this change
- [x] This pull request links relevant issues as `Fixes #0000`
- [x] There are new or updated unit tests validating the change
- [ ] Documentation has been updated to reflect this change
- [x] The new commits and pull request title follow conventions explained in pull request guidelines (PRs that do not follow this convention will not be merged)