- feat: add prompt completion to the Electron example
- feat: model compatibility warnings
- feat: Functionary `v2.llama3` support
- feat: parallel function calling with plain Llama 3 Instruct
- feat: improve function calling support for the default chat wrapper
- feat: parallel model downloads
- feat: add the Electron example to releases
- feat: improve the Electron example
- feat: `customStopTriggers` for `LlamaCompletion`
- fix: improve CUDA detection on Windows
- fix: performance improvements
- refactor: make `functionCallMessageTemplate` an object
- chore: adapt to `llama.cpp` breaking changes
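The `customStopTriggers` option lets a caller stop a `LlamaCompletion` generation when the output hits an arbitrary string. A minimal conceptual sketch of that mechanic in plain TypeScript (this is not the node-llama-cpp API; the function names here are illustrative only):

```typescript
// Return the trigger that the generated text currently ends with, if any.
function findStopTrigger(text: string, triggers: string[]): string | null {
    for (const trigger of triggers) {
        if (trigger.length > 0 && text.endsWith(trigger))
            return trigger;
    }

    return null;
}

// Simulate a token-by-token generation loop that halts on a custom stop
// trigger and trims the trigger itself from the returned completion.
function generateWithStops(tokens: string[], triggers: string[]): string {
    let output = "";

    for (const token of tokens) {
        output += token;

        const hit = findStopTrigger(output, triggers);
        if (hit != null)
            return output.slice(0, output.length - hit.length);
    }

    return output;
}
```

In the actual library, the triggers are matched against the model's streamed output during generation rather than against pre-split tokens as in this sketch.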
### Pull-Request Checklist
- [x] Code is up-to-date with the `master` branch
- [x] `npm run format` to apply eslint formatting
- [x] `npm run test` passes with this change
- [x] This pull request links relevant issues as `Fixes #0000`
- [x] There are new or updated unit tests validating the change
- [ ] Documentation has been updated to reflect this change
- [x] The new commits and pull request title follow conventions explained in pull request guidelines (PRs that do not follow this convention will not be merged)
### Description of change
- feat: Functionary `v2.llama3` support
- feat: `customStopTriggers` support for `LlamaCompletion`
- refactor: make `functionCallMessageTemplate` an object
- chore: adapt to `llama.cpp` breaking changes