This PR contains the following updates:

| Package | Change |
| --- | --- |
| `llama_cpp_python` | `==0.2.63` -> `==0.2.64` |

### Release Notes

abetlen/llama-cpp-python (llama_cpp_python)
### [`v0.2.64`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0264)
[Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.63...v0.2.64)
- feat: Update llama.cpp to [ggerganov/llama.cpp@`4e96a81`](https://togithub.com/ggerganov/llama.cpp/commit/4e96a812b3ce7322a29a3008db2ed73d9087b176)
- feat: Add `llama-3` chat format by [@andreabak](https://togithub.com/andreabak) in [#1371](https://togithub.com/abetlen/llama-cpp-python/issues/1371) (see the usage sketch after this list)
- feat: Use new llama_token_is_eog in create_completions by [@abetlen](https://togithub.com/abetlen) in [`d40a250`](https://togithub.com/abetlen/llama-cpp-python/commit/d40a250ef3cfaa8224d12c83776a2f1de96ae3d1)
- feat(server): Provide the ability to dynamically allocate all threads if desired using -1 by [@sean-bailey](https://togithub.com/sean-bailey) in [#1364](https://togithub.com/abetlen/llama-cpp-python/issues/1364) (see the server launch sketch after this list)
- ci: Build arm64 wheels by [@gaby](https://togithub.com/gaby) in [`611781f`](https://togithub.com/abetlen/llama-cpp-python/commit/611781f5319719a3d05fefccbbf0cc321742a026)
- fix: Update scikit-build-core build dependency to avoid a bug in 0.9.1 by [@evelkey](https://togithub.com/evelkey) in [#1370](https://togithub.com/abetlen/llama-cpp-python/issues/1370)
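
For the new `llama-3` chat format, a minimal usage sketch follows. The model path is a placeholder and the `"llama-3"` format name is taken from the changelog entry above; adjust both to your setup.

```python
from llama_cpp import Llama

# Minimal sketch: select the newly added "llama-3" chat format when loading
# a Llama 3 instruct model (the GGUF path below is a placeholder).
llm = Llama(
    model_path="./Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",  # placeholder path
    chat_format="llama-3",  # chat format added in #1371
    n_ctx=8192,
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what changed in v0.2.64."},
    ],
)
print(response["choices"][0]["message"]["content"])
```

If `chat_format` is omitted, recent versions attempt to infer a template from the model's GGUF metadata, but passing it explicitly is the unambiguous path.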
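For the server change from [#1364](https://togithub.com/abetlen/llama-cpp-python/issues/1364), here is a hedged sketch of launching `llama_cpp.server` with `-1` threads; the flag spelling and the model path are assumptions, not verified against the server's CLI help.

```python
import subprocess

# Sketch: start the OpenAI-compatible server and let it claim all available
# CPU threads by passing -1 (per #1364). Flag names are assumed from the
# server settings; the model path is a placeholder.
subprocess.run(
    [
        "python", "-m", "llama_cpp.server",
        "--model", "./Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",  # placeholder
        "--n_threads", "-1",  # -1 = dynamically allocate all threads
    ],
    check=True,
)
```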
### Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Enabled.

♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.

- [ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Mend Renovate. View repository job log here.