abetlen/llama-cpp-python (llama-cpp-python)
### [`v0.2.83`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0283)
[Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.82...v0.2.83)
- feat: Update llama.cpp to [ggerganov/llama.cpp@`081fe43`](https://togithub.com/ggerganov/llama.cpp/commit/081fe431aa8fb6307145c4feb3eed4f48cab19f8)
- feat: Add 'required' literal to ChatCompletionToolChoiceOption by [@mjschock](https://togithub.com/mjschock) in [#1597](https://togithub.com/abetlen/llama-cpp-python/issues/1597)
- fix: Change repeat_penalty to 1.0 to match llama.cpp defaults by [@ddh0](https://togithub.com/ddh0) in [#1590](https://togithub.com/abetlen/llama-cpp-python/issues/1590)
- fix(docs): Update README.md typo by [@ericcurtin](https://togithub.com/ericcurtin) in [#1589](https://togithub.com/abetlen/llama-cpp-python/issues/1589)
- fix(server): Use split_mode from model settings by [@grider-withourai](https://togithub.com/grider-withourai) in [#1594](https://togithub.com/abetlen/llama-cpp-python/issues/1594)
- feat(ci): Dockerfile update base images and post-install cleanup by [@Smartappli](https://togithub.com/Smartappli) in [#1530](https://togithub.com/abetlen/llama-cpp-python/issues/1530)
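Two of the changes above affect request construction: `tool_choice` now accepts the literal `"required"` ([#1597](https://togithub.com/abetlen/llama-cpp-python/issues/1597)), and `repeat_penalty` defaults to `1.0` to match llama.cpp ([#1590](https://togithub.com/abetlen/llama-cpp-python/issues/1590)). A minimal sketch of an OpenAI-style chat-completion payload using the new literal — the model name and `get_weather` tool are hypothetical placeholders, not from the changelog:

```python
# Hypothetical request payload illustrating the v0.2.83 changes.
request = {
    "model": "llama-3",  # placeholder model name
    "messages": [{"role": "user", "content": "What is the weather in Oslo?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool for illustration
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    # As of v0.2.83, "required" is accepted as a ChatCompletionToolChoiceOption
    # literal (alongside "none" and "auto"), forcing the model to call a tool.
    "tool_choice": "required",
}

# repeat_penalty now defaults to 1.0 (no penalty), matching llama.cpp;
# pass it explicitly only if you want the old behavior back.
assert request["tool_choice"] in ("none", "auto", "required")
```

If your code previously relied on the old `repeat_penalty` default, pass the former value explicitly after upgrading to keep sampling behavior unchanged.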
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
- [ ] If you want to rebase/retry this PR, check this box
This PR contains the following updates:

| Package | Change |
|---|---|
| [llama-cpp-python](https://togithub.com/abetlen/llama-cpp-python) | `==0.2.82` -> `==0.2.83` |
This PR was generated by Mend Renovate. View the repository job log.