abetlen/llama-cpp-python (llama_cpp_python)
### [`v0.2.89`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0289)
[Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.88...v0.2.89)
- feat: Update llama.cpp to [ggerganov/llama.cpp@`cfac111`](https://togithub.com/ggerganov/llama.cpp/commit/cfac111e2b3953cdb6b0126e67a2487687646971)
- fix: Llama.close didn't free lora adapter by [@jkawamoto](https://togithub.com/jkawamoto) in [#1679](https://togithub.com/abetlen/llama-cpp-python/issues/1679)
- fix: missing dependencies for test by [@jkawamoto](https://togithub.com/jkawamoto) in [#1680](https://togithub.com/abetlen/llama-cpp-python/issues/1680)
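The `Llama.close` fix above matters for callers who rely on `close()` to release every native resource the model owns, including a loaded LoRA adapter. As a rough sketch of that ownership contract (using stand-in classes, not the real `llama_cpp` API):

```python
import contextlib


class DummyAdapter:
    """Stand-in for a LoRA adapter handle; records whether it was freed."""

    def __init__(self):
        self.freed = False

    def free(self):
        self.freed = True


class DummyModel:
    """Minimal stand-in for llama_cpp.Llama with an attached adapter."""

    def __init__(self):
        self._adapter = DummyAdapter()

    def close(self):
        # The fix ensures close() also releases the LoRA adapter,
        # not just the model and context.
        self._adapter.free()


model = DummyModel()
with contextlib.closing(model):
    pass  # run inference here

assert model._adapter.freed
```

The same pattern applies to the real class: wrapping the model in `contextlib.closing` (or calling `close()` in a `finally` block) now reliably frees the adapter as well.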
### [`v0.2.88`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0288)
[Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.87...v0.2.88)
- feat: Update llama.cpp to [ggerganov/llama.cpp@`fc4ca27`](https://togithub.com/ggerganov/llama.cpp/commit/fc4ca27b25464a11b3b86c9dbb5b6ed6065965c2)
- fix: only print 'cache saved' in verbose mode by [@lsorber](https://togithub.com/lsorber) in [#1668](https://togithub.com/abetlen/llama-cpp-python/issues/1668)
- fix: Added back from_file method to LlamaGrammar by [@ExtReMLapin](https://togithub.com/ExtReMLapin) in [#1673](https://togithub.com/abetlen/llama-cpp-python/issues/1673)
- fix: grammar prints on each call by [@abetlen](https://togithub.com/abetlen) in [`0998ea0`](https://togithub.com/abetlen/llama-cpp-python/commit/0998ea0deea076a547d54bd598d6b413b588ee2b)
- feat: Enable recursive search of HFFS.ls when using from_pretrained by [@benHeidabetlen](https://togithub.com/benHeidabetlen) in [#1656](https://togithub.com/abetlen/llama-cpp-python/issues/1656)
- feat: Add more detailed log for prefix-match by [@xu-song](https://togithub.com/xu-song) in [#1659](https://togithub.com/abetlen/llama-cpp-python/issues/1659)
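The restored `LlamaGrammar.from_file` loads a grammar in llama.cpp's GBNF format from disk. A minimal grammar file (the filename `answer.gbnf` is only an example) could contain:

```
root ::= "yes" | "no"
```

A caller would then load it with `LlamaGrammar.from_file("answer.gbnf")` and pass the result as the `grammar` argument of a completion call to constrain the model's output.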
### [`v0.2.87`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0287)
[Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.86...v0.2.87)
- feat: Update llama.cpp to [ggerganov/llama.cpp@`be55695`](https://togithub.com/ggerganov/llama.cpp/commit/be55695eff44784a141a863f273661a6bce63dfc)
- fix: Include all llama.cpp source files and subdirectories by [@abetlen](https://togithub.com/abetlen) in [`9cad571`](https://togithub.com/abetlen/llama-cpp-python/commit/9cad5714ae6e7c250af8d0bbb179f631368c928b)
- feat(ci): Re-build wheel index automatically when releases are created by [@abetlen](https://togithub.com/abetlen) in [`198f47d`](https://togithub.com/abetlen/llama-cpp-python/commit/198f47dc1bd202fd2b71b29e041a9f33fe40bfad)
### [`v0.2.86`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0286)
[Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.85...v0.2.86)
- feat: Update llama.cpp to [ggerganov/llama.cpp@`398ede5`](https://togithub.com/ggerganov/llama.cpp/commit/398ede5efeb07b9adf9fbda7ea63f630d476a792)
- feat: Ported back new grammar changes from C++ to Python implementation by [@ExtReMLapin](https://togithub.com/ExtReMLapin) in [#1637](https://togithub.com/abetlen/llama-cpp-python/issues/1637)
- fix: llama_grammar_accept_token arg order by [@tc-wolf](https://togithub.com/tc-wolf) in [#1649](https://togithub.com/abetlen/llama-cpp-python/issues/1649)
### [`v0.2.85`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0285)
[Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.84...v0.2.85)
- feat: Update llama.cpp to [ggerganov/llama.cpp@`398ede5`](https://togithub.com/ggerganov/llama.cpp/commit/398ede5efeb07b9adf9fbda7ea63f630d476a792)
- fix: Missing LoRA adapter after API change by [@shamitv](https://togithub.com/shamitv) in [#1630](https://togithub.com/abetlen/llama-cpp-python/issues/1630)
- fix(docker): Update Dockerfile BLAS options by [@olivierdebauche](https://togithub.com/olivierdebauche) in [#1632](https://togithub.com/abetlen/llama-cpp-python/issues/1632)
- fix(docker): Fix GGML_CUDA param by [@olivierdebauche](https://togithub.com/olivierdebauche) in [#1633](https://togithub.com/abetlen/llama-cpp-python/issues/1633)
- fix(docker): Update Dockerfile build options from `LLAMA_` to `GGML_` by [@olivierdebauche](https://togithub.com/olivierdebauche) in [#1634](https://togithub.com/abetlen/llama-cpp-python/issues/1634)
- feat: FreeBSD compatibility by [@yurivict](https://togithub.com/yurivict) in [#1635](https://togithub.com/abetlen/llama-cpp-python/issues/1635)
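The `LLAMA_` to `GGML_` rename in the Dockerfiles mirrors the upstream llama.cpp CMake option rename, so source builds outside Docker need the new names too. For example, enabling CUDA when installing from source (assuming the CUDA toolkit is available):

```
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python --upgrade --no-cache-dir
```

Older `-DLLAMA_CUBLAS`-style flags are silently ignored by recent llama.cpp versions, which is why the Dockerfile options had to be updated.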
### [`v0.2.84`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0284)
[Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.83...v0.2.84)
- feat: Update llama.cpp to [ggerganov/llama.cpp@`4730fac`](https://togithub.com/ggerganov/llama.cpp/commit/4730faca618ff9cee0780580145e3cbe86f24876)
- fix: Correct run.sh filepath in Simple Docker implementation by [@mashuk999](https://togithub.com/mashuk999) in [#1626](https://togithub.com/abetlen/llama-cpp-python/issues/1626)
### [`v0.2.83`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0283)
[Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.82...v0.2.83)
- feat: Update llama.cpp to [ggerganov/llama.cpp@`081fe43`](https://togithub.com/ggerganov/llama.cpp/commit/081fe431aa8fb6307145c4feb3eed4f48cab19f8)
- feat: Add 'required' literal to ChatCompletionToolChoiceOption by [@mjschock](https://togithub.com/mjschock) in [#1597](https://togithub.com/abetlen/llama-cpp-python/issues/1597)
- fix: Change repeat_penalty to 1.0 to match llama.cpp defaults by [@ddh0](https://togithub.com/ddh0) in [#1590](https://togithub.com/abetlen/llama-cpp-python/issues/1590)
- fix(docs): Update README.md typo by [@ericcurtin](https://togithub.com/ericcurtin) in [#1589](https://togithub.com/abetlen/llama-cpp-python/issues/1589)
- fix(server): Use split_mode from model settings by [@grider-withourai](https://togithub.com/grider-withourai) in [#1594](https://togithub.com/abetlen/llama-cpp-python/issues/1594)
- feat(ci): Dockerfile update base images and post-install cleanup by [@Smartappli](https://togithub.com/Smartappli) in [#1530](https://togithub.com/abetlen/llama-cpp-python/issues/1530)
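With the `'required'` literal accepted by `ChatCompletionToolChoiceOption`, OpenAI-compatible clients of the bundled server can force the model to call some tool rather than answer in plain text. A hypothetical request payload (the tool name and schema are invented for illustration):

```python
# Payload for the OpenAI-compatible chat completions endpoint;
# "get_weather" and its schema are illustrative, not part of the library.
payload = {
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                },
            },
        }
    ],
    # New in 0.2.83: "required" forces the model to call some tool.
    "tool_choice": "required",
}
```

Previously only `"none"`, `"auto"`, or a specific named tool could be supplied here.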
### Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Enabled.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
- [ ] If you want to rebase/retry this PR, check this box
This PR contains the following updates: `llama_cpp_python` `==0.2.82` -> `==0.2.89`
This PR was generated by Mend Renovate. View the repository job log.