**Closed**: renovate[bot] closed this PR 2 months ago.
Because you closed this PR without merging, Renovate will ignore this update (`==0.2.89`). You will get a PR once a newer version is released. To ignore this dependency forever, add it to the `ignoreDeps` array of your Renovate config.
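For instance, a minimal `renovate.json` that ignores this package could look like the following sketch (merge the `ignoreDeps` entry into your existing config rather than replacing it):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "ignoreDeps": ["llama-cpp-python"]
}
```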
If you accidentally closed this PR, or if you changed your mind: rename this PR to get a fresh replacement PR.
This PR contains the following updates:
| Package | Change |
|---|---|
| [llama-cpp-python](https://togithub.com/abetlen/llama-cpp-python) | `==0.2.79` -> `==0.2.89` |
## Release Notes

**abetlen/llama-cpp-python (llama-cpp-python)**
### [`v0.2.89`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0289) [Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.88...v0.2.89)

- feat: Update llama.cpp to [ggerganov/llama.cpp@`cfac111`](https://togithub.com/ggerganov/llama.cpp/commit/cfac111e2b3953cdb6b0126e67a2487687646971)
- fix: Llama.close didn't free lora adapter by [@jkawamoto](https://togithub.com/jkawamoto) in [#1679](https://togithub.com/abetlen/llama-cpp-python/issues/1679)
- fix: missing dependencies for test by [@jkawamoto](https://togithub.com/jkawamoto) in [#1680](https://togithub.com/abetlen/llama-cpp-python/issues/1680)

### [`v0.2.88`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0288) [Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.87...v0.2.88)

- feat: Update llama.cpp to [ggerganov/llama.cpp@`fc4ca27`](https://togithub.com/ggerganov/llama.cpp/commit/fc4ca27b25464a11b3b86c9dbb5b6ed6065965c2)
- fix: only print 'cache saved' in verbose mode by [@lsorber](https://togithub.com/lsorber) in [#1668](https://togithub.com/abetlen/llama-cpp-python/issues/1668)
- fix: Added back from_file method to LlamaGrammar by [@ExtReMLapin](https://togithub.com/ExtReMLapin) in [#1673](https://togithub.com/abetlen/llama-cpp-python/issues/1673)
- fix: grammar prints on each call by [@abetlen](https://togithub.com/abetlen) in [`0998ea0`](https://togithub.com/abetlen/llama-cpp-python/commit/0998ea0deea076a547d54bd598d6b413b588ee2b)
- feat: Enable recursive search of HFFS.ls when using from_pretrained by [@benHeidabetlen](https://togithub.com/benHeidabetlen) in [#1656](https://togithub.com/abetlen/llama-cpp-python/issues/1656)
- feat: Add more detailed log for prefix-match by [@xu-song](https://togithub.com/xu-song) in [#1659](https://togithub.com/abetlen/llama-cpp-python/issues/1659)
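To illustrate the `from_pretrained` change in v0.2.88 above: a minimal usage sketch, not taken from the release notes; the repo id and filename pattern are placeholders.

```python
from llama_cpp import Llama

# Download a GGUF file from the Hugging Face Hub and load it.
# v0.2.88 made the underlying HFFS.ls lookup recursive (#1656), so the
# filename pattern can also match files nested in repo subdirectories.
llm = Llama.from_pretrained(
    repo_id="Qwen/Qwen2-0.5B-Instruct-GGUF",  # placeholder repo
    filename="*q8_0.gguf",                    # glob matched against repo files
    verbose=False,
)
```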
### [`v0.2.87`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0287) [Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.86...v0.2.87)

- feat: Update llama.cpp to [ggerganov/llama.cpp@`be55695`](https://togithub.com/ggerganov/llama.cpp/commit/be55695eff44784a141a863f273661a6bce63dfc)
- fix: Include all llama.cpp source files and subdirectories by [@abetlen](https://togithub.com/abetlen) in [`9cad571`](https://togithub.com/abetlen/llama-cpp-python/commit/9cad5714ae6e7c250af8d0bbb179f631368c928b)
- feat(ci): Re-build wheel index automatically when releases are created by [@abetlen](https://togithub.com/abetlen) in [`198f47d`](https://togithub.com/abetlen/llama-cpp-python/commit/198f47dc1bd202fd2b71b29e041a9f33fe40bfad)

### [`v0.2.86`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0286) [Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.85...v0.2.86)

- feat: Update llama.cpp to [ggerganov/llama.cpp@`398ede5`](https://togithub.com/ggerganov/llama.cpp/commit/398ede5efeb07b9adf9fbda7ea63f630d476a792)
- feat: Ported back new grammar changes from C++ to Python implementation by [@ExtReMLapin](https://togithub.com/ExtReMLapin) in [#1637](https://togithub.com/abetlen/llama-cpp-python/issues/1637)
- fix: llama_grammar_accept_token arg order by [@tc-wolf](https://togithub.com/tc-wolf) in [#1649](https://togithub.com/abetlen/llama-cpp-python/issues/1649)
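The grammar entries in v0.2.86 (and the `from_file` restoration in v0.2.88, #1673) touch the `LlamaGrammar` API. A minimal sketch of typical usage, assuming a local GGUF model; the model path is a placeholder:

```python
from llama_cpp import Llama, LlamaGrammar

# GBNF grammar constraining output to "yes" or "no".
grammar = LlamaGrammar.from_string('root ::= "yes" | "no"')
# LlamaGrammar.from_file("answer.gbnf") is the equivalent file-based path.

llm = Llama(model_path="./model.gguf", verbose=False)  # placeholder path
result = llm("Is the sky blue? Answer:", grammar=grammar, max_tokens=4)
print(result["choices"][0]["text"])
```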
### [`v0.2.85`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0285) [Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.84...v0.2.85)

- feat: Update llama.cpp to [ggerganov/llama.cpp@`398ede5`](https://togithub.com/ggerganov/llama.cpp/commit/398ede5efeb07b9adf9fbda7ea63f630d476a792)
- fix: Missing LoRA adapter after API change by [@shamitv](https://togithub.com/shamitv) in [#1630](https://togithub.com/abetlen/llama-cpp-python/issues/1630)
- fix(docker): Update Dockerfile BLAS options by [@olivierdebauche](https://togithub.com/olivierdebauche) in [#1632](https://togithub.com/abetlen/llama-cpp-python/issues/1632)
- fix(docker): Fix GGML_CUDA param by [@olivierdebauche](https://togithub.com/olivierdebauche) in [#1633](https://togithub.com/abetlen/llama-cpp-python/issues/1633)
- fix(docker): Update Dockerfile build options from `LLAMA_` to `GGML_` by [@olivierdebauche](https://togithub.com/olivierdebauche) in [#1634](https://togithub.com/abetlen/llama-cpp-python/issues/1634)
- feat: FreeBSD compatibility by [@yurivict](https://togithub.com/yurivict) in [#1635](https://togithub.com/abetlen/llama-cpp-python/issues/1635)

### [`v0.2.84`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0284) [Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.83...v0.2.84)

- feat: Update llama.cpp to [ggerganov/llama.cpp@`4730fac`](https://togithub.com/ggerganov/llama.cpp/commit/4730faca618ff9cee0780580145e3cbe86f24876)
- fix: Correcting run.sh filepath in Simple Docker implementation by [@mashuk999](https://togithub.com/mashuk999) in [#1626](https://togithub.com/abetlen/llama-cpp-python/issues/1626)

### [`v0.2.83`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0283) [Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.82...v0.2.83)

- feat: Update llama.cpp to [ggerganov/llama.cpp@`081fe43`](https://togithub.com/ggerganov/llama.cpp/commit/081fe431aa8fb6307145c4feb3eed4f48cab19f8)
- feat: Add 'required' literal to ChatCompletionToolChoiceOption by [@mjschock](https://togithub.com/mjschock) in [#1597](https://togithub.com/abetlen/llama-cpp-python/issues/1597)
- fix: Change repeat_penalty to 1.0 to match llama.cpp defaults by [@ddh0](https://togithub.com/ddh0) in [#1590](https://togithub.com/abetlen/llama-cpp-python/issues/1590)
- fix(docs): Update README.md typo by [@ericcurtin](https://togithub.com/ericcurtin) in [#1589](https://togithub.com/abetlen/llama-cpp-python/issues/1589)
- fix(server): Use split_mode from model settings by [@grider-withourai](https://togithub.com/grider-withourai) in [#1594](https://togithub.com/abetlen/llama-cpp-python/issues/1594)
- feat(ci): Dockerfile update base images and post-install cleanup by [@Smartappli](https://togithub.com/Smartappli) in [#1530](https://togithub.com/abetlen/llama-cpp-python/issues/1530)

### [`v0.2.82`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0282) [Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.81...v0.2.82)

- feat: Update llama.cpp to [ggerganov/llama.cpp@`7fdb6f7`](https://togithub.com/ggerganov/llama.cpp/commit/7fdb6f73e35605c8dbc39e9f19cd9ed84dbc87f2)

### [`v0.2.81`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0281) [Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.80...v0.2.81)

- feat: Update llama.cpp to [ggerganov/llama.cpp@`9689673`](https://togithub.com/ggerganov/llama.cpp/commit/968967376dc2c018d29f897c4883d335bbf384fb)
- fix(ci): Fix CUDA wheels, use LLAMA_CUDA instead of removed LLAMA_CUBLAS by [@abetlen](https://togithub.com/abetlen) in [`4fb6fc1`](https://togithub.com/abetlen/llama-cpp-python/commit/4fb6fc12a02a68884c25dd9f6a421cacec7604c6)
- fix(ci): Fix MacOS release, use macos-12 image instead of removed macos-11 by [@abetlen](https://togithub.com/abetlen) in [`3a551eb`](https://togithub.com/abetlen/llama-cpp-python/commit/3a551eb5263fdbd24b36d7770856374c04e92788)

### [`v0.2.80`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0280) [Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.79...v0.2.80)

- feat: Update llama.cpp to [ggerganov/llama.cpp@`023b880`](https://togithub.com/ggerganov/llama.cpp/commit/023b8807e10bc3ade24a255f01c1ad2a01bb4228)
- fix(server): Fix bug in FastAPI streaming response where dependency was released before request completes causing SEGFAULT by [@abetlen](https://togithub.com/abetlen) in [`296304b`](https://togithub.com/abetlen/llama-cpp-python/commit/296304b60bb83689659883c9cc24f4c074dd88ff)
- fix(server): Update default config value for embeddings to False to fix error in text generation where logits were not allocated by llama.cpp by [@abetlen](https://togithub.com/abetlen) in [`bf5e0bb`](https://togithub.com/abetlen/llama-cpp-python/commit/bf5e0bb4b151f4ca2f5a21af68eb832a96a79d75)
- fix(ci): Fix the CUDA workflow by [@oobabooga](https://togithub.com/oobabooga) in [#1551](https://togithub.com/abetlen/llama-cpp-python/issues/1551)
- docs: Update readme examples to use newer Qwen2 model by [@jncraton](https://togithub.com/jncraton) in [#1544](https://togithub.com/abetlen/llama-cpp-python/issues/1544)
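One behavioural note from v0.2.83 above (#1590): the default `repeat_penalty` changed to 1.0 (no penalty) to match llama.cpp. A hedged sketch of pinning a penalty explicitly for callers relying on the earlier behaviour; the model path and penalty value are placeholders:

```python
from llama_cpp import Llama

llm = Llama(model_path="./model.gguf", verbose=False)  # placeholder path

# Since v0.2.83 repeat_penalty defaults to 1.0, matching llama.cpp;
# pass a value explicitly to keep penalizing repeated tokens.
out = llm(
    "List three colors:",
    max_tokens=32,
    repeat_penalty=1.1,  # example value, not a recommendation
)
print(out["choices"][0]["text"])
```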
## Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.