### Release Notes

abetlen/llama-cpp-python (llama-cpp-python)
### [`v0.2.85`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0285)
[Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.84...v0.2.85)
- feat: Update llama.cpp to [ggerganov/llama.cpp@`398ede5`](https://togithub.com/ggerganov/llama.cpp/commit/398ede5efeb07b9adf9fbda7ea63f630d476a792)
- fix: Missing LoRA adapter after API change by [@shamitv](https://togithub.com/shamitv) in [#1630](https://togithub.com/abetlen/llama-cpp-python/issues/1630)
- fix(docker): Update Dockerfile BLAS options by [@olivierdebauche](https://togithub.com/olivierdebauche) in [#1632](https://togithub.com/abetlen/llama-cpp-python/issues/1632)
- fix(docker): Fix GGML_CUDA param by [@olivierdebauche](https://togithub.com/olivierdebauche) in [#1633](https://togithub.com/abetlen/llama-cpp-python/issues/1633)
- fix(docker): Update Dockerfile build options from `LLAMA_` to `GGML_` by [@olivierdebauche](https://togithub.com/olivierdebauche) in [#1634](https://togithub.com/abetlen/llama-cpp-python/issues/1634)
- feat: FreeBSD compatibility by [@yurivict](https://togithub.com/yurivict) in [#1635](https://togithub.com/abetlen/llama-cpp-python/issues/1635)
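Three of the Docker fixes above stem from llama.cpp renaming its CMake build options from the `LLAMA_` prefix to `GGML_`. As a rough sketch of what that rename means for installs (flag names assumed from the upstream rename; verify against your llama.cpp version), the hardware-acceleration options are now passed like this:

```shell
# Before the rename, CUDA builds used a LLAMA_-prefixed flag, e.g.:
#   CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# After the rename, the same option is GGML_-prefixed:
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python==0.2.85

# BLAS builds follow the same pattern (vendor value is illustrative):
CMAKE_ARGS="-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python==0.2.85
```

Old `LLAMA_`-prefixed flags are silently ignored by newer llama.cpp, which is why the Dockerfile updates in [#1632](https://togithub.com/abetlen/llama-cpp-python/issues/1632)–[#1634](https://togithub.com/abetlen/llama-cpp-python/issues/1634) were needed.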
### Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
- [ ] If you want to rebase/retry this PR, check this box
This PR contains the following updates:

| Package | Change |
|---|---|
| [llama-cpp-python](https://togithub.com/abetlen/llama-cpp-python) | `==0.2.84` -> `==0.2.85` |
This PR was generated by Mend Renovate. View the repository job log.