abetlen/llama-cpp-python (llama-cpp-python)
### [`v0.2.78`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0278)
[Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.77...v0.2.78)
- feat: Update llama.cpp to [ggerganov/llama.cpp@`fd5ea0f`](https://togithub.com/ggerganov/llama.cpp/commit/fd5ea0f897ecb3659d6c269ef6f3d833e865ead7)
- fix: Avoid duplicate special tokens in chat formats by [@CISC](https://togithub.com/CISC) in [#1439](https://togithub.com/abetlen/llama-cpp-python/issues/1439)
- fix: fix logprobs when BOS is not present by [@ghorbani](https://togithub.com/ghorbani) in [#1471](https://togithub.com/abetlen/llama-cpp-python/issues/1471)
- feat: adding rpc_servers parameter to Llama class by [@chraac](https://togithub.com/chraac) in [#1477](https://togithub.com/abetlen/llama-cpp-python/issues/1477)
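The new `rpc_servers` parameter from [#1477](https://togithub.com/abetlen/llama-cpp-python/issues/1477) lets a `Llama` instance offload work to llama.cpp RPC backend servers. A minimal sketch, assuming the parameter accepts a comma-separated `host:port` string; the helper below is hypothetical and not part of the library:

```python
# Hypothetical helper: build the comma-separated endpoint string that the
# new rpc_servers parameter is assumed to accept (per #1477).
def format_rpc_servers(endpoints: list[str]) -> str:
    return ",".join(endpoints)

servers = format_rpc_servers(["192.168.1.10:50052", "192.168.1.11:50052"])

# Sketch only -- requires a GGUF model and an RPC-enabled llama.cpp build:
# from llama_cpp import Llama
# llm = Llama(model_path="model.gguf", rpc_servers=servers)
```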
### [`v0.2.77`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0277)
[Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.76...v0.2.77)
- feat: Update llama.cpp to [ggerganov/llama.cpp@`bde7cd3`](https://togithub.com/ggerganov/llama.cpp/commit/bde7cd3cd949c1a85d3a199498ac98e78039d46f)
- fix: string value kv_overrides by [@abetlen](https://togithub.com/abetlen) in [`df45a4b`](https://togithub.com/abetlen/llama-cpp-python/commit/df45a4b3fe46e72664bda87301b318210c6d4782)
- fix: Fix typo in Llama3VisionAlphaChatHandler by [@abetlen](https://togithub.com/abetlen) in [`165b4dc`](https://togithub.com/abetlen/llama-cpp-python/commit/165b4dc6c188f8fda2fc616154e111f710484eba)
- fix: Use numpy recarray for candidates data, fixes bug with temp < 0 by [@abetlen](https://togithub.com/abetlen) in [`af3ed50`](https://togithub.com/abetlen/llama-cpp-python/commit/af3ed503e9ce60fe6b5365031abad4176a3536b3)
- fix: Disable Windows+CUDA workaround when compiling for HIPBLAS by [@Engininja2](https://togithub.com/Engininja2) in [#1493](https://togithub.com/abetlen/llama-cpp-python/issues/1493)
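The `kv_overrides` fix in v0.2.77 makes string-valued metadata overrides work alongside the boolean and numeric ones. A hedged sketch of what a mixed override dict might look like; the validation helper and the specific metadata keys are illustrative, not library code:

```python
# Illustrative check: kv_overrides maps GGUF metadata keys to bool, int,
# float, or (as fixed in v0.2.77) str values. Not part of llama-cpp-python.
def check_kv_overrides(overrides: dict) -> dict:
    for key, value in overrides.items():
        if not isinstance(value, (bool, int, float, str)):
            raise TypeError(
                f"unsupported override type for {key!r}: {type(value).__name__}"
            )
    return overrides

overrides = check_kv_overrides({
    "tokenizer.ggml.pre": "llama-bpe",  # string value, previously broken
    "general.use_parallel_residual": True,
})
# Sketch only: Llama(model_path="model.gguf", kv_overrides=overrides)
```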
### [`v0.2.76`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0276)
[Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.75...v0.2.76)
- feat: Update llama.cpp to [ggerganov/llama.cpp@`0df0aa8`](https://togithub.com/ggerganov/llama.cpp/commit/0df0aa8e43c3378975269a51f9b876c8692e70da)
- feat: Improve Llama.eval performance by avoiding list conversion by [@thoughtp0lice](https://togithub.com/thoughtp0lice) in [#1476](https://togithub.com/abetlen/llama-cpp-python/issues/1476)
- example: LLM inference with Ray Serve by [@rgerganov](https://togithub.com/rgerganov) in [#1465](https://togithub.com/abetlen/llama-cpp-python/issues/1465)
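The `Llama.eval` speedup in [#1476](https://togithub.com/abetlen/llama-cpp-python/issues/1476) comes from keeping tokens in array form rather than round-tripping through Python lists. A rough illustration of that idea, not the library's actual implementation:

```python
import numpy as np

# Rough illustration of the idea behind #1476: slice token batches as numpy
# views instead of converting to Python lists on every eval step.
def batch_tokens(tokens, n_batch: int):
    arr = np.asarray(tokens, dtype=np.int32)
    for start in range(0, len(arr), n_batch):
        yield arr[start : start + n_batch]  # view, no copy or list conversion

batches = list(batch_tokens([1, 2, 3, 4, 5], n_batch=2))
# batch sizes: 2, 2, 1
```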
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
- [ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Mend Renovate. View repository job log here.
This PR contains the following updates:

| Package | Change |
| --- | --- |
| [llama-cpp-python](https://togithub.com/abetlen/llama-cpp-python) | `==0.2.75` -> `==0.2.78` |