This PR contains the following updates:

| Package | Change |
|---|---|
| [llama_cpp_python](https://togithub.com/abetlen/llama-cpp-python) | `==0.2.39` -> `==0.2.43` |

Release Notes

abetlen/llama-cpp-python (llama_cpp_python)
### [`v0.2.43`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0243)
[Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.42...v0.2.43)
- feat: Update llama.cpp to [ggerganov/llama.cpp@`8084d55`](https://togithub.com/ggerganov/llama.cpp/commit/8084d554406b767d36b3250b3b787462d5dd626f)
- feat: Support batch embeddings by [@iamlemec](https://togithub.com/iamlemec) in [#1186](https://togithub.com/abetlen/llama-cpp-python/issues/1186) (see the sketch after this list)
- fix: submodule kompute is not included in sdist by [@abetlen](https://togithub.com/abetlen) in [`7dbbfde`](https://togithub.com/abetlen/llama-cpp-python/commit/7dbbfdecadebe7750be650d9409959640ff9a460)
- fix: Update openbuddy prompt format by [@abetlen](https://togithub.com/abetlen) in [`07a7837`](https://togithub.com/abetlen/llama-cpp-python/commit/07a783779a62a4aac0b11161c7e0eb983ff215f8)
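As a rough illustration of the batch-embeddings support from [#1186](https://togithub.com/abetlen/llama-cpp-python/issues/1186), a minimal sketch; the model path is a placeholder, and any embedding-capable GGUF model should work:

```python
from llama_cpp import Llama

# Placeholder path: substitute any embedding-capable GGUF model.
llm = Llama(model_path="./models/model.gguf", embedding=True)

# create_embedding accepts a list of inputs, which this release
# embeds in a single batch rather than one at a time.
response = llm.create_embedding(["first sentence", "second sentence"])
vectors = [item["embedding"] for item in response["data"]]
```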
### [`v0.2.42`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0242)
[Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.41...v0.2.42)
- feat: Update llama.cpp to [ggerganov/llama.cpp@`ea9c8e1`](https://togithub.com/ggerganov/llama.cpp/commit/ea9c8e11436ad50719987fa23a289c74b7b40d40)
- fix: sample idx off-by-one error for logit_processors by [@lapp0](https://togithub.com/lapp0) in [#1179](https://togithub.com/abetlen/llama-cpp-python/issues/1179) (see the sketch after this list)
- fix: chat formatting bugs in `chatml-function-calling` by [@abetlen](https://togithub.com/abetlen) in [`4b0e332`](https://togithub.com/abetlen/llama-cpp-python/commit/4b0e3320bd8c2c209e29978d0b21e2e471cc9ee3) and [`68fb71b`](https://togithub.com/abetlen/llama-cpp-python/commit/68fb71b6a26a1e57331868f959b47ab4b87851e1)
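For context on the off-by-one fix, a hedged sketch of how a logits processor is wired up; the ban-one-token processor, the banned token id, and the model path are illustrative only:

```python
import numpy as np
from llama_cpp import Llama, LogitsProcessorList

def ban_token(input_ids: np.ndarray, scores: np.ndarray) -> np.ndarray:
    # Illustrative processor: make token id 2 (a placeholder) unsampleable.
    scores[2] = -np.inf
    return scores

llm = Llama(model_path="./models/model.gguf")  # placeholder path

out = llm(
    "Q: Name a color. A:",
    max_tokens=8,
    # The 0.2.42 fix ensures processors receive the logits for the
    # current sampling position rather than the previous one.
    logits_processor=LogitsProcessorList([ban_token]),
)
print(out["choices"][0]["text"])
```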
### [`v0.2.41`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0241)
[Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.40...v0.2.41)
- feat: Update llama.cpp to [ggerganov/llama.cpp@`895407f`](https://togithub.com/ggerganov/llama.cpp/commit/895407f31b358e3d9335e847d13f033491ec8a5b)
- fix: Don't change order of JSON schema object properties in generated grammar unless prop_order is passed by [@abetlen](https://togithub.com/abetlen) in [`d1822fe`](https://togithub.com/abetlen/llama-cpp-python/commit/d1822fed6b706f38bd1ff0de4dec5baaa3cf84fa) (see the sketch after this list)
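One place this fix is visible is JSON-schema-constrained output, where the schema is converted to a grammar internally. A minimal sketch, with a placeholder schema and model path:

```python
from llama_cpp import Llama

llm = Llama(model_path="./models/model.gguf", chat_format="chatml")  # placeholder path

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Describe a book as JSON."}],
    response_format={
        "type": "json_object",
        # With the 0.2.41 fix, the generated grammar preserves the
        # property order written here instead of reordering it.
        "schema": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "author": {"type": "string"},
            },
            "required": ["title", "author"],
        },
    },
)
print(result["choices"][0]["message"]["content"])
```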
### [`v0.2.40`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0240)
[Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.39...v0.2.40)
- feat: Update llama.cpp to [ggerganov/llama.cpp@`3bdc4cd`](https://togithub.com/ggerganov/llama.cpp/commit/3bdc4cd0f595a6096cca4a64aa75ffa8a3503465)
- feat: Generic chatml function calling using `chat_format="chatml-function-calling"` by [@abetlen](https://togithub.com/abetlen) in [#957](https://togithub.com/abetlen/llama-cpp-python/issues/957) (see the sketch after this list)
- fix: Circular dependency preventing early Llama object free by [@notwa](https://togithub.com/notwa) in [#1176](https://togithub.com/abetlen/llama-cpp-python/issues/1176)
- docs: Set the correct command for compiling with SYCL support by [@akarshanbiswas](https://togithub.com/akarshanbiswas) in [#1172](https://togithub.com/abetlen/llama-cpp-python/issues/1172)
- feat: Use GPU backend for CLIP if available by [@iamlemec](https://togithub.com/iamlemec) in [#1175](https://togithub.com/abetlen/llama-cpp-python/issues/1175)
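A hedged sketch of the generic function-calling chat format from [#957](https://togithub.com/abetlen/llama-cpp-python/issues/957); the model path, tool name, and parameter schema are placeholders, and any chatml-tuned GGUF model should work:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/model.gguf",  # placeholder: a chatml-tuned GGUF model
    chat_format="chatml-function-calling",
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Extract: Jason is 25 years old."}],
    tools=[{
        "type": "function",
        "function": {
            "name": "UserDetail",  # illustrative tool name
            "parameters": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "age": {"type": "integer"},
                },
                "required": ["name", "age"],
            },
        },
    }],
    # Force a call to the tool; omit tool_choice to let the model decide.
    tool_choice={"type": "function", "function": {"name": "UserDetail"}},
)
```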
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Enabled.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
- [ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Mend Renovate. View repository job log here.