abetlen/llama-cpp-python (llama_cpp_python)
### [`v0.2.56`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0256)
[Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.55...v0.2.56)
- feat: Update llama.cpp to [ggerganov/llama.cpp@`c2101a2`](https://togithub.com/ggerganov/llama.cpp/commit/c2101a2e909ac7c08976d414e64e96c90ee5fa9e)
- feat(server): Add endpoints for tokenize, detokenize and count tokens by [@felipelo](https://togithub.com/felipelo) in [#1136](https://togithub.com/abetlen/llama-cpp-python/issues/1136)
- feat: Switch embed to llama_get_embeddings_seq by [@iamlemec](https://togithub.com/iamlemec) in [#1263](https://togithub.com/abetlen/llama-cpp-python/issues/1263)
- fix: Fixed json strings grammar by blacklisting character control set by [@ExtReMLapin](https://togithub.com/ExtReMLapin) in [`d02a9cf`](https://togithub.com/abetlen/llama-cpp-python/commit/d02a9cf16ff88ad011e2eb1ce29f4d9400f13cd1)
- fix: Check for existence of clip model path by [@kejcao](https://togithub.com/kejcao) in [#1264](https://togithub.com/abetlen/llama-cpp-python/issues/1264)
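The new server endpoints from [#1136](https://togithub.com/abetlen/llama-cpp-python/issues/1136) can be exercised with a small HTTP client. The sketch below is illustrative only: it assumes a server running locally on port 8000, endpoint paths under `/extras/`, and `input`/`tokens`/`count` field names, none of which are verified against the release — consult the server's OpenAPI docs for the authoritative schema.

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # assumed local llama_cpp.server address


def _post(path: str, payload: dict) -> dict:
    """POST a JSON payload to the server and return the parsed JSON response."""
    req = urllib.request.Request(
        BASE_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Endpoint paths and field names below are assumptions based on the PR title.
def tokenize(text: str) -> list:
    return _post("/extras/tokenize", {"input": text})["tokens"]


def detokenize(tokens: list) -> str:
    return _post("/extras/detokenize", {"tokens": tokens})["text"]


def count_tokens(text: str) -> int:
    return _post("/extras/tokenize/count", {"input": text})["count"]
```

With a server started via `python -m llama_cpp.server --model <path>`, a call such as `count_tokens("hello")` would return the prompt's token count under the loaded model's vocabulary.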
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Enabled.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
- [ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Mend Renovate. View repository job log here.
This PR contains the following updates:

| Package | Change |
|---|---|
| `llama_cpp_python` | `==0.2.55` -> `==0.2.56` |