This PR contains the following updates:

| Package | Change |
| --- | --- |
| [llama_cpp_python](https://togithub.com/abetlen/llama-cpp-python) | `==0.2.68` -> `==0.2.69` |

### Release Notes

abetlen/llama-cpp-python (llama_cpp_python)
### [`v0.2.69`](https://togithub.com/abetlen/llama-cpp-python/blob/HEAD/CHANGELOG.md#0269)
[Compare Source](https://togithub.com/abetlen/llama-cpp-python/compare/v0.2.68...v0.2.69)
- feat: Update llama.cpp to [ggerganov/llama.cpp@`6ecf318`](https://togithub.com/ggerganov/llama.cpp/commit/6ecf3189e00a1e8e737a78b6d10e1d7006e050a2)
- feat: Add llama-3-vision-alpha chat format by [@abetlen](https://togithub.com/abetlen) in [`31b1d95`](https://togithub.com/abetlen/llama-cpp-python/commit/31b1d95a6c19f5b615a3286069f181a415f872e8)
- fix: Change default value of `verbose` in image chat format handlers to `True` to match `Llama` by [@abetlen](https://togithub.com/abetlen) in [`4f01c45`](https://togithub.com/abetlen/llama-cpp-python/commit/4f01c452b6c738dc56eacac3758119b12c57ea94)
- fix: Suppress all logs when `verbose=False`, use hardcoded filenos to work in Colab notebooks by [@abetlen](https://togithub.com/abetlen) in [`f116175`](https://togithub.com/abetlen/llama-cpp-python/commit/f116175a5a7c84569c88cad231855c1e6e59ff6e)
- fix: UTF-8 handling with grammars by [@jsoma](https://togithub.com/jsoma) in [#1415](https://togithub.com/abetlen/llama-cpp-python/issues/1415) (see the usage sketch below)
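As a usage sketch for the runtime-facing fixes above: the log-suppression change means `verbose=False` should silence llama.cpp output even in Colab notebooks, and the grammar fix concerns UTF-8 (non-ASCII) content in GBNF grammars. The snippet below is illustrative only and not part of this PR; the model path and prompt are placeholders, and the new llama-3-vision-alpha format (exposed via a chat handler in `llama_cpp.llama_chat_format`) is not shown here.

```python
# Illustrative sketch against llama-cpp-python 0.2.69; not part of this PR.
from llama_cpp import Llama, LlamaGrammar

# Per the changelog, verbose=False now suppresses all llama.cpp logs,
# including when running inside a Colab notebook.
llm = Llama(
    model_path="./models/example-model.Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,
    verbose=False,
)

# Grammar-constrained generation; the UTF-8 fix (#1415) matters when the
# grammar contains non-ASCII terminals such as "peut-être" below.
grammar = LlamaGrammar.from_string('root ::= "oui" | "non" | "peut-être"')

result = llm(
    "Answer with oui, non, or peut-être: is the sea blue?",
    max_tokens=8,
    grammar=grammar,
)
print(result["choices"][0]["text"])
```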
### Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Enabled.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
- [ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Mend Renovate. View repository job log here.