alan-turing-institute / reginald

Reginald repository for REG Hack Week 23

Bump llama-cpp-python from 0.2.58 to 0.2.72 #185

Closed · dependabot[bot] closed 5 months ago

dependabot[bot] commented 5 months ago

Bumps llama-cpp-python from 0.2.58 to 0.2.72.

Changelog

Sourced from llama-cpp-python's changelog.

[0.2.72]

  • fix(security): Remote Code Execution by Server-Side Template Injection in Model Metadata by @retr0reg in b454f40a9a1787b2b5659cd2cb00819d983185df
  • fix(security): Update remaining jinja chat templates to use immutable sandbox by @CISC in #1441 (see the sandbox sketch after this list)
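
Both security entries harden chat-template rendering: model metadata can ship an arbitrary jinja2 chat template, so rendering it in a plain `Environment` opens a server-side template injection path. The fix renders templates in jinja2's immutable sandbox instead. A minimal sketch of that technique (the template strings here are stand-ins, not the actual exploit):

```python
from jinja2.sandbox import ImmutableSandboxedEnvironment
from jinja2.exceptions import SecurityError

# Render an untrusted, model-supplied chat template inside jinja2's
# immutable sandbox: unsafe attribute access and any mutation of the
# passed-in objects are rejected at render time.
env = ImmutableSandboxedEnvironment(trim_blocks=True, lstrip_blocks=True)

template = env.from_string("{{ bos_token }}{{ messages[0]['content'] }}")
print(template.render(bos_token="<s>", messages=[{"content": "hello"}]))

# A template that tries to mutate state raises SecurityError.
try:
    env.from_string("{{ messages.append('x') }}").render(messages=[])
except SecurityError as exc:
    print("blocked:", exc)
```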

[0.2.71]

  • feat: Update llama.cpp to ggerganov/llama.cpp@911b3900dded9a1cfe0f0e41b82c7a29baf3a217
  • fix: Make leading bos_token optional for image chat formats, fix nanollava system message by @abetlen in 77122638b4153e31d9f277b3d905c2900b536632
  • fix: free last image embed in llava chat handler by @abetlen in 3757328b703b2cd32dcbd5853271e3a8c8599fe7

[0.2.70]

  • feat: Update llama.cpp to ggerganov/llama.cpp@c0e6fbf8c380718102bd25fcb8d2e55f8f9480d1
  • feat: fill-in-middle support by @CISC in #1386 (see the sketch after this list)
  • fix: adding missing args in create_completion for functionary chat handler by @skalade in #1430
  • docs: update README.md by @eltociear in #1432
  • fix: chat_format log where auto-detected format prints None by @balvisio in #1434
  • feat(server): Add support for setting root_path by @abetlen in 0318702cdc860999ee70f277425edbbfe0e60419
  • feat(ci): Add docker checks and check deps more frequently by @Smartappli in #1426
  • fix: detokenization case where first token does not start with a leading space by @noamgat in #1375
  • feat: Implement streaming for Functionary v2 + Bug fixes by @jeffrey-fong in #1419
  • fix: Use memmove to copy str_value kv_override by @abetlen in 9f7a85571ae80d3b6ddbd3e1bae407b9f1e3448a
  • feat(server): Remove temperature bounds checks for server by @abetlen in 0a454bebe67d12a446981eb16028c168ca5faa81
  • fix(server): Propagate flash_attn to model load by @dthuerck in #1424
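
Of the changes above, fill-in-middle support (#1386) is the most visible API addition: for models whose vocabulary includes FIM tokens (CodeLlama-style infill models, for example), `create_completion` can take a `suffix` alongside the prompt and generate the span between them. A rough sketch, where the model path is a placeholder for any FIM-capable GGUF file:

```python
from llama_cpp import Llama

# Placeholder path: any GGUF model with FIM tokens in its vocabulary works.
llm = Llama(model_path="./codellama-7b.Q4_K_M.gguf")

# Fill-in-middle: the model generates the code between prompt and suffix.
out = llm.create_completion(
    prompt="def fibonacci(n):\n    ",
    suffix="\n    return result\n",
    max_tokens=64,
    temperature=0.2,
)
print(out["choices"][0]["text"])
```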

[0.2.69]

  • feat: Update llama.cpp to ggerganov/llama.cpp@6ecf3189e00a1e8e737a78b6d10e1d7006e050a2
  • feat: Add llama-3-vision-alpha chat format by @abetlen in 31b1d95a6c19f5b615a3286069f181a415f872e8
  • fix: Change the default value of verbose in image chat format handlers to True to match Llama by @abetlen in 4f01c452b6c738dc56eacac3758119b12c57ea94
  • fix: Suppress all logs when verbose=False, use hardcoded fileno's to work in colab notebooks by @abetlen in f116175a5a7c84569c88cad231855c1e6e59ff6e
  • fix: UTF-8 handling with grammars by @jsoma in #1415 (see the sketch after this list)
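
Two of these fixes are easy to exercise from user code: verbose=False now suppresses the native llama.cpp logs as well, and GBNF grammars containing non-ASCII literals (#1415) are matched correctly. A small sketch, with the model path as a placeholder:

```python
from llama_cpp import Llama, LlamaGrammar

# verbose=False now also silences the underlying llama.cpp logging.
llm = Llama(model_path="./model.gguf", verbose=False)

# A GBNF grammar whose literals include non-ASCII text; the UTF-8 fix
# makes byte-level matching of such strings behave correctly.
grammar = LlamaGrammar.from_string('root ::= "oui" | "non" | "peut-être"')

out = llm(
    "Réponds en un mot: l'eau est-elle mouillée?",
    grammar=grammar,
    max_tokens=8,
)
print(out["choices"][0]["text"])
```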

[0.2.68]

  • feat: Update llama.cpp to ggerganov/llama.cpp@77e15bec6217a39be59b9cc83d6b9afb6b0d8167
  • feat: Add option to enable flash_attn to Llama params and ModelSettings by @abetlen in 22d77eefd2edaf0148f53374d0cac74d0e25d06e (sketch below)
  • fix(ci): Fix build-and-release.yaml by @Smartappli in #1413
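
The flash_attn option is a new `Llama` constructor flag (mirrored in the server's ModelSettings); it requires a llama.cpp build and backend that support flash attention, and 0.2.70's #1424 above fixes its propagation on the server side. A minimal sketch with a placeholder model path:

```python
from llama_cpp import Llama

# flash_attn=True turns on llama.cpp's flash-attention path; it needs a
# build/backend that supports it (e.g. a recent CUDA build).
llm = Llama(
    model_path="./model.gguf",  # placeholder path
    n_gpu_layers=-1,            # offload all layers so flash attention applies
    flash_attn=True,
)
print(llm("Q: What is 2 + 2? A:", max_tokens=8)["choices"][0]["text"])
```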

[0.2.67]

  • fix: Ensure image renders before text in chat formats regardless of message content order by @abetlen in 3489ef09d3775f4a87fb7114f619e8ba9cb6b656
  • fix(ci): Fix bug in use of upload-artifact failing to merge multiple artifacts into a single release by @abetlen in d03f15bb73a1d520970357b702a9e7d4cc2a7a62

[0.2.66]

  • feat: Update llama.cpp to ggerganov/llama.cpp@8843a98c2ba97a25e93319a104f9ddfaf83ce4c4
  • feat: Generic Chat Formats, Tool Calling, and Huggingface Pull Support for Multimodal Models (Obsidian, LLaVA1.6, Moondream) by @abetlen in #1147 (see the sketch after this list)
  • ci(fix): Workflow actions updates and fix arm64 wheels not included in release by @Smartappli in #1392
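
#1147 lets both a multimodal model's CLIP projector and its text weights be pulled straight from the Hugging Face Hub via from_pretrained. A sketch along the lines of the project's README, using Moondream (the repo id and filename globs follow that documentation):

```python
from llama_cpp import Llama
from llama_cpp.llama_chat_format import MoondreamChatHandler

# Pull the CLIP/mmproj weights and the text model straight from the Hub.
chat_handler = MoondreamChatHandler.from_pretrained(
    repo_id="vikhyatk/moondream2",
    filename="*mmproj*",
)
llm = Llama.from_pretrained(
    repo_id="vikhyatk/moondream2",
    filename="*text-model*",
    chat_handler=chat_handler,
    n_ctx=2048,  # leave room for the image embedding tokens
)

response = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
            {"type": "text", "text": "Describe this image."},
        ],
    }]
)
print(response["choices"][0]["message"]["content"])
```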

... (truncated)

Commits
  • 4badac3 chore: Bump version
  • 561e880 fix(security): Render all jinja templates in immutable sandbox (#1441)
  • b454f40 Merge pull request from GHSA-56xg-wfcc-g829
  • 5ab40e6 feat: Support multiple chat templates - step 1 (#1396)
  • bf66a28 chore: Bump version
  • 3757328 fix: free last image embed in llava chat handler
  • 7712263 fix: Make leading bos_token optional for image chat formats, fix nanollava sy...
  • 2a39b99 feat: Update llama.cpp
  • 9ce5cb3 chore: Bump version
  • 4a7122d feat: fill-in-middle support (#1386)
  • Additional commits viewable in compare view


Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency
  • `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/alan-turing-institute/reginald/network/alerts).
dependabot[bot] commented 5 months ago

Looks like llama-cpp-python is up-to-date now, so this is no longer needed.