hangyav / textLSP

Language server for text spell and grammar check with various tools.
GNU General Public License v3.0

Bump the python-packages group with 5 updates #20

Closed · dependabot[bot] closed this 7 months ago

dependabot[bot] commented 7 months ago

Bumps the python-packages group with 5 updates:

| Package | From | To |
| --- | --- | --- |
| gitpython | 3.1.41 | 3.1.42 |
| torch | 2.2.0 | 2.2.1 |
| openai | 1.10.0 | 1.13.3 |
| transformers | 4.37.2 | 4.38.2 |
| pytest | 8.0.0 | 8.0.2 |

Updates gitpython from 3.1.41 to 3.1.42

Release notes

Sourced from gitpython's releases.

3.1.42

What's Changed

New Contributors

Full Changelog: https://github.com/gitpython-developers/GitPython/compare/3.1.41...3.1.42

Commits
  • 1f37b48 prepare the next release
  • 9caf3ae Merge pull request #1825 from EliahKagan/tree-test
  • 2613421 Merge pull request #1823 from marcm-ml/master
  • b780a8c Tweak @with_rw_directory and go back to using it
  • 0114a99 Use more lightweight approach to guarantee deletion
  • 90cf4d7 Fix new PermissionError in Windows with Python 3.7
  • dd42e38 Keep temp files out of project dir and improve cleanup
  • 2671167 Remove deprecated section in README.md
  • 7ba3fd2 Bump Vampire/setup-wsl from 2.0.2 to 3.0.0
  • e75ea98 Bump pre-commit/action from 3.0.0 to 3.0.1
  • Additional commits viewable in compare view


Updates torch from 2.2.0 to 2.2.1

Release notes

Sourced from torch's releases.

PyTorch 2.2.1 Release, bug fix release

This release is meant to fix the following issues (regressions / silent correctness):

Release tracker pytorch/pytorch#119295 contains all relevant pull requests related to this release as well as links to related issues.

Commits


Updates openai from 1.10.0 to 1.13.3

Release notes

Sourced from openai's releases.

v1.13.3

1.13.3 (2024-02-28)

Full Changelog: v1.13.2...v1.13.3

Features

Chores

Documentation

v1.13.2

1.13.2 (2024-02-20)

Full Changelog: v1.13.1...v1.13.2

Bug Fixes

  • ci: revert "move github release logic to github app" (#1170) (f1adc2e)

v1.13.1

1.13.1 (2024-02-20)

Full Changelog: v1.13.0...v1.13.1

Chores

v1.13.0

1.13.0 (2024-02-19)

Full Changelog: v1.12.0...v1.13.0

Features

... (truncated)

Changelog

Sourced from openai's changelog.

1.13.3 (2024-02-28)

Full Changelog: v1.13.2...v1.13.3

Features

Chores

Documentation

1.13.2 (2024-02-20)

Full Changelog: v1.13.1...v1.13.2

Bug Fixes

  • ci: revert "move github release logic to github app" (#1170) (f1adc2e)

1.13.1 (2024-02-20)

Full Changelog: v1.13.0...v1.13.1

Chores

1.13.0 (2024-02-19)

Full Changelog: v1.12.0...v1.13.0

Features

Bug Fixes

... (truncated)

Commits


Updates transformers from 4.37.2 to 4.38.2

Release notes

Sourced from transformers' releases.

v4.38.2

Fix backward compatibility issues with Llama and Gemma:

We mostly made sure that performance is not affected by the new paradigm change around RoPE. The RoPE computation was fixed (it should always be done in float32), and the causal_mask dtype was set to bool to use less RAM (a small illustration follows the list of fixes below).

YOLOS had a regression, and the Llama / T5 tokenizers emitted a spurious warning:

  • FIX [Gemma] Fix bad rebase with transformers main (#29170)
  • Improve _update_causal_mask performance (#29210)
  • [T5 and Llama Tokenizer] remove warning (#29346)
  • [Llama ROPE] Fix torch export but also slow downs in forward (#29198)
  • RoPE loses precision for Llama / Gemma + Gemma logits.float() (#29285)
  • Patch YOLOS and others (#29353)
  • Use torch.bool instead of torch.int64 for non-persistant causal mask buffer (#29241)
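
As a rough illustration of the float32 point above (a sketch, not code from transformers), the snippet below shows how rotary-embedding angles computed in float16 drift from a float32 reference at larger position indices; the dimensions and the base of 10000 are only typical example values.

```python
import torch

# Illustration only (not transformers code): rotary-embedding angles computed in
# float16 drift from the float32 reference at large position indices, which is
# why the RoPE computation is kept in float32.
inv_freq = 1.0 / (10000.0 ** (torch.arange(0, 64, 2, dtype=torch.float32) / 64))
positions = torch.arange(4096, dtype=torch.float32)
angles_fp32 = positions[:, None] * inv_freq[None, :]
angles_fp16 = (positions.half()[:, None] * inv_freq.half()[None, :]).float()
print((angles_fp32.cos() - angles_fp16.cos()).abs().max())  # noticeable error in fp16
```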

v4.38.1

Fix eager attention in Gemma!

TLDR:

-        attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)
+        attn_output = attn_output.view(bsz, q_len, -1)
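
To make the one-line diff above concrete, here is a small sketch with hypothetical shapes (not the actual Gemma configuration): when num_heads * head_dim differs from hidden_size, reshaping the merged attention output to hidden_size cannot work, whereas view(bsz, q_len, -1) keeps the true per-token width.

```python
import torch

# Hypothetical shapes chosen only to illustrate the fix: when
# num_heads * head_dim != hidden_size, reshape(bsz, q_len, hidden_size) fails,
# while view(bsz, q_len, -1) preserves the real per-token width.
bsz, q_len, num_heads, head_dim, hidden_size = 2, 5, 16, 256, 3072
attn_output = torch.randn(bsz, q_len, num_heads * head_dim)  # heads already merged
print(attn_output.view(bsz, q_len, -1).shape)   # torch.Size([2, 5, 4096])
# attn_output.reshape(bsz, q_len, hidden_size)  # would raise: 4096 != 3072 per token
```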

v4.38: Gemma, Depth Anything, Stable LM; Static Cache, HF Quantizer, AQLM

New model additions

💎 Gemma 💎

Gemma is a new open-source language model series from Google AI that comes in 2B and 7B variants. The release includes both pre-trained and instruction fine-tuned versions, and you can use them via the AutoModelForCausalLM, GemmaForCausalLM, or pipeline interfaces!

Read more about it in the Gemma release blogpost: https://hf.co/blog/gemma

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto", torch_dtype=torch.float16)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))

You can use the model with Flash Attention, SDPA, Static Cache, and the quantization API for further optimizations! (A hedged sketch follows the truncated list below.)

  • Flash Attention 2

... (truncated)
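
For the optimizations mentioned above, here is a minimal sketch (not from the release notes) of selecting an attention backend through the attn_implementation argument; the model name and dtype are carried over from the example above, and "flash_attention_2" additionally requires the flash-attn package to be installed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: pick an attention backend at load time. "sdpa" uses PyTorch's
# scaled_dot_product_attention; "flash_attention_2" needs the flash-attn package.
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b",
    device_map="auto",
    torch_dtype=torch.float16,
    attn_implementation="sdpa",  # or "flash_attention_2" / "eager"
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

input_ids = tokenizer("Write me a poem about Machine Learning.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**input_ids)[0]))
```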

Commits
  • 092f1fd Release 4.38.2
  • bf5163f fix merge conflicts between llama and gemma
  • 6c45f0f Use torch.bool instead of torch.int64 for non-persistant causal mask buff...
  • bfefb8e Patch YOLOS and others (#29353)
  • 20164cc RoPE loses precision for Llama / Gemma + Gemma logits.float() (#29285)
  • d5ec194 [Llama ROPE] Fix torch export but also slow downs in forward (#29198)
  • bf99e86 [T5 and Llama Tokenizer] remove warning (#29346)
  • 6d02350 Improve _update_causal_mask performance (#29210)
  • 4f8689e FIX [Gemma] Fix bad rebase with transformers main (#29170)
  • a085774 Release: v4.38.1
  • Additional commits viewable in compare view


Updates pytest from 8.0.0 to 8.0.2

Release notes

Sourced from pytest's releases.

8.0.2

pytest 8.0.2 (2024-02-24)

Bug Fixes

  • #11895: Fix collection on Windows where initial paths contain the short version of a path (for example c:\PROGRA~1\tests).
  • #11953: Fix an IndexError crash raising from getstatementrange_ast.
  • #12021: Reverted a fix to `--maxfail` handling in pytest 8.0.0 because it caused a regression in pytest-xdist whereby session fixture teardowns may get executed multiple times when the maximum number of failures is reached.

8.0.1

pytest 8.0.1 (2024-02-16)

Bug Fixes

  • #11875: Correctly handle errors from `getpass.getuser()` in Python 3.13.
  • #11879: Fix an edge case where `ExceptionInfo._stringify_exception` could crash `pytest.raises()`.
  • #11906: Fix regression with `pytest.warns()` using custom warning subclasses which have more than one parameter in their `__init__`.
  • #11907: Fix a regression in pytest 8.0.0 whereby calling `pytest.skip()` and similar control-flow exceptions within a `pytest.warns()` block would get suppressed instead of propagating (a small sketch follows this list).
  • #11929: Fix a regression in pytest 8.0.0 whereby autouse fixtures defined in a module get ignored by the doctests in the module.
  • #11937: Fix a regression in pytest 8.0.0 whereby items would be collected in reverse order in some circumstances.
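
As a small aside on #11907, the sketch below (a hypothetical test, not taken from pytest's suite) shows the pattern the fix restores: pytest.skip() raised inside a pytest.warns() block should propagate and mark the test as skipped rather than being swallowed by the block.

```python
import warnings

import pytest

# Sketch of the #11907 scenario: the skip raised inside pytest.warns() should
# escape the block and mark the test as skipped, not be suppressed.
def test_skip_propagates_out_of_warns_block():
    with pytest.warns(UserWarning):
        warnings.warn("emitted before skipping", UserWarning)
        pytest.skip("control-flow exception should escape the warns() block")
```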
Commits
  • 31afeeb Prepare release version 8.0.2
  • 1b00a2f Merge pull request #12025 from pytest-dev/backport-12022-to-8.0.x
  • ff2f66d [8.0.x] Revert "Fix teardown error reporting when --maxfail=1 (#11721)"
  • 8a8eed6 [8.0.x] Fix collection of short paths on Windows (#12024)
  • 74346f0 [8.0.x] Allow Sphinx 7.x (#12005)
  • b7657b4 [8.0.x] Disallow Sphinx 6 and 7 (#12001)
  • feb7c5e Merge pull request #11999 from pytest-dev/backport-11996-to-8.0.x
  • 0909655 [8.0.x] code: fix IndexError crash in getstatementrange_ast
  • 68524d4 Merge pull request #11993 from pytest-dev/release-8.0.1
  • d7d320a Prepare release version 8.0.1
  • Additional commits viewable in compare view


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
  • `@dependabot ignore <dependency name> major version` will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)
  • `@dependabot ignore <dependency name> minor version` will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)
  • `@dependabot ignore <dependency name>` will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)
  • `@dependabot unignore <dependency name>` will remove all of the ignore conditions of the specified dependency
  • `@dependabot unignore <dependency name> <ignore condition>` will remove the specified ignore condition of the specified dependency