#12446: Avoid calling `@property` (and other instance descriptors) during fixture discovery -- by asottile
#12659: Fixed failure to display assertion-failure differences when using `--import-mode=importlib` in pytest>=8.1.
#12667: Fixed a regression where a type change in `ExceptionInfo.errisinstance` caused `mypy` to fail.
#12744: Fixed typing compatibility with Python 3.9 and earlier -- replaced `typing.Self` with `typing_extensions.Self` -- by Avasam
#12745: Fixed backslashes being incorrectly converted in node ID paths on Windows, ensuring consistent path handling across environments.
#6682: Fixed a bug where verbosity levels were not respected when printing the "msg" part of a failed assertion (as in `assert condition, msg`).
#9422: Fixed a bug where disabling the terminal plugin via `-p no:terminal` caused crashes related to the missing verbose option -- by GTowers1
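The descriptor fix in #12446 comes down to inspecting attributes without executing them. A minimal sketch of that general technique using the stdlib's `inspect.getattr_static` (an illustration of the approach, not pytest's actual code):

```python
import inspect

class Suite:
    @property
    def expensive(self):
        # A side effect that must not run while tests are being collected.
        raise RuntimeError("property evaluated during discovery")

# Plain getattr on an instance would invoke the property and raise.
# inspect.getattr_static retrieves the raw descriptor without calling it.
attr = inspect.getattr_static(Suite(), "expensive")
assert isinstance(attr, property)  # we got the descriptor; nothing executed
```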
Improved documentation
#12663: Clarify that the `pytest_deselected` hook should be called from `pytest_collection_modifyitems` hook implementations when items are deselected.
#12678: Remove erroneous quotes from the `tmp_path_retention_policy` example in the docs.
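The #12663 clarification can be sketched as a `conftest.py` hook implementation (the `slow`-keyword criterion is a hypothetical example, not something pytest prescribes):

```python
# conftest.py (sketch): deselect some items and report them via pytest_deselected.
def pytest_collection_modifyitems(config, items):
    deselected = [item for item in items if "slow" in item.keywords]
    if deselected:
        # Fire the hook so reporting plugins learn about the deselected items...
        config.hook.pytest_deselected(items=deselected)
        # ...then remove them from the collection, mutating the list in place.
        items[:] = [item for item in items if item not in deselected]
```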
Miscellaneous internal changes
#12769: Fix typos discovered by codespell and add codespell to pre-commit hooks.
Commits
d0f136f build(deps): Bump pypa/gh-action-pypi-publish from 1.10.0 to 1.10.1 (#12790)
This release is meant to fix the following issues (regressions / silent correctness):
Breaking Changes:
The pytorch/pytorch docker image now installs the PyTorch package through pip and has switched its conda installation from Miniconda to Miniforge (#134274)
Windows:
Fix performance regression on Windows related to MKL static linking (#130619) (#130697)
Fix error during loading on Windows: [WinError 126] The specified module could not be found. (#131662) (#130697)
Fix error when using torch.utils.flop_counter.FlopCounterMode (#134467)
Tracked Regressions:
The experimental remote caching feature for Inductor's autotuner (enabled via TORCHINDUCTOR_AUTOTUNE_REMOTE_CACHE) is known to still be broken in this release and is actively being worked on in main. It fails with the following error: redis.exceptions.DataError: Invalid input of type: 'dict'. Please use the nightlies if you need this feature (reported and fixed by PR #134032)
Release tracker #132400 contains all relevant pull requests related to this release as well as links to related issues.
Commits
ee1b680 [Doc] Fix rendering of the unicode characters (#134695)
79c8867 Fix docstring for torch.signal.windows.nuttall (#134704)
Llama 3.2-Vision is a collection of pretrained and instruction-tuned multimodal large language models (LLMs) for image reasoning, in 11B and 90B sizes (text + images in / text out). The Llama 3.2-Vision instruction-tuned models are optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. They outperform many of the available open source and closed multimodal models on common industry benchmarks.
The Qwen2-VL is a major update from the previous Qwen-VL by the Qwen team.
An extract from the Qwen2-VL blog post:
Qwen2-VL is the latest version of the vision-language models based on Qwen2 in the Qwen model family. Compared with Qwen-VL, Qwen2-VL has the following capabilities:
SoTA understanding of images of various resolution & ratio: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc.
Understanding videos of 20min+: Qwen2-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc.
Agent that can operate your mobiles, robots, etc.: with the abilities of complex reasoning and decision making, Qwen2-VL can be integrated with devices like mobile phones, robots, etc., for automatic operation based on visual environment and text instructions.
Multilingual Support: to serve global users, besides English and Chinese, Qwen2-VL now supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc.
Qwen2-Audio is the new series of large audio-language models from the Qwen team. It accepts various audio signal inputs and can perform audio analysis or respond directly in text to spoken instructions.
They introduce two distinct audio interaction modes:
voice chat: users can freely engage in voice interactions with Qwen2-Audio without text input
audio analysis: users can provide audio and text instructions for analysis during the interaction
OLMoE is a series of Open Language Models using sparse Mixture-of-Experts designed to enable the science of language models. The team releases all code, checkpoints, logs, and details involved in training these models.
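For context, the sparse Mixture-of-Experts idea is that a learned router selects a small subset of expert networks per token, and only those experts run. A toy top-k router in plain Python (illustrative only; unrelated to OLMoE's actual architecture):

```python
import math

def top_k_route(logits, k=2):
    """Pick the k highest-scoring experts; softmax-normalize their weights."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    exp = [math.exp(logits[i]) for i in top]
    total = sum(exp)
    return [(i, e / total) for i, e in zip(top, exp)]

def moe_forward(x, experts, router_logits, k=2):
    # Only the selected experts are evaluated -- that is the "sparse" part.
    return sum(w * experts[i](x) for i, w in top_k_route(router_logits, k))

experts = [lambda x: x + 1, lambda x: 2 * x, lambda x: -x]
y = moe_forward(3.0, experts, [0.1, 2.0, -1.0], k=2)
```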
0.44.0: New AdEMAMix optimizer, Embeddings quantization, and more!
New optimizer: AdEMAMix
The AdEMAMix optimizer is a modification to AdamW which proposes tracking two EMAs to better leverage past gradients. This allows for faster convergence with less training data and improved resistance to forgetting.
We've implemented 8bit and paged variations: AdEMAMix, AdEMAMix8bit, PagedAdEMAMix, and PagedAdEMAMix8bit. These can be used with a similar API to existing optimizers.
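As described in the AdEMAMix paper, the optimizer keeps a fast EMA of gradients (as in Adam) plus a slow EMA that leverages older gradients, and mixes the two in the update. A scalar pure-Python sketch of the core update rule (hyperparameter names follow the paper; the alpha/beta3 schedulers and all tensor machinery are omitted):

```python
import math

def ademamix_minimize(theta, grad, steps=200, lr=0.1,
                      beta1=0.9, beta2=0.999, beta3=0.9999,
                      alpha=5.0, eps=1e-8):
    """Scalar AdEMAMix sketch: fast EMA m1, slow EMA m2, second moment nu."""
    m1 = m2 = nu = 0.0
    for t in range(1, steps + 1):
        g = grad(theta)
        m1 = beta1 * m1 + (1 - beta1) * g        # fast EMA (as in Adam)
        m2 = beta3 * m2 + (1 - beta3) * g        # slow EMA of past gradients
        nu = beta2 * nu + (1 - beta2) * g * g    # second moment
        m1_hat = m1 / (1 - beta1 ** t)           # bias-correct m1 and nu only
        nu_hat = nu / (1 - beta2 ** t)
        theta -= lr * (m1_hat + alpha * m2) / (math.sqrt(nu_hat) + eps)
    return theta

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x = ademamix_minimize(10.0, lambda x: 2 * (x - 3))
```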
The block size for all 8-bit optimizers has been reduced from 2048 to 256 in this release. This is a change from the original implementation proposed in the paper, and it improves accuracy.
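The accuracy effect of a smaller quantization block can be seen with a toy blockwise absmax int8 quantizer (a pure-Python illustration of the general technique, not bitsandbytes' kernels):

```python
def quant_roundtrip(values, block_size):
    """Blockwise absmax int8 quantization followed by dequantization."""
    out = []
    for start in range(0, len(values), block_size):
        block = values[start:start + block_size]
        scale = max(abs(v) for v in block) / 127 or 1.0  # one scale per block
        out.extend(round(v / scale) * scale for v in block)
    return out

def mean_error(values, block_size):
    deq = quant_roundtrip(values, block_size)
    return sum(abs(a - b) for a, b in zip(values, deq)) / len(values)

# One large outlier forces a coarse scale on everything sharing its block;
# smaller blocks confine the damage, lowering the average error.
data = [0.01 * i for i in range(512)]
data[0] = 100.0
```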
CUDA Graphs support
A fix to enable CUDA Graphs capture of kernel functions was made in #1330. This allows for performance improvements with inference frameworks like vLLM. Thanks @jeejeelee!
Quantization for Embeddings
The trend of LLMs to use larger vocabularies continues. The embeddings can take up a significant portion of a quantized model's footprint. We now have an implementation of Embedding4bit and Embedding8bit thanks to @galqiwi!
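A rough sketch of why quantized embeddings save memory: store the table as int8 codes with one scale per row, and dequantize only the rows you look up (an illustration of the idea, not the actual bitsandbytes Embedding8bit implementation):

```python
class Int8Embedding:
    """Toy 8-bit embedding: int8 codes plus one float scale per row."""
    def __init__(self, weights):
        self.scales = [max(abs(w) for w in row) / 127 or 1.0 for row in weights]
        self.codes = [[round(w / s) for w in row]
                      for row, s in zip(weights, self.scales)]

    def __call__(self, index):
        # Dequantize a single row on lookup.
        s = self.scales[index]
        return [c * s for c in self.codes[index]]

emb = Int8Embedding([[0.5, -1.0], [2.0, 0.25]])
row = emb(1)  # approximately [2.0, 0.25]
```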
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore major version` will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)
- `@dependabot ignore minor version` will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will remove the ignore condition of the specified dependency and ignore conditions
Bumps the python-packages group with 7 updates:

| Package | From | To |
| --- | --- | --- |
| openai | 1.43.0 | 1.50.2 |
| ollama | 0.3.2 | 0.3.3 |
| pytest | 8.3.2 | 8.3.3 |
| coverage-threshold | 0.4.4 | 0.5.0 |
| torch | 2.4.0 | 2.4.1 |
| transformers | 4.44.2 | 4.45.1 |
| bitsandbytes | 0.43.3 | 0.44.1 |
Updates openai from 1.43.0 to 1.50.2

Release notes
Sourced from openai's releases.
... (truncated)

Changelog
Sourced from openai's changelog.
... (truncated)

Commits
aeaed48 release: 1.50.2
adb6da3 fix(audio): correct types for transcriptions / translations (#1755)
7a5632f release: 1.50.1
73fc3b9 docs(helpers): fix chat completion anchor (#1753)
37f5615 release: 1.50.0
152a8bd feat(structured outputs): add support for accessing raw responses (#1748)
e1aeeb0 chore(pydantic v1): exclude specific properties when rich printing (#1751)
70edb21 release: 1.49.0
56e64d9 feat(api): add omni-moderation model (#1750)
9feadd8 chore(internal): update test snapshots (#1749)

Updates ollama from 0.3.2 to 0.3.3

Release notes
Sourced from ollama's releases.

Commits
89e8b74 Merge pull request #267 from ollama/dependabot/pip/ruff-0.6.3
81edab1 _stream in async client raises RuntimeError processing HTTP errors (#266)
5f51129 Bump ruff from 0.6.2 to 0.6.3

Updates pytest from 8.3.2 to 8.3.3

Release notes
Sourced from pytest's releases.

Commits
d0f136f build(deps): Bump pypa/gh-action-pypi-publish from 1.10.0 to 1.10.1 (#12790)
972f307 Prepare release version 8.3.3
0dabdcf Include co-authors in release announcement (#12795) (#12797)
a9910a4 Do not discover properties when iterating fixtures (#12781) (#12788)
0f10b6b Fix issue with slashes being turned into backslashes on Windows (#12760) (#12...
300d13d Merge pull request #12785 from pytest-dev/patchback/backports/8.3.x/57cccf7f4...
e5d32c7 Merge pull request #12784 from svenevs/fix/docs-example-parametrize-minor-typo
bc913d1 Streamline checks for verbose option (#12706) (#12778)
01cfcc9 Fix typos and introduce codespell pre-commit hook (#12769) (#12774)
4873394 doc: Remove past training (#12772) (#12773)

Updates coverage-threshold from 0.4.4 to 0.5.0

Commits

Updates torch from 2.4.0 to 2.4.1

Release notes
Sourced from torch's releases.

Commits
ee1b680 [Doc] Fix rendering of the unicode characters (#134695)
79c8867 Fix docstring for torch.signal.windows.nuttall (#134704)
38b96d3 Do not use `<filesystem>` on Linux (#134494) (#134604)
b84e8c6 Move module_tracker to logging for confused hierarchy (#134467) (#134501)
6a79d4a [ROCm] Prevent accidental enablement of efficient attention. (#134531)
e0ddbff [Release Only] Disable flaky failing tests in release. Pin optree. Pin sympy ...
314f033 Use ephemeral runners for windows nightly builds (#134463) (#134496)
9c1f78e [CD] Use ephemeral arm64 runners for nightly and docker builds (#134473) (#13...
3675fc5 Use ephemeral runners for linux nightly builds (#134367) (#134492)
920c023 docker: Use miniforge, install from pip (#134497)

Updates transformers from 4.44.2 to 4.45.1

Release notes
Sourced from transformers's releases.
... (truncated)

Commits
e71a01a manually fix PLBart tokenizer
0317895 v4.45.1
4ea1c43 clean_up_tokenization_spaces=False if unset (#31938)
289edd9 Generate: `can_generate()` recursive check (#33718)
c64be31 [`MllamaProcessor`] Update errors and API with multiple image (#33715)
2ef31de Release: v4.45.0
19d58d3 Add MLLama (#33703)
94f18cf Add OmDet-Turbo (#31843)
ade9e0f Corrected max number for bf16 in transformer/docs (#33658)
196d35c Add AdEMAMix optimizer (#33682)

Updates bitsandbytes from 0.43.3 to 0.44.1

Release notes
Sourced from bitsandbytes's releases.
... (truncated)

Commits
263af70 Release 0.44.1
51294d9 Fix optimizer support for Python <= 3.9 (#1379)
776140a docs: update supported ROCm vers
88b3409 Bump dev version
9841697 Release v0.44.0
bdf381c Add workflow to publish tagged releases to PyPI (#1369)
e7c6fc6 docs: add cpu benchmark (#1366)
aa57bd8 Change 8bit optimizer blocksize 2048->256; additional bf16 support (#1365)
d964546 Add AdEMAMix optimizer (#1360)
8fc7892 Add `move_to_device` kwarg to the optimizer's `load_state_dict` (#1344)