#12652: Resolved a regression where [conda]{.title-ref} environments were no longer being automatically detected.
-- by RonnyPfannschmidt{.interpreted-text role="user"}
8.3.1
pytest 8.3.1 (2024-07-20)
The 8.3.0 release failed to include the change notes and docs for the release. This patch release remedies this. There are no other changes.
8.3.0
pytest 8.3.0 (2024-07-20)
New features
#12231: Added [--xfail-tb]{.title-ref} flag, which turns on traceback output for XFAIL results.
If the [--xfail-tb]{.title-ref} flag is not given, tracebacks for XFAIL results are NOT shown.
The style of traceback for XFAIL is set with [--tb]{.title-ref}, and can be [auto|long|short|line|native|no]{.title-ref}.
Note: Even if you have [--xfail-tb]{.title-ref} set, you won't see them if [--tb=no]{.title-ref}.
Some history:
With pytest 8.0, [-rx]{.title-ref} or [-ra]{.title-ref} would not only turn on summary reports for xfail, but also report the tracebacks for xfail results. This caused issues with some projects that utilize xfail, but don't want to see all of the xfail tracebacks.
This change detaches xfail tracebacks from [-rx]{.title-ref}, and now we turn on xfail tracebacks with [--xfail-tb]{.title-ref}. With this, the default [-rx]{.title-ref}/ [-ra]{.title-ref} behavior is identical to pre-8.0 with respect to xfail tracebacks. While this is a behavior change, it brings default behavior back to pre-8.0.0 behavior, which ultimately was considered the better course of action.
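As a hedged sketch (the file name, test name, and reason string are invented for illustration), a minimal xfail test that the new flag affects might look like this:

```python
# test_xfail_demo.py -- hypothetical example file
import pytest

@pytest.mark.xfail(reason="known bug")
def test_known_bug():
    # Fails as expected, so pytest reports it as XFAIL rather than FAILED.
    assert 1 + 1 == 3
```

Running `pytest -rx test_xfail_demo.py` lists the xfail in the short summary without a traceback; adding `--xfail-tb` prints the traceback, styled by `--tb` (and suppressed entirely by `--tb=no`).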
#12281: Added support for keyword matching in marker expressions.
Now tests can be selected by marker keyword arguments.
Supported values are int{.interpreted-text role="class"}, (unescaped) str{.interpreted-text role="class"}, bool{.interpreted-text role="class"} & None{.interpreted-text role="data"}.
See marker examples <marker_keyword_expression_example>{.interpreted-text role="ref"} for more information.
-- by lovetheguitar{.interpreted-text role="user"}
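As a hedged sketch (the marker name `device` and the serial values are invented, not from the release notes), keyword matching might be used like this:

```python
# test_marks_demo.py -- hypothetical example file
import pytest

@pytest.mark.device(serial="123")
def test_primary_device():
    assert True

@pytest.mark.device(serial="789")
def test_backup_device():
    assert True
```

With pytest 8.3+, `pytest -m "device(serial='123')" test_marks_demo.py` selects only `test_primary_device`; the supported keyword value types are int, (unescaped) str, bool, and None.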
#12567: Added --no-fold-skipped command line option.
If this option is set, skipped tests in the short summary are no longer grouped
by reason; instead, each skipped test is printed individually with its node ID, in
the same way as other statuses.
PyTorch 2.4: Python 3.12, AOTInductor freezing, libuv backend for TCPStore
PyTorch 2.4 Release Notes
Highlights
Tracked Regressions
Backward incompatible changes
Deprecations
New features
Improvements
Bug Fixes
Performance
Documentation
Developers
Security
Highlights
We are excited to announce the release of PyTorch® 2.4!
PyTorch 2.4 adds support for the latest version of Python (3.12) for torch.compile.
AOTInductor freezing gives developers running AOTInductor more performance-based optimizations by allowing the
serialization of MKLDNN weights. In addition, a new default TCPStore server backend utilizing libuv has been introduced,
which should significantly reduce initialization times for users running large-scale jobs.
Finally, a new Python Custom Operator API makes it easier than before to integrate custom kernels
into PyTorch, especially for torch.compile.
This release is composed of 3661 commits and 475 contributors since PyTorch 2.3. We want to sincerely thank our
dedicated community for your contributions. As always, we encourage you to try these out and report any issues as we
improve 2.4. More information about how to get started with the PyTorch 2-series can be found at our
Getting Started page.
... (truncated)
Commits
d990dad [CMAKE] Look for Development.Module instead of Development (#129729)
e4ee3be [Release only] use triton 3.0.x from pypi (#130336)
9afe4ec Update torchbench model expected accuracy values after pinning numpy (#129986)
[whisper] fix short-form output type #32178, by @sanchit-gandhi, which fixes the temperature fallback for short audio!
[BigBird Pegasus] set _supports_param_buffer_assignment to False #32222 by @kashif; mostly related to the new super-fast init, some models have to get this set to False. If you see weird behavior, look for that 😉
0.43.3: enabling Llama 405B with 8xH/A100 + 256GB RAM
Improvements:
FSDP: Enable loading prequantized weights with bf16/fp16/fp32 quant_storage
Background: This update, linked to [Transformer PR #32276](huggingface/transformers#32276), allows loading prequantized weights with alternative storage formats. Metadata is tracked similarly to Params4bit.__new__ post PR #970. It supports models exported with non-default quant_storage, such as this NF4 model with BF16 storage.
Special thanks to @winglian and @matthewdouglas for enabling FSDP+QLoRA finetuning of Llama 3.1 405B on a single 8xH100 or 8xA100 node with as little as 256GB system RAM.
0.43.2
This release is quite significant, as the QLoRA bug fix has big implications for higher seqlen and batch sizes.
For each sequence (i.e. a batch size increase of one) we expect memory savings of:
405B: 39GB for seqlen=1024, and 4888GB for seqlen=128,000
70B: 10.1GB for seqlen=1024, and 1258GB for seqlen=128,000
This was due to activations being unnecessary for frozen parameters, yet the memory for them was still erroneously allocated due to the now-fixed bug.
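The quoted numbers are consistent with the savings scaling linearly in sequence length. A small sketch (the function name is ours, not from the release notes) makes the extrapolation explicit:

```python
def scaled_savings(savings_gb, base_seqlen, target_seqlen):
    """Extrapolate per-sequence activation-memory savings linearly in seqlen."""
    return savings_gb * target_seqlen / base_seqlen

# 405B: 39 GB saved per sequence at seqlen=1024
print(scaled_savings(39, 1024, 128_000))    # 4875.0 GB, close to the ~4888 GB quoted
# 70B: 10.1 GB saved per sequence at seqlen=1024
print(scaled_savings(10.1, 1024, 128_000))  # 1262.5 GB, close to the ~1258 GB quoted
```

The small gaps versus the quoted figures suggest the release numbers were computed from slightly more precise base values before rounding.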
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will remove the specified ignore condition of the specified dependency
Bumps the python-packages group with 6 updates:

| Package | From | To |
| --- | --- | --- |
| openai | 1.35.10 | 1.37.1 |
| ollama | 0.2.1 | 0.3.1 |
| pytest | 8.2.2 | 8.3.2 |
| torch | 2.3.1 | 2.4.0 |
| transformers | 4.42.3 | 4.43.3 |
| bitsandbytes | 0.43.1 | 0.43.3 |
Updates openai from 1.35.10 to 1.37.1
Release notes
Sourced from openai's releases.
... (truncated)
Changelog
Sourced from openai's changelog.
... (truncated)
Commits
195c05a release: 1.37.1
edf0fb0 chore(tests): update prism version (#1572)
1ed0e35 release: 1.37.0
ff5add0 docs(readme): fix example snippet imports (#1569)
24ab406 feat(api): add uploads endpoints (#1568)
c814354 fix(cli/audio): handle non-json response format (#1557)
af8f606 release: 1.36.1
1a9124b fix(types): add gpt-4o-mini to more assistants methods
4c6a5ed release: 1.36.0
38fe1a6 feat(api): add new gpt-4o-mini models (#1561)
Updates ollama from 0.2.1 to 0.3.1
Release notes
Sourced from ollama's releases.
Commits
8b694bb Update Message annotations to support tool calls (#227)
cbf0887 fix: update name of tool call function parameter (#236)
cd4fbfc Merge pull request #224 from ollama/dependabot/pip/pytest-httpserver-1.0.12
d2da64f Merge pull request #230 from ollama/dependabot/pip/ruff-0.5.5
ae2bdbf Merge pull request #231 from ollama/dependabot/pip/pytest-8.3.2
60d292a update to llama3.1 (#237)
ab7e600 Bump pytest from 8.2.2 to 8.3.2
8e3a8fe Bump ruff from 0.5.2 to 0.5.5
6025c36 Bump pytest-httpserver from 1.0.10 to 1.0.12
f62eb97 Merge pull request #217 from ollama/dependabot/pip/pytest-asyncio-0.23.8
Updates pytest from 8.2.2 to 8.3.2
Release notes
Sourced from pytest's releases.
... (truncated)
Commits
bbcec9c Prepare release version 8.3.2
78fe8b6 Merge pull request #12657 from pytest-dev/patchback/backports/8.3.x/6c806b499...
238bad2 Merge pull request #12656 from RonnyPfannschmidt/fix-12652-detect-conda-env
ae6034a Merge pull request #12641 from pytest-dev/patchback/backports/8.3.x/c03989cee...
31337ab Merge pull request #12640 from pytest-dev/update-user
ca3070b Merge pull request #12637 from pytest-dev/release-8.3.1
de98446 Prepare release version 8.3.1
bd0a042 Merge pull request #12636 from pytest-dev/update-release-notes
664325b doc/changelog: update 8.3.0 notes
19d225d Merge pull request #12635 from pytest-dev/release-8.3.0
Updates torch from 2.3.1 to 2.4.0
Release notes
Sourced from torch's releases.
... (truncated)
Commits
d990dad [CMAKE] Look for Development.Module instead of Development (#129729)
e4ee3be [Release only] use triton 3.0.x from pypi (#130336)
9afe4ec Update torchbench model expected accuracy values after pinning numpy (#129986)
499621e [CherryPick][FSDP2+TP] Disable 2D state_dict (#129519) (#129923)
e5bda62 [CherryPick][DCP] Fix Optimizer Learning Rate not being loaded correctly (#12...
705e3ae Improve error message for weights_only load (#129783)
b26cde4 [Windows] remove mkl shared library dependency. (#129740)
12ad767 [distributed] NCCL result code update (#129704)
1164d3c Add threadfence to 2-stage reduction for correct writes visibility (#129701)
9533637 Inductor to fail gracefully on Voltas for bf16 tensors (#129699)
Updates transformers from 4.42.3 to 4.43.3
Release notes
Sourced from transformers's releases.
Commits
47c29cc Patch release v4.43.3
54bc29c don't log base model architecture in wandb if log model is false (#32143)
cc75146 [BigBird Pegasus] set _supports_param_buffer_assignment to False (#32222)
cd06184 [whisper] fix short-form output type (#32178)
38d94bf Patch release
b4a0442 Fix float8_e4m3fn in modeling_utils (#32193)
4672b4d Fix resize embedding with Deepspeed (#32192)
a2b6a00 let's not warn when someone is running a forward (#32176)
64a90d7 RoPE: relaxed rope validation (#32182)
782bfff Patch release
Updates bitsandbytes from 0.43.1 to 0.43.3
Release notes
Sourced from bitsandbytes's releases.
Changelog
Sourced from bitsandbytes's changelog.
Commits
2e03d34 release 0.43.3
b64cbe3 add job to upload wheels to continuous pre-release (#1282)
3a6911f Merge pull request #1295 from matthewdouglas/fsdp-load-prequantized
a96d2f0 Merge branch 'main' into fsdp-load-prequantized
5cc91b1 Bump pytest from 8.3.1 to 8.3.2 in the minor-patch group (#1296)
87f88af Enable loading prequantized weights with bf16/fp16/fp32 quant_storage type fo...
2621e1a docs: cleanup compilation instrs for multi-backend
7800734 Changelog: add explanation r. QLoRA mem savings
a7c08af bump version tag to next dev
ce53caf release 0.43.2