Related issues
- Potential strict aliasing rule violation in bitwise_binary_op (on ARM/NEON) #66119
- torch.get_autocast_cpu_dtype() returns a new dtype #65786
- Conv2d grad bias gets wrong value for bfloat16 case #68048

The release tracker should contain all relevant pull requests related to this release, as well as links to related issues.
PyTorch 1.10 Release, including CUDA Graphs APIs, Frontend and compiler improvements
1.10.0 Release Notes
Highlights
Backwards Incompatible Changes
New Features
Improvements
Performance
Documentation
Highlights
We are excited to announce the release of PyTorch 1.10. This release is composed of over 3,400 commits since 1.9, made by 426 contributors. We want to sincerely thank our community for continuously improving PyTorch.
The PyTorch 1.10 updates focus on improving training and performance, as well as developer usability. Highlights include:
- CUDA Graphs APIs are integrated to reduce CPU overheads for CUDA workloads (see the capture/replay sketch below).
- Several frontend APIs, such as FX, torch.special, and nn.Module parametrization, have moved from beta to stable.
- Support for automatic fusion in the JIT compiler expands to CPUs in addition to GPUs.
- Android NNAPI support is now available in beta.
See the release blogpost for a full walkthrough of the new features.
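As a taste of the CUDA Graphs APIs, here is a minimal whole-network capture/replay sketch in the style of the torch.cuda graphs documentation; the tiny Linear model, loss, and optimizer are placeholders chosen for illustration, not part of the release notes:

```python
import torch

# Illustrative model/loss/optimizer (hypothetical, for the sketch only).
model = torch.nn.Linear(64, 64).cuda()
loss_fn = torch.nn.MSELoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

static_input = torch.randn(32, 64, device="cuda")
static_target = torch.randn(32, 64, device="cuda")

# Warm up on a side stream before capture, as the graphs docs recommend.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        opt.zero_grad(set_to_none=True)
        loss_fn(model(static_input), static_target).backward()
        opt.step()
torch.cuda.current_stream().wait_stream(s)

# Capture one full training iteration into a CUDA graph.
g = torch.cuda.CUDAGraph()
opt.zero_grad(set_to_none=True)
with torch.cuda.graph(g):
    static_loss = loss_fn(model(static_input), static_target)
    static_loss.backward()
    opt.step()

# Replay: copy fresh data into the captured static tensors, then
# re-launch all recorded kernels with a single CPU-side call.
static_input.copy_(torch.randn(32, 64, device="cuda"))
static_target.copy_(torch.randn(32, 64, device="cuda"))
g.replay()
```

Replay reuses the memory addresses recorded at capture time, which is why new inputs are copied into the static tensors rather than passed in as fresh tensors.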
Backwards Incompatible Changes
Python API
torch.any/torch.all behavior changed slightly to be more consistent for zero-dim uint8 tensors. (#64642)
These two functions now match the behavior of NumPy, returning an output dtype of bool for all supported dtypes except uint8 (in which case they return 1 or 0, but with uint8 dtype). Previously, for some 0-dim tensor inputs the returned uint8 value could mistakenly take on a value > 1; this has now been fixed.
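A minimal sketch of the documented behavior (the values in the comments reflect the fixed 1.10 semantics):

```python
import torch

# For supported dtypes, any/all return a bool tensor, matching NumPy.
b = torch.tensor([True, False])
print(torch.any(b))  # tensor(True)

# uint8 is the exception: the output keeps uint8 dtype, but its value
# is now guaranteed to be 0 or 1, even for 0-dim inputs.
u = torch.tensor(2, dtype=torch.uint8)  # 0-dim uint8 tensor
print(torch.any(u))  # tensor(1, dtype=torch.uint8); pre-fix this could be > 1
```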
... (truncated)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
Bumps torch from 1.9.0 to 1.10.1.
Release notes
Sourced from torch's releases.
... (truncated)
Commits
302ee7b [release/1.10] Fix adaptive_max_pool2d for channels-last on CUDA (#67697) (#6...
0c91a70 [release/1.10] TST Adds test for non-contiguous tensors (#64954) (#69617)
eadb038 [ONNX] Update onnxruntime to 1.9 for CI (#65029) (#67269) (#69641)
8416d63 Fix strict aliasing rule violation in bitwise_binary_op (#66194) (#69619)
c78cead [LiteInterpreter] Specify Loader to yaml.load (#67694) (#69642)
70af72c Fix python version in test tools CI job (#66947) (#69643)
36449ea (torch/elastic) add fqdn hostname to error printout (#66182) (#66662)
b544cbd Handle shared memory cases in MathBitFallback (#66667)
ddf3092 Disable .numpy() and .tolist() for tensor subclasses subclasses and f… (#66642)
cc360fa Delete extraneous whitespaces