Changelog
### 1.2
```
class MyModule(torch.nn.Module):
    ...

# Construct an nn.Module instance.
module = MyModule(args)

# Pass it to `torch.jit.script` to compile it into a ScriptModule.
my_torchscript_module = torch.jit.script(module)
```
`torch.jit.script()` will attempt to recursively compile the given `nn.Module`, including any submodules or methods called from `forward()`. See the [migration guide](https://pytorch.org/docs/master/jit.html#migrating-to-pytorch-1-2-recursive-scripting-api) for more info on what's changed and how to migrate.
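Once compiled, the resulting `ScriptModule` can be serialized and loaded back without depending on the original Python class. A minimal sketch (the file name is illustrative):
```
# Serialize the compiled module to disk, then reload it.
my_torchscript_module.save("my_module.pt")
loaded = torch.jit.load("my_module.pt")
```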
**[JIT] Improved TorchScript Python language coverage**
In 1.2, TorchScript has significantly improved its support for Python language constructs and Python's standard library. Highlights include:
* Early returns, breaks and continues.
* Iterator-based constructs, like `for..in` loops, `zip()`, and `enumerate()`.
* `NamedTuples`.
* `math` and `string` library support.
* Support for most Python builtin functions.
See the detailed notes below for more information.
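As a small, hypothetical illustration of these constructs, a function like the following now scripts cleanly (the function and argument names are invented for this example):
```
from typing import List

import torch

@torch.jit.script
def first_match(xs: List[int], target: int) -> int:
    # enumerate() and early returns are both supported in TorchScript as of 1.2.
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

print(first_match([3, 1, 4, 1, 5], 4))  # prints 2
```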
**Expanded ONNX Export**
In PyTorch 1.2, working with Microsoft, we've added full support for exporting ONNX Opset versions 7 (v1.2), 8 (v1.3), 9 (v1.4), and 10 (v1.5). We've also enhanced the constant folding pass to support Opset 10, the latest available version of ONNX. Additionally, users are now able to register their own symbolic functions to export custom ops, and to specify the dynamic dimensions of inputs during export. Here is a summary of all of the major improvements:
* Support for multiple Opsets including the ability to export dropout, slice, flip and interpolate in Opset 10.
* Improvements to ScriptModule including support for multiple outputs, tensor factories and tuples as inputs and outputs.
* More than a dozen additional PyTorch operators supported including the ability to export a custom operator.
Updated docs can be found [here](https://pytorch.org/docs/stable/onnx.html), and a refreshed tutorial using ONNX Runtime can be found [here](https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html).
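For example, dynamic input dimensions can be declared through the `dynamic_axes` argument of `torch.onnx.export`. A minimal sketch using a toy model (the file, input, and axis names are arbitrary):
```
import torch

model = torch.nn.Linear(4, 2)
dummy = torch.randn(3, 4)

torch.onnx.export(
    model, dummy, "linear.onnx",
    opset_version=10,
    input_names=["input"], output_names=["output"],
    # Mark the batch dimension of both tensors as dynamic.
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)
```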
**TensorBoard Is No Longer Considered Experimental**
Read the [documentation](https://pytorch.org/docs/stable/tensorboard.html) or simply type `from torch.utils.tensorboard import SummaryWriter` to get started!
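A minimal sketch of logging a scalar over training steps (the tag, values, and log directory are arbitrary; the `tensorboard` package must be installed):
```
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="runs/demo")
for step in range(100):
    # Log a toy metric; inspect it with `tensorboard --logdir runs`.
    writer.add_scalar("loss", 1.0 / (step + 1), global_step=step)
writer.close()
```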
**nn.Transformer**
We include a standard [nn.Transformer](https://pytorch.org/docs/stable/nn.html?highlight=transformer#torch.nn.Transformer) module, based on the paper “[_Attention is All You Need_](https://arxiv.org/abs/1706.03762)”. The `nn.Transformer` module relies entirely on an [attention mechanism](https://pytorch.org/docs/stable/nn.html?highlight=nn%20multiheadattention#torch.nn.MultiheadAttention) to draw global dependencies between input and output. The individual components of the `nn.Transformer` module are designed so they can be adopted independently. For example, the [nn.TransformerEncoder](https://pytorch.org/docs/stable/nn.html?highlight=nn%20transformerencoder#torch.nn.TransformerEncoder) can be used by itself, without the larger `nn.Transformer`. New APIs include:
* `nn.Transformer`
* `nn.TransformerEncoder` and `nn.TransformerEncoderLayer`
* `nn.TransformerDecoder` and `nn.TransformerDecoderLayer`
See the [Transformer Layers](https://pytorch.org/docs/stable/nn.html#transformer-layers) documentation for more info.
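For instance, an encoder stack can be assembled on its own. A short sketch using the paper's default hyperparameters:
```
import torch
import torch.nn as nn

encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)

# Input shape is (sequence length, batch size, d_model).
src = torch.rand(10, 32, 512)
out = encoder(src)  # same shape as src
```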
**Breaking Changes**
Comparison operations (`lt (<)`, `le (<=)`, `gt (>)`, `ge (>=)`, `eq (==)`, `ne (!=)`) now return tensors of dtype `torch.bool` instead of `torch.uint8` ([21113](https://github.com/pytorch/pytorch/pull/21113)).
*Version 1.1:*
```
>>> torch.tensor([1, 2, 3]) < torch.tensor([3, 1, 2])
tensor([1, 0, 0], dtype=torch.uint8)
```
*Version 1.2:*
```
>>> torch.tensor([1, 2, 3]) < torch.tensor([3, 1, 2])
tensor([True, False, False])
```
For most programs, we don't expect that any changes will be needed as a result. There are a couple of possible exceptions, listed below.
**Mask Inversion**
In prior versions of PyTorch, the idiomatic way to invert a mask was to call `1 - mask`. This behavior is no longer supported; use the `~` or `bitwise_not()` operator instead.
*Version 1.1:*
```
>>> 1 - (torch.tensor([1, 2, 3]) < torch.tensor([3, 1, 2]))
tensor([0, 1, 1], dtype=torch.uint8)
```
*Version 1.2:*
```
>>> 1 - (torch.tensor([1, 2, 3]) < torch.tensor([3, 1, 2]))
RuntimeError: Subtraction, the `-` operator, with a bool tensor is not supported.
If you are trying to invert a mask, use the `~` or `bitwise_not()` operator instead.
>>> ~(torch.tensor([1, 2, 3]) < torch.tensor([3, 1, 2]))
tensor([False, True, True])
```
**sum(Tensor) (Python built-in) does not upcast `dtype` like `torch.sum`**
Python's built-in `sum` returns results in the same `dtype` as the tensor itself, so it will not return the expected result if the value of the sum cannot be represented in the `dtype` of the tensor.
*Version 1.1:*
```
# Value can be represented in the result dtype:
>>> sum(torch.tensor([1, 2, 3, 4, 5]) > 2)
tensor(3, dtype=torch.uint8)

# Value can NOT be represented in the result dtype:
>>> sum(torch.ones((300,)) > 0)
tensor(44, dtype=torch.uint8)

# torch.sum properly upcasts the result dtype:
>>> torch.sum(torch.ones((300,)) > 0)
tensor(300)
```
*Version 1.2:*
```
# Value cannot be represented in the result dtype (now torch.bool):
>>> sum(torch.tensor([1, 2, 3, 4, 5]) > 2)
tensor(True)

# Value cannot be represented in the result dtype:
>>> sum(torch.ones((300,)) > 0)
tensor(True)

# torch.sum properly upcasts the result dtype:
>>> torch.sum(torch.ones((300,)) > 0)
tensor(300)
```
**TLDR**: use `torch.sum` instead of the built-in `sum`. Note that the built-in `sum()` behavior will more closely resemble `torch.sum` in the next release.
Note also that masking via `torch.uint8` tensors is now deprecated; see the **Deprecations** section for more information.
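If you have existing `torch.uint8` masks, converting them explicitly to `torch.bool` avoids the deprecation warning. A small sketch:
```
import torch

data = torch.arange(5)
old_mask = torch.tensor([1, 0, 1, 0, 1], dtype=torch.uint8)

# Convert the uint8 mask to bool before indexing.
mask = old_mask.to(torch.bool)
print(data[mask])  # tensor([0, 2, 4])
```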
`__invert__` / `~` now calls `torch.bitwise_not` instead of computing `1 - tensor`, and is supported for all integral and Boolean dtypes instead of only `torch.uint8` ([22326](https://github.com/pytorch/pytorch/pull/22326)).
*Version 1.1:*
```
>>> ~torch.arange(8, dtype=torch.uint8)
tensor([ 1, 0, 255, 254, 253, 252, 251, 250], dtype=torch.uint8)
```
*Version 1.2:*
```
>>> ~torch.arange(8, dtype=torch.uint8)
tensor([255, 254, 253, 252, 251, 250, 249, 248], dtype=torch.uint8)
```
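Since `~` is now true bitwise negation, it also applies to signed integral dtypes, where it computes the two's-complement NOT. A quick illustration:
```
import torch

x = torch.tensor([0, 1, -2], dtype=torch.int32)
print(~x)                    # tensor([-1, -2,  1], dtype=torch.int32)
print(torch.bitwise_not(x))  # equivalent spelling
```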
`torch.tensor(bool)` and `torch.as_tensor(bool)` now infer `torch.bool` dtype instead of `torch.uint8`. ([19097](https://github.com/pytorch/pytorch/pull/19097))
*Version 1.1:*
```
>>> torch.tensor([True, False])
tensor([1, 0], dtype=torch.uint8)
```
*Version 1.2:*
```
>>> torch.tensor([True, False])
tensor([ True, False])
```
`nn.BatchNorm{1,2,3}D`: gamma (`weight`) is now initialized to all 1s rather than randomly initialized from *U(0, 1)*. ([13774](https://github.com/pytorch/pytorch/pull/13774))
*Version 1.1:*
```
>>> torch.nn.BatchNorm2d(5).weight
Parameter containing:
...
```
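If a training recipe relied on the old random gamma, it can be restored explicitly after construction. A minimal sketch:
```
import torch.nn as nn

bn = nn.BatchNorm2d(5)
# Recover the pre-1.2 initialization, gamma ~ U(0, 1).
nn.init.uniform_(bn.weight)
```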
### 1.2.0
We have just released PyTorch v1.2.0.
It has over 1,900 commits and contains a significant amount of effort in areas spanning JIT, ONNX, Distributed, as well as performance and eager-frontend improvements.

**Highlights**

**[JIT] New TorchScript API**
Links
- PyPI: https://pypi.org/project/torch
- Changelog: https://pyup.io/changelogs/torch/
- Repo: https://github.com/pytorch/pytorch/tags
- Homepage: https://pytorch.org/
This PR updates torch from 1.1.0.post2 to 1.2.0.