# Changelog
### 2.9.1
```
Add an upper bound for `protobuf` in `setup.py` since `protobuf` after version 3.20 is currently incompatible with TensorFlow. See https://github.com/tensorflow/tensorflow/issues/53234, https://github.com/protocolbuffers/protobuf/issues/9954 and https://github.com/tensorflow/tensorflow/issues/56077.
```
### 2.9.0
```
Breaking Changes
* Due to security issues in TF 2.8, all boosted trees code has now been removed (after being deprecated in TF 2.8). Users should switch to [TensorFlow Decision Forests](https://github.com/tensorflow/decision-forests).
* Build, Compilation and Packaging
* TensorFlow is now compiled with `_GLIBCXX_USE_CXX11_ABI=1`. Downstream projects that encounter `std::__cxx11` or `[abi:cxx11]` linker errors will need to adopt this compiler option. See [the GNU C++ Library docs on Dual ABI](https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html).
    * TensorFlow Python wheels now specifically conform to [manylinux2014](https://peps.python.org/pep-0599/), an upgrade from manylinux2010. The minimum Pip version supporting manylinux2014 is Pip 19.3 (see [pypa/manylinux](https://github.com/pypa/manylinux)). This change may affect you if you have been using TensorFlow on a very old platform equivalent to CentOS 6, as manylinux2014 targets CentOS 7 as a compatibility base. Note that TensorFlow does not officially support either platform.
* Discussion for these changes can be found on SIG Build's [TensorFlow Community Forum thread](https://discuss.tensorflow.org/t/tensorflow-linux-wheels-are-being-upgraded-to-manylinux2014/8339)
* The `tf.keras.mixed_precision.experimental` API has been removed. The non-experimental symbols under `tf.keras.mixed_precision` have been available since TensorFlow 2.4 and should be used instead.
    * The non-experimental API has some minor differences from the experimental API. In most cases, you only need to make three minor changes (see the short migration sketch at the end of this list):
* Remove the word "experimental" from `tf.keras.mixed_precision` symbols. E.g., replace `tf.keras.mixed_precision.experimental.global_policy` with `tf.keras.mixed_precision.global_policy`.
* Replace `tf.keras.mixed_precision.experimental.set_policy` with `tf.keras.mixed_precision.set_global_policy`. The experimental symbol `set_policy` was renamed to `set_global_policy` in the non-experimental API.
        * Replace `LossScaleOptimizer(opt, "dynamic")` with `LossScaleOptimizer(opt)`. If you pass anything other than `"dynamic"` to the second argument, see the first item of the next section.
* In the following rare cases, you need to make more changes when switching to the non-experimental API:
* If you passed anything other than `"dynamic"` to the `loss_scale` argument (the second argument) of `LossScaleOptimizer`:
* The LossScaleOptimizer constructor takes in different arguments. See the [TF 2.7 documentation of tf.keras.mixed_precision.experimental.LossScaleOptimizer](https://www.tensorflow.org/versions/r2.7/api_docs/python/tf/keras/mixed_precision/experimental/LossScaleOptimizer) for details on the differences, which has examples on how to convert to the non-experimental LossScaleOptimizer.
* If you passed a value to the `loss_scale` argument (the second argument) of `Policy`:
* The experimental version of `Policy` optionally took in a `tf.compat.v1.mixed_precision.LossScale` in the constructor, which defaulted to a dynamic loss scale for the `"mixed_float16"` policy and no loss scale for other policies. In `Model.compile`, if the model's policy had a loss scale, the optimizer would be wrapped with a `LossScaleOptimizer`. With the non-experimental `Policy`, there is no loss scale associated with the `Policy`, and `Model.compile` wraps the optimizer with a `LossScaleOptimizer` if and only if the policy is a `"mixed_float16"` policy. If you previously passed a `LossScale` to the experimental `Policy`, consider just removing it, as the default loss scaling behavior is usually what you want. If you really want to customize the loss scaling behavior, you can wrap your optimizer with a `LossScaleOptimizer` before passing it to `Model.compile`.
* If you use the very rarely-used function `tf.keras.mixed_precision.experimental.get_layer_policy`:
* Replace `tf.keras.mixed_precision.experimental.get_layer_policy(layer)` with `layer.dtype_policy`.
* `tf.mixed_precision.experimental.LossScale` and its subclasses have been removed from the TF2 namespace. These symbols were very rarely used and were only useful in TF2 for use in the now-removed `tf.keras.mixed_precision.experimental` API. The symbols are still available under `tf.compat.v1.mixed_precision`.
* The `experimental_relax_shapes` heuristic for `tf.function` has been deprecated and replaced with `reduce_retracing`, which encompasses broader heuristics to reduce the number of retraces (see below).
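A minimal migration sketch for the items above (assumes TF 2.9; the optimizer and the small function are only illustrative placeholders):

    import tensorflow as tf

    # Mixed precision: drop "experimental" from the symbol paths and drop the
    # "dynamic" second argument of LossScaleOptimizer.
    tf.keras.mixed_precision.set_global_policy("mixed_float16")   # was experimental.set_policy
    policy = tf.keras.mixed_precision.global_policy()             # was experimental.global_policy
    opt = tf.keras.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.SGD(0.01))

    # tf.function: reduce_retracing replaces the deprecated experimental_relax_shapes.
    @tf.function(reduce_retracing=True)
    def double(x):
        return x * 2.0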
Major Features and Improvements
* `tf.keras`:
* Added `tf.keras.applications.resnet_rs` models. This includes the
`ResNetRS50`, `ResNetRS101`, `ResNetRS152`, `ResNetRS200`,
`ResNetRS270`, `ResNetRS350` and `ResNetRS420` model architectures. The
ResNetRS models are based on the architecture described in
[Revisiting ResNets: Improved Training and Scaling Strategies](https://arxiv.org/pdf/2103.07579.pdf)
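      For example (a sketch; `weights=None` just avoids a pretrained-weight
      download):
          import tensorflow as tf
          model = tf.keras.applications.resnet_rs.ResNetRS50(weights=None)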
* Added `tf.keras.optimizers.experimental.Optimizer`. The reworked
optimizer gives more control over different phases of optimizer calls,
and is easier to customize. We provide Adam, SGD, Adadelta, AdaGrad and
RMSprop optimizers based on
`tf.keras.optimizers.experimental.Optimizer`. Generally the new
optimizers work in the same way as the old ones, but support new
constructor arguments. In the future, the symbols
`tf.keras.optimizers.Optimizer`/`Adam`/etc will point to the new
optimizers, and the previous generation of optimizers will be moved to
`tf.keras.optimizers.legacy.Optimizer`/`Adam`/etc.
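      A sketch of opting into the reworked optimizer (the learning rate is
      illustrative):
          import tensorflow as tf
          opt = tf.keras.optimizers.experimental.Adam(learning_rate=1e-3)
          legacy_opt = tf.keras.optimizers.Adam(learning_rate=1e-3)  # previous generation, unchanged in 2.9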
* Added L2 unit normalization layer `tf.keras.layers.UnitNormalization`.
* Added `tf.keras.regularizers.OrthogonalRegularizer`, a new regularizer
      that encourages orthogonality between the rows (or columns) of a weight
matrix.
* Added `tf.keras.layers.RandomBrightness` layer for image preprocessing.
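      A short sketch of the three layer/regularizer additions above (argument
      values are illustrative):
          import tensorflow as tf
          x = tf.keras.layers.UnitNormalization()(tf.constant([[3.0, 4.0]]))  # rows scaled to unit L2 norm
          reg = tf.keras.regularizers.OrthogonalRegularizer(factor=0.01)
          brightness = tf.keras.layers.RandomBrightness(factor=0.2)           # random brightness jitter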
* Added APIs for switching between interactive logging and absl logging.
By default, Keras always writes the logs to stdout. However, this is not
optimal in a non-interactive environment, where you don't have access to
stdout, but can only view the logs. You can use
`tf.keras.utils.disable_interactive_logging()` to write the logs to ABSL
logging. You can also use `tf.keras.utils.enable_interactive_logging()`
to change it back to stdout, or
`tf.keras.utils.is_interactive_logging_enabled()` to check if
interactive logging is enabled.
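      For example (a minimal sketch of the switch described above):
          import tensorflow as tf
          tf.keras.utils.disable_interactive_logging()            # route Keras logs to absl logging
          print(tf.keras.utils.is_interactive_logging_enabled())  # False
          tf.keras.utils.enable_interactive_logging()             # back to stdout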
* Changed default value for the `verbose` argument of `Model.evaluate()`
and `Model.predict()` to `"auto"`, which defaults to `verbose=1` for
most cases and defaults to `verbose=2` when used with
`ParameterServerStrategy` or with interactive logging disabled.
* Argument `jit_compile` in `Model.compile()` now applies to
`Model.evaluate()` and `Model.predict()`. Setting `jit_compile=True` in
`compile()` compiles the model's training, evaluation, and inference
steps to [XLA](https://www.tensorflow.org/xla). Note that
`jit_compile=True` may not necessarily work for all models.
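      A hedged sketch (the tiny model and random data are only for
      illustration):
          import tensorflow as tf
          model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
          model.compile(optimizer="adam", loss="mse", jit_compile=True)  # XLA-compiled train/eval/predict steps
          x = tf.random.normal((8, 4))
          y = tf.random.normal((8, 1))
          model.evaluate(x, y, verbose=0)  # evaluation now also runs the compiled step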
* Added DTensor-related Keras APIs under `tf.keras.dtensor` namespace. The
      APIs are still classified as experimental. You are welcome to try them
      out. Please check the tutorial and guide on https://www.tensorflow.org/
for more details about DTensor.
* `tf.lite`:
* Added TFLite builtin op support for the following TF ops:
* `tf.math.argmin`/`tf.math.argmax` for input data type `tf.bool` on
CPU.
* `tf.nn.gelu` op for output data type `tf.float32` and quantization
on CPU.
* Add nominal support for unsigned 16-bit integer tensor types. Note that
very few TFLite kernels support this type natively, so its use in mobile
ML authoring is generally discouraged.
* Add support for unsigned 16-bit integer tensor types in cast op.
* Experimental support for lowering `list_ops.tensor_list_set_item` with
`DynamicUpdateSlice`.
* Enabled a new MLIR-based dynamic range quantization backend by default
* The new backend is used for post-training int8 dynamic range
quantization and post-training float16 quantization.
    * Set `experimental_new_dynamic_range_quantizer` in
      `tf.lite.TFLiteConverter` to `False` to disable this change.
* Native TF Lite variables are now enabled during conversion by default on
all v2 TfLiteConverter entry points.
      `experimental_enable_resource_variables` on `tf.lite.TFLiteConverter` is
      now `True` by default and will be removed in the future.
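      A sketch of the converter flags above ("my_saved_model" is a placeholder
      path; attribute names follow the notes):
          import tensorflow as tf
          converter = tf.lite.TFLiteConverter.from_saved_model("my_saved_model")
          converter.optimizations = [tf.lite.Optimize.DEFAULT]      # post-training dynamic range quantization
          converter.experimental_enable_resource_variables = True   # already the default in 2.9
          # To opt out of the new MLIR-based dynamic range backend, set
          # `experimental_new_dynamic_range_quantizer` to False (name per the note above).
          tflite_model = converter.convert()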
* `tf.function`:
* Custom classes used as arguments for `tf.function` can now specify rules
regarding when retracing needs to occur by implementing the Tracing
Protocol available through
`tf.types.experimental.SupportsTracingProtocol`.
* `TypeSpec` classes (as associated with `ExtensionTypes`) also implement
the Tracing Protocol which can be overridden if necessary.
* The newly introduced `reduce_retracing` option also uses the Tracing
Protocol to proactively generate generalized traces similar to
`experimental_relax_shapes` (which has now been deprecated).
* Unified eager and `tf.function` execution:
* Eager mode can now execute each op as a `tf.function`, allowing for more
consistent feature support in future releases.
* It is available for immediate use.
* See the `TF_RUN_EAGER_OP_AS_FUNCTION` environment variable in
[eager context](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/eager/context.py).
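      A hedged sketch of opting in early (assumes the variable is read when
      TensorFlow is first imported, so it is set beforehand):
          import os
          os.environ["TF_RUN_EAGER_OP_AS_FUNCTION"] = "1"
          import tensorflow as tf  # eager ops now execute through the tf.function machinery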
* Eager performance should be similar with this feature enabled.
* A roughly 5us per-op overhead may be observed when running many
small functions.
* Note a
[known issue](https://github.com/tensorflow/tensorflow/issues/55414)
with GPU performance.
* The behavior of `tf.function` itself is unaffected.
* Note: This feature will be enabled by default in an upcoming version of
TensorFlow.
* `tf.experimental.dtensor`: Added DTensor, an extension to TensorFlow for
large-scale modeling with minimal changes to user code. You are welcome to
  try it out, though be aware that the DTensor API is experimental and subject
  to backward-incompatible changes. DTensor and Keras integration is published
  under `tf.keras.dtensor` in this release (refer to the `tf.keras` entry).
  The tutorial and guide for DTensor will be published on
https://www.tensorflow.org/. Please stay tuned.
* [oneDNN CPU performance optimizations](https://github.com/tensorflow/community/blob/master/rfcs/20210930-enable-onednn-ops.md)
are available in Linux x86, Windows x86, and Linux aarch64 packages.
* **Linux x86 packages:**
* oneDNN optimizations are *enabled by default* on CPUs with
neural-network-focused hardware features such as AVX512_VNNI,
AVX512_BF16, AMX, etc.
([Intel Cascade Lake](https://www.intel.com/content/www/us/en/products/platforms/details/cascade-lake.html)
and newer CPUs.)
* [Example performance speedups.](https://medium.com/intel-analytics-software/leverage-intel-deep-learning-optimizations-in-tensorflow-129faa80ee07)
* For older CPUs, oneDNN optimizations are disabled by default.
* **Windows x86 package:** oneDNN optimizations are disabled by default.
    * **Linux aarch64 (`--config=mkl_aarch64`) package:**
* Experimental oneDNN optimizations are disabled by default.
        * If you experience issues with the oneDNN optimizations enabled, we
          recommend turning them off.
* To explicitly enable or disable oneDNN optimizations, set the
environment variable `TF_ENABLE_ONEDNN_OPTS` to `1` (enable) or `0`
(disable) before running TensorFlow. (The variable is checked during
`import tensorflow`.) To fall back to default settings, unset the
environment variable.
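      For example, from Python (a sketch; the variable is read during the
      import below):
          import os
          os.environ["TF_ENABLE_ONEDNN_OPTS"] = "0"  # "1" to enable explicitly; unset to restore defaults
          import tensorflow as tf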
    * These optimizations can yield slightly different numerical results than
      when they are off, due to floating-point round-off errors from different
      computation approaches and orderings.
* To verify that the optimizations are on, look for a message with
*"oneDNN custom operations are on"* in the log. If the exact phrase is
not there, it means they are off.
Bug Fixes and Other Changes
* `tf.data`:
* Fixed bug in `tf.data.experimental.parse_example_dataset` when `tf.io.RaggedFeatures` would specify `value_key` but no `partitions`. Before the fix, setting `value_key` but no `partitions` would result in the feature key being replaced by the value key, e.g. `{'value_key': <RaggedTensor>}` instead of `{'key': <RaggedTensor>}`. Now the correct feature key will be used. This aligns the behavior of `tf.data.experimental.parse_example_dataset` to match the behavior of `tf.io.parse_example`.
* Added a new field, `filter_parallelization`, to `tf.data.experimental.OptimizationOptions`. If it is set to `True`, tf.data will run `Filter` transformation with multiple threads. Its default value is `False` if not specified.
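    A sketch of the new option, applied to an illustrative pipeline:
        import tensorflow as tf
        ds = tf.data.Dataset.range(100).filter(lambda x: x % 2 == 0)
        options = tf.data.Options()
        options.experimental_optimization.filter_parallelization = True
        ds = ds.with_options(options)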
* `tf.keras`:
* Fixed bug in optimizers that prevented them from properly checkpointing slot variables when they are `ShardedVariable`s (used for training with `tf.distribute.experimental.ParameterServerStrategy`).
* `tf.random`:
* Added `tf.random.experimental.index_shuffle`, for shuffling a sequence without materializing the sequence in memory.
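    A hedged sketch (arguments are positional: an index, a two-element seed, and the maximum index):
        import tensorflow as tf
        pos = tf.random.experimental.index_shuffle(3, [1, 2], 9)  # place of index 3 in a permutation of [0, 9]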
* `tf.RaggedTensor`:
* Introduced `tf.experimental.RowPartition`, which encodes how one dimension in a RaggedTensor relates to another, into the public API.
* Introduced `tf.experimental.DynamicRaggedShape`, which represents the shape of a RaggedTensor.
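    A sketch using the newly public types (assumes the `from_row_lengths` and `from_tensor` constructors; values are illustrative):
        import tensorflow as tf
        rt = tf.ragged.constant([[1, 2, 3], [4]])
        rp = tf.experimental.RowPartition.from_row_lengths([3, 1])  # outer-dimension partitioning
        shape = tf.experimental.DynamicRaggedShape.from_tensor(rt)  # ragged-aware shape of rt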
Security
* Fixes a code injection in `saved_model_cli` ([CVE-2022-29216](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29216))
* Fixes a missing validation which causes `TensorSummaryV2` to crash ([CVE-2022-29193](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29193))
* Fixes a missing validation which crashes `QuantizeAndDequantizeV4Grad` ([CVE-2022-29192](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29192))
* Fixes a missing validation which causes denial of service via `DeleteSessionTensor` ([CVE-2022-29194](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29194))
* Fixes a missing validation which causes denial of service via `GetSessionTensor` ([CVE-2022-29191](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29191))
* Fixes a missing validation which causes denial of service via `StagePeek` ([CVE-2022-29195](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29195))
* Fixes a missing validation which causes denial of service via `UnsortedSegmentJoin` ([CVE-2022-29197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29197))
* Fixes a missing validation which causes denial of service via `LoadAndRemapMatrix` ([CVE-2022-29199](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29199))
* Fixes a missing validation which causes denial of service via `SparseTensorToCSRSparseMatrix` ([CVE-2022-29198](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29198))
* Fixes a missing validation which causes denial of service via `LSTMBlockCell` ([CVE-2022-29200](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29200))
* Fixes a missing validation which causes denial of service via `Conv3DBackpropFilterV2` ([CVE-2022-29196](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29196))
* Fixes a `CHECK` failure in depthwise ops via overflows ([CVE-2021-41197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41197))
* Fixes issues arising from undefined behavior stemming from users supplying invalid resource handles ([CVE-2022-29207](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29207))
* Fixes a segfault due to missing support for quantized types ([CVE-2022-29205](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29205))
* Fixes a missing validation which results in undefined behavior in `SparseTensorDenseAdd` ([CVE-2022-29206](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29206))
* Fixes a missing validation which results in undefined behavior in `QuantizedConv2D` ([CVE-2022-29201](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29201))
* Fixes an integer overflow in `SpaceToBatchND` ([CVE-2022-29203](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29203))
* Fixes a segfault and OOB write due to incomplete validation in `EditDistance` ([CVE-2022-29208](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29208))
* Fixes a missing validation which causes denial of service via `Conv3DBackpropFilterV2` ([CVE-2022-29204](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29204))
* Fixes a denial of service in `tf.ragged.constant` due to lack of validation ([CVE-2022-29202](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29202))
* Fixes a segfault when `tf.histogram_fixed_width` is called with NaN values ([CVE-2022-29211](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29211))
* Fixes a core dump when loading TFLite models with quantization ([CVE-2022-29212](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29212))
* Fixes crashes stemming from incomplete validation in signal ops ([CVE-2022-29213](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29213))
* Fixes a type confusion leading to `CHECK`-failure based denial of service ([CVE-2022-29209](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29209))
* Fixes a heap buffer overflow due to incorrect hash function ([CVE-2022-29210](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29210))
* Updates `curl` to `7.83.1` to handle [CVE-2022-22576](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-22576), [CVE-2022-27774](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27774), [CVE-2022-27775](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27775), [CVE-2022-27776](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27776), [CVE-2022-27778](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27778), [CVE-2022-27779](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27779), [CVE-2022-27780](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27780), [CVE-2022-27781](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27781), [CVE-2022-27782](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27782) and [CVE-2022-30115](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-30115)
* Updates `zlib` to `1.2.12` after `1.2.11` was pulled due to [security issue](https://www.openwall.com/lists/oss-security/2022/03/28/1)
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Aaron Debattista, Abel Soares Siqueira, Abhishek Varma, Andrei Ivanov, andreii, Andrew Goodbody, apeltop, Arnab Dutta, Ashiq Imran, Banikumar Maiti (Intel Aipg), Ben Greiner, Benjamin Peterson, bhack, Christopher Bate, chunduriv, Copybara-Service, DEKHTIARJonathan, Deven Desai, Duncan Riach, Eric Kunze, Everton Constantino, Faruk D, Fredrik Knutsson, gadagashwini, Gauri1 Deshpande, gtiHibGele, Guozhong Zhuang, Islem-Esi, Ivanov Viktor, Jason Furmanek, Jason Zaman, Jim, Jinzhe Zeng, John Laxson, Jonas Eschle, Jonas Eschle 'Mayou36, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, KaurkerDevourer, Koan-Sin Tan, kushanam, Laramie Leavitt, Li-Wen Chang, lipracer, Louis Sugy, Lu Teng, Mahmoud Abuzaina, Malcolm Slaney, Malik Shahzad Muzaffar, Marek Šuppa, Matt Conley, Michael Melesse, Milos Puzovic, mohantym, Nathan John Sircombe, Nathan Luehr, Nilesh Agarwalla, Patrice Vignola, peterjc123, Philip Turner, Rajeshwar Reddy T, Robert Kalmar, Rodrigo Formigone, Rohit Santhanam, rui, Sachin Muradi, Saduf2019, sandip, Scott Leishman, Serge Panev, Shi,Guangyong, Srinivasan Narayanamoorthy, stanley, Steven I Reeves, stevenireeves, sushreebarsa, Tamas Bela Feher, Tao He, Thomas Schmeyer, Tiago Almeida, Trevor Morris, Uday Bondhugula, Uwe L. Korn, Varghese, Jojimon, Vishnuvardhan Janapati, William Muir, William Raveane, xutianming, Yasuhiro Matsumoto, Yimei Sun, Yong Tang, Yu Feng, Yuriy Chernyshov, zhaozheng09
```
### 2.8.2
```
Add an upper bound for `protobuf` in `setup.py` since `protobuf` after version 3.20 is currently incompatible with TensorFlow. See https://github.com/tensorflow/tensorflow/issues/53234, https://github.com/protocolbuffers/protobuf/issues/9954 and https://github.com/tensorflow/tensorflow/issues/56077.
```
### 2.8.1
```
This release introduces several vulnerability fixes:
* Fixes a code injection in `saved_model_cli` ([CVE-2022-29216](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29216))
* Fixes a missing validation which causes `TensorSummaryV2` to crash ([CVE-2022-29193](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29193))
* Fixes a missing validation which crashes `QuantizeAndDequantizeV4Grad` ([CVE-2022-29192](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29192))
* Fixes a missing validation which causes denial of service via `DeleteSessionTensor` ([CVE-2022-29194](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29194))
* Fixes a missing validation which causes denial of service via `GetSessionTensor` ([CVE-2022-29191](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29191))
* Fixes a missing validation which causes denial of service via `StagePeek` ([CVE-2022-29195](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29195))
* Fixes a missing validation which causes denial of service via `UnsortedSegmentJoin` ([CVE-2022-29197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29197))
* Fixes a missing validation which causes denial of service via `LoadAndRemapMatrix` ([CVE-2022-29199](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29199))
* Fixes a missing validation which causes denial of service via `SparseTensorToCSRSparseMatrix` ([CVE-2022-29198](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29198))
* Fixes a missing validation which causes denial of service via `LSTMBlockCell` ([CVE-2022-29200](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29200))
* Fixes a missing validation which causes denial of service via `Conv3DBackpropFilterV2` ([CVE-2022-29196](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29196))
* Fixes a `CHECK` failure in depthwise ops via overflows ([CVE-2021-41197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41197))
* Fixes issues arising from undefined behavior stemming from users supplying invalid resource handles ([CVE-2022-29207](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29207))
* Fixes a segfault due to missing support for quantized types ([CVE-2022-29205](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29205))
* Fixes a missing validation which results in undefined behavior in `SparseTensorDenseAdd` ([CVE-2022-29206](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29206))
* Fixes a missing validation which results in undefined behavior in `QuantizedConv2D` ([CVE-2022-29201](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29201))
* Fixes an integer overflow in `SpaceToBatchND` ([CVE-2022-29203](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29203))
* Fixes a segfault and OOB write due to incomplete validation in `EditDistance` ([CVE-2022-29208](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29208))
* Fixes a missing validation which causes denial of service via `Conv3DBackpropFilterV2` ([CVE-2022-29204](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29204))
* Fixes a denial of service in `tf.ragged.constant` due to lack of validation ([CVE-2022-29202](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29202))
* Fixes a segfault when `tf.histogram_fixed_width` is called with NaN values ([CVE-2022-29211](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29211))
* Fixes a core dump when loading TFLite models with quantization ([CVE-2022-29212](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29212))
* Fixes crashes stemming from incomplete validation in signal ops ([CVE-2022-29213](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29213))
* Fixes a type confusion leading to `CHECK`-failure based denial of service ([CVE-2022-29209](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29209))
* Fixes a heap buffer overflow due to incorrect hash function ([CVE-2022-29210](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29210))
* Updates `curl` to `7.83.1` to handle [CVE-2022-22576](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-22576), [CVE-2022-27774](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27774), [CVE-2022-27775](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27775), [CVE-2022-27776](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27776), [CVE-2022-27778](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27778), [CVE-2022-27779](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27779), [CVE-2022-27780](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27780), [CVE-2022-27781](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27781), [CVE-2022-27782](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27782) and [CVE-2022-30115](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-30115)
* Updates `zlib` to `1.2.12` after `1.2.11` was pulled due to [security issue](https://www.openwall.com/lists/oss-security/2022/03/28/1)
```
### 2.8.0
```
Major Features and Improvements
* `tf.lite`:
* Added TFLite builtin op support for the following TF ops:
* `tf.raw_ops.Bucketize` op on CPU.
* `tf.where` op for data types
`tf.int32`/`tf.uint32`/`tf.int8`/`tf.uint8`/`tf.int64`.
* `tf.random.normal` op for output data type `tf.float32` on CPU.
* `tf.random.uniform` op for output data type `tf.float32` on CPU.
* `tf.random.categorical` op for output data type `tf.int64` on CPU.
* `tensorflow.experimental.tensorrt`:
* `conversion_params` is now deprecated inside `TrtGraphConverterV2` in
favor of direct arguments: `max_workspace_size_bytes`, `precision_mode`,
`minimum_segment_size`, `maximum_cached_engines`, `use_calibration` and
`allow_build_at_runtime`.
* Added a new parameter called `save_gpu_specific_engines` to the
`.save()` function inside `TrtGraphConverterV2`. When `False`, the
`.save()` function won't save any TRT engines that have been built. When
`True` (default), the original behavior is preserved.
    * `TrtGraphConverterV2` provides a new API called `.summary()` which
      outputs a summary of the inference graph converted by TF-TRT, showing
      each `TRTEngineOp` with the shapes and dtypes of its inputs and outputs.
      A detailed version of the summary is also available, which additionally
      prints all the TensorFlow ops included in each `TRTEngineOp`.
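      A sketch of the conversion flow described above (paths are placeholders
      and a TensorRT-enabled build is assumed):
          import tensorflow as tf
          converter = tf.experimental.tensorrt.Converter(
              input_saved_model_dir="my_saved_model",
              precision_mode="FP16",                 # direct argument instead of conversion_params
              max_workspace_size_bytes=1 << 30)
          converter.convert()
          converter.summary()                        # per-TRTEngineOp input/output shapes and dtypes
          converter.save("my_trt_saved_model", save_gpu_specific_engines=False)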
* `tf.tpu.experimental.embedding`:
* `tf.tpu.experimental.embedding.FeatureConfig` now takes an additional
argument `output_shape` which can specify the shape of the output
activation for the feature.
    * `tf.tpu.experimental.embedding.TPUEmbedding` now has the same behavior
      as `tf.tpu.experimental.embedding.serving_embedding_lookup`, which can
      take dense and sparse tensors of arbitrary rank. For ragged tensors,
      the input tensor must still be rank 2, but the activations can now be
      rank 2 or above by specifying the output shape in the feature config or
      via the build method.
* Add
[`tf.config.experimental.enable_op_determinism`](https://www.tensorflow.org/api_docs/python/tf/config/experimental/enable_op_determinism),
which makes TensorFlow ops run deterministically at the cost of performance.
Replaces the `TF_DETERMINISTIC_OPS` environmental variable, which is now
deprecated. The "Bug Fixes and Other Changes" section lists more
determinism-related changes.
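  For example (a sketch; seeding plus the new API gives reproducible runs):
      import tensorflow as tf
      tf.keras.utils.set_random_seed(1)               # seed Python, NumPy and TensorFlow RNGs
      tf.config.experimental.enable_op_determinism()  # replaces the TF_DETERMINISTIC_OPS variable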
* (Since TF 2.7) Add
[PluggableDevice](https://blog.tensorflow.org/2021/06/pluggabledevice-device-plugins-for-TensorFlow.html)
support to
[TensorFlow Profiler](https://github.com/tensorflow/community/blob/master/rfcs/20210513-pluggable-profiler-for-tensorflow.md).
Bug Fixes and Other Changes
* `tf.data`:
* Fixed a bug where setting `options.deterministic = False` would only
modify one transformation to run non-deterministically, leaving other
transformations deterministic. The option will now apply the same across
all transformations.
* The optimization `parallel_batch` now becomes default if not disabled by
users, which will parallelize copying of batch elements.
* Added the ability for `TensorSliceDataset` to identify and handle inputs
that are files. This enables creating hermetic SavedModels when using
datasets created from files.
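      A sketch of setting the option, on an illustrative pipeline:
          import tensorflow as tf
          ds = tf.data.Dataset.range(100).map(lambda x: x + 1, num_parallel_calls=4)
          options = tf.data.Options()
          options.deterministic = False  # now applies uniformly to all transformations
          ds = ds.with_options(options)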
* `tf.lite`:
    * Adds GPU delegate serialization support to the Java API. This speeds up
      initialization by up to 90% when OpenCL is available.
* Deprecated `Interpreter::SetNumThreads`, in favor of
`InterpreterBuilder::SetNumThreads`.
* `tf.keras`:
* Adds `tf.compat.v1.keras.utils.get_or_create_layer` to aid migration to
TF2 by enabling tracking of nested keras models created in TF1-style,
when used with the `tf.compat.v1.keras.utils.track_tf1_style_variables`
decorator.
* Added a `tf.keras.layers.experimental.preprocessing.HashedCrossing`
layer which applies the hashing trick to the concatenation of crossed
scalar inputs. This provides a stateless way to try adding feature
crosses of integer or string data to a model.
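      For example (a sketch; the feature values are illustrative):
          import tensorflow as tf
          layer = tf.keras.layers.experimental.preprocessing.HashedCrossing(num_bins=10)
          crossed = layer((tf.constant(["a", "b", "a"]), tf.constant([1, 2, 2])))  # hashed feature cross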
* Removed `keras.layers.experimental.preprocessing.CategoryCrossing`.
Users should migrate to the `HashedCrossing` layer or use
`tf.sparse.cross`/`tf.ragged.cross` directly.
* Added additional `standardize` and `split` modes to `TextVectorization`:
* `standardize="lower"` will lowercase inputs.
        * `standardize="strip_punctuation"` will remove all punctuation.
* `split="character"` will split on every unicode character.
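      For example (a sketch combining two of the new modes on illustrative
      data):
          import tensorflow as tf
          tv = tf.keras.layers.TextVectorization(standardize="lower", split="character")
          tv.adapt(tf.constant(["Hello TF"]))
          tokens = tv(tf.constant(["Hello"]))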
* Added an `output_mode` argument to the `Discretization` and `Hashing`
layers with the same semantics as other preprocessing layers. All
categorical preprocessing layers now support `output_mode`.
* All preprocessing layer output will follow the compute dtype of a
`tf.keras.mixed_precision.Policy`, unless constructed with
`output_mode="int"` in which case output will be `tf.int64`. The output
type of any preprocessing layer can be controlled individually by
passing a `dtype` argument to the layer.
    * `tf.random.Generator` for Keras initializers and all RNG code.
    * Added 3 new APIs to enable, disable, and check the usage of
      `tf.random.Generator` in the Keras backend, which will be the new
      backend for all RNG in Keras. We plan to switch on the new code path by
      default in TF 2.8, and the behavior change will likely cause some
      breakage on the user side (e.g., if a test checks against a golden
      number). These 3 APIs allow users to disable the new behavior and
      switch back to the legacy behavior if they prefer. In the future (e.g.,
      TF 2.10), we expect to remove the legacy code path (stateful random
      ops) entirely, and these 3 APIs will be removed as well.
* `tf.keras.callbacks.experimental.BackupAndRestore` is now available as
`tf.keras.callbacks.BackupAndRestore`. The experimental endpoint is
deprecated and will be removed in a future release.
* `tf.keras.experimental.SidecarEvaluator` is now available as
`tf.keras.utils.SidecarEvaluator`. The experimental endpoint is
deprecated and will be removed in a future release.
* Metrics update and collection logic in default `Model.train_step()` is
now customizable via overriding `Model.compute_metrics()`.
    * Loss computation logic in default `Model.train_step()` is now
      customizable via overriding `Model.compute_loss()`.
* `jit_compile` added to `Model.compile()` on an opt-in basis to compile
the model's training step with [XLA](https://www.tensorflow.org/xla).
Note that `jit_compile=True` may not necessarily work for all models.
* Deterministic Op Functionality:
* Fix regression in deterministic selection of deterministic cuDNN
convolution algorithms, a regression that was introduced in v2.5. Note
that nondeterministic out-of-memory events while selecting algorithms
could still lead to nondeterminism, although this is very unlikely. This
additional, unlikely source will be eliminated in a later version.
* Add deterministic GPU implementations of:
        * Functions decorated with `tf.function(jit_compile=True)` that use `Scatter`.
* (since v2.7) Stateful ops used in `tf.data.Dataset`
* (since v2.7) `tf.convert_to_tensor` when fed with (sparse)
`tf.IndexedSlices` (because it uses `tf.math.unsorted_segment_sum`)
* (since v2.7) `tf.gather` backprop (because `tf.convert_to_tensor`
reduces `tf.gather`'s (sparse) `tf.IndexedSlices` gradients into its
dense `params` input)
* (since v2.7) `tf.math.segment_mean`
* (since v2.7) `tf.math.segment_prod`
* (since v2.7) `tf.math.segment_sum`
* (since v2.7) `tf.math.unsorted_segment_mean`
* (since v2.7) `tf.math.unsorted_segment_prod`
* (since v2.7) `tf.math.unsorted_segment_sum`
        * (since v2.7) `tf.math.unsorted_segment_sqrt_n`
* (since v2.7) `tf.nn.ctc_loss` (resolved, possibly in prior release,
and confirmed with tests)
        * (since v2.7) `tf.nn.sparse_softmax_cross_entropy_with_logits`
* (since v2.7) Run `tf.scatter_nd` and other related scatter functions,
such as `tf.tensor_scatter_nd_update`, on CPU (with significant
performance penalty).
    * Add determinism-unimplemented exception-throwing to the following ops.
      When op-determinism is expected (i.e. after
      `tf.config.experimental.enable_op_determinism` has been called), an
      attempt to use the specified paths through the following ops on a GPU
      will cause a `tf.errors.UnimplementedError` (with an understandable
      message) to be thrown, unless otherwise specified.
* `FakeQuantWithMinMaxVarsGradient` and
`FakeQuantWithMinMaxVarsPerChannelGradient`
* (since v2.7) `tf.compat.v1.get_seed` if the global random seed has
not yet been set (via `tf.random.set_seed`). Throws `RuntimeError`
from Python or `InvalidArgument` from C++
* (since v2.7) `tf.compat.v1.nn.fused_batch_norm` backprop to `offset`
when `is_training=False`
* (since v2.7) `tf.image.adjust_contrast` forward
* (since v2.7) `tf.image.resize` with `method=ResizeMethod.NEAREST`
backprop
* (since v2.7) `tf.linalg.svd`
* (since v2.7) `tf.math.bincount`
* (since v2.7) `tf.nn.depthwise_conv2d` backprop to `filter` when not
using cuDNN convolution
* (since v2.7) `tf.nn.dilation2d` gradient
* (since v2.7) `tf.nn.max_pool_with_argmax` gradient
* (since v2.7) `tf.raw_ops.DebugNumericSummary` and
`tf.raw_ops.DebugNumericSummaryV2`
* (since v2.7) `tf.timestamp`. Throws `FailedPrecondition`
* (since v2.7) `tf.Variable.scatter_add` (and other scatter methods,
both on ref and resource variables)
* (since v2.7) The random-number-generating ops in the `tf.random`
module when the global random seed has not yet been set (via
`tf.random.set_seed`). Throws `RuntimeError` from Python or
`InvalidArgument` from C++
* TensorFlow-oneDNN no longer supports
[explicit use of oneDNN blocked tensor format](https://github.com/tensorflow/tensorflow/pull/53288),
e.g., setting the environment variable `TF_ENABLE_MKL_NATIVE_FORMAT` will
not have any effect.
* TensorFlow has been validated on Windows Subsystem for Linux 2 (aka WSL 2)
for both GPUs and CPUs.
* Due to security issues (see section below), all boosted trees code has been
deprecated. Users should switch to
[TensorFlow Decision Forests](https://github.com/tensorflow/decision-forests).
  TF's boosted trees code will be eliminated before the branch cut for TF 2.9
  and will no longer be present from that release onward.
Security
* Fixes a floating point division by 0 when executing convolution operators
([CVE-2022-21725](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21725))
* Fixes a heap OOB read in shape inference for `ReverseSequence`
([CVE-2022-21728](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21728))
* Fixes a heap OOB access in `Dequantize`
([CVE-2022-21726](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21726))
* Fixes an integer overflow in shape inference for `Dequantize`
([CVE-2022-21727](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21727))
* Fixes a heap OOB access in `FractionalAvgPoolGrad`
([CVE-2022-21730](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21730))
* Fixes an overflow and divide by zero in `UnravelIndex`
([CVE-2022-21729](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21729))
* Fixes a type confusion in shape inference for `ConcatV2`
([CVE-2022-21731](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21731))
* Fixes an OOM in `ThreadPoolHandle`
([CVE-2022-21732](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21732))
* Fixes an OOM due to integer overflow in `StringNGrams`
([CVE-2022-21733](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21733))
* Fixes more issues caused by incomplete validation in boosted trees code
([CVE-2021-41208](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41208))
* Fixes integer overflows in most sparse component-wise ops
([CVE-2022-23567](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23567))
* Fixes an integer overflow in `AddManySparseToTensorsMap`
([CVE-2022-23568](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23568))
* Fixes a number of `CHECK`-failures in `MapStage`
([CVE-2022-21734](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21734))
* Fixes a division by zero in `FractionalMaxPool`
([CVE-2022-21735](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21735))
* Fixes a number of `CHECK`-fails when building invalid/overflowing tensor
shapes
([CVE-2022-23569](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23569))
* Fixes an undefined behavior in `SparseTensorSliceDataset`
([CVE-2022-21736](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21736))
* Fixes an assertion failure based denial of service via faulty bin count
operations
([CVE-2022-21737](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21737))
* Fixes a reference binding to null pointer in `QuantizedMaxPool`
([CVE-2022-21739](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21739))
* Fixes an integer overflow leading to crash in `SparseCountSparseOutput`
([CVE-2022-21738](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21738))
* Fixes a heap overflow in `SparseCountSparseOutput`
([CVE-2022-21740](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21740))
* Fixes an FPE in `BiasAndClamp` in TFLite
([CVE-2022-23557](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23557))
* Fixes an FPE in depthwise convolutions in TFLite
([CVE-2022-21741](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21741))
* Fixes an integer overflow in TFLite array creation
([CVE-2022-23558](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23558))
* Fixes an integer overflow in TFLite
([CVE-2022-23559](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23559))
* Fixes a dangerous OOB write in TFLite
([CVE-2022-23561](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23561))
* Fixes a vulnerability leading to read and write outside of bounds in TFLite
([CVE-2022-23560](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23560))
* Fixes a set of vulnerabilities caused by using insecure temporary files
([CVE-2022-23563](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23563))
* Fixes an integer overflow in Range resulting in undefined behavior and OOM
([CVE-2022-23562](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23562))
* Fixes a vulnerability where missing validation causes `tf.sparse.split` to
crash when `axis` is a tuple
([CVE-2021-41206](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41206))
* Fixes a `CHECK`-fail when decoding resource handles from proto
([CVE-2022-23564](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23564))
* Fixes a `CHECK`-fail with repeated `AttrDef`
([CVE-2022-23565](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23565))
* Fixes a heap OOB write in Grappler
([CVE-2022-23566](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23566))
* Fixes a `CHECK`-fail when decoding invalid tensors from proto
([CVE-2022-23571](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23571))
* Fixes a null-dereference when specializing tensor type
([CVE-2022-23570](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23570))
* Fixes a crash when type cannot be specialized
([CVE-2022-23572](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23572))
* Fixes a heap OOB read/write in `SpecializeType`
([CVE-2022-23574](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23574))
* Fixes an uninitialized variable access in `AssignOp`
([CVE-2022-23573](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23573))
* Fixes an integer overflow in `OpLevelCostEstimator::CalculateTensorSize`
([CVE-2022-23575](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23575))
* Fixes an integer overflow in `OpLevelCostEstimator::CalculateOutputSize`
([CVE-2022-23576](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23576))
* Fixes a null dereference in `GetInitOp`
([CVE-2022-23577](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23577))
* Fixes a memory leak when a graph node is invalid
([CVE-2022-23578](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23578))
* Fixes an abort caused by allocating a vector that is too large
([CVE-2022-23580](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23580))
* Fixes multiple `CHECK`-failures during Grappler's `IsSimplifiableReshape`
([CVE-2022-23581](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23581))
* Fixes multiple `CHECK`-failures during Grappler's `SafeToRemoveIdentity`
([CVE-2022-23579](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23579))
* Fixes multiple `CHECK`-failures in `TensorByteSize`
([CVE-2022-23582](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23582))
* Fixes multiple `CHECK`-failures in binary ops due to type confusion
([CVE-2022-23583](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23583))
* Fixes a use after free in `DecodePng` kernel
([CVE-2022-23584](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23584))
* Fixes a memory leak in decoding PNG images
([CVE-2022-23585](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23585))
* Fixes multiple `CHECK`-fails in `function.cc`
([CVE-2022-23586](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23586))
* Fixes multiple `CHECK`-fails due to attempting to build a reference tensor
([CVE-2022-23588](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23588))
* Fixes an integer overflow in Grappler cost estimation of crop and resize
operation
([CVE-2022-23587](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23587))
* Fixes a null pointer dereference in Grappler's `IsConstant`
([CVE-2022-23589](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23589))
* Fixes a `CHECK` failure in constant folding
([CVE-2021-41197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41197))
* Fixes a stack overflow due to self-recursive function in `GraphDef`
([CVE-2022-23591](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23591))
* Fixes a heap OOB access in `RunForwardTypeInference`
([CVE-2022-23592](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23592))
* Fixes a crash due to erroneous `StatusOr`
([CVE-2022-23590](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23590))
* Fixes multiple crashes and heap OOB accesses in TFG dialect (MLIR)
([CVE-2022-23594](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23594))
* Fixes a segfault in `simplifyBroadcast` (MLIR)
([CVE-2022-23593](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23593))
* Fixes a null pointer dereference in `BuildXlaCompilationCache` (XLA)
([CVE-2022-23595](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23595))
* Updates `icu` to `69.1` to handle
[CVE-2020-10531](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10531)
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
8bitmp3, Adam Lanicek, ag.ramesh, alesapin, Andrew Goodbody, annasuheyla, Ariel
Elkin, Arnab Dutta, Ben Barsdell, bhack, cfRod, Chengji Yao, Christopher Bate,
dan, Dan F-M, David Korczynski, DEKHTIARJonathan, dengzhiyuan, Deven Desai,
Duncan Riach, Eli Osherovich, Ewout Ter Hoeven, ez2take, Faijul Amin, fo40225,
Frederic Bastien, gadagashwini, Gauri1 Deshpande, Georgiy Manuilov, Guilherme De
Lázari, Guozhong Zhuang, H1Gdev, homuler, Hongxu Jia, Jacky_Yin, jayfurmanek,
jgehw, Jhalak Patel, Jinzhe Zeng, Johan Gunnarsson, Jonathan Dekhtiar, Kaixi
Hou, Kanvi Khanna, Kevin Cheng, Koan-Sin Tan, Kruglov-Dmitry, Kun Lu, Lemo,
Lequn Chen, long.chen, Louis Sugy, Mahmoud Abuzaina, Mao, Marius Brehler, Mark
Harfouche, Martin Patz, Maxiwell S. Garcia, Meenakshi Venkataraman, Michael
Melesse, Mrinal Tyagi, Måns Nilsson, Nathan John Sircombe, Nathan Luehr, Nilesh
Agarwalla, Oktay Ozturk, Patrice Vignola, Pawel-Polyai, Rama Ketineni, Ramesh
Sampath, Reza Rahimi, Rob Suderman, Robert Kalmar, Rohit Santhanam, Sachin
Muradi, Saduf2019, Samuel Marks, Shi,Guangyong, Sidong-Wei, Srinivasan
Narayanamoorthy, Srishti Srivastava, Steven I Reeves, stevenireeves, Supernovae,
Tamas Bela Feher, Tao Xu, Thibaut Goetghebuer-Planchon, Thomas Schmeyer,
tilakrayal, Valery Mironov, Victor Guo, Vignesh Kothapalli, Vishnuvardhan
Janapati, wamuir, Wang,Quintin, William Muir, William Raveane, Yash Goel, Yimei
Sun, Yong Tang, Yuduo Wu
```
### 2.7.3
```
Add an upper bound for `protobuf` in `setup.py` since `protobuf` after version 3.20 is currently incompatible with TensorFlow. See https://github.com/tensorflow/tensorflow/issues/53234, https://github.com/protocolbuffers/protobuf/issues/9954 and https://github.com/tensorflow/tensorflow/issues/56077.
```
### 2.7.2
```
This release introduces several vulnerability fixes:
* Fixes a code injection in `saved_model_cli` ([CVE-2022-29216](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29216))
* Fixes a missing validation which causes `TensorSummaryV2` to crash ([CVE-2022-29193](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29193))
* Fixes a missing validation which crashes `QuantizeAndDequantizeV4Grad` ([CVE-2022-29192](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29192))
* Fixes a missing validation which causes denial of service via `DeleteSessionTensor` ([CVE-2022-29194](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29194))
* Fixes a missing validation which causes denial of service via `GetSessionTensor` ([CVE-2022-29191](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29191))
* Fixes a missing validation which causes denial of service via `StagePeek` ([CVE-2022-29195](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29195))
* Fixes a missing validation which causes denial of service via `UnsortedSegmentJoin` ([CVE-2022-29197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29197))
* Fixes a missing validation which causes denial of service via `LoadAndRemapMatrix` ([CVE-2022-29199](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29199))
* Fixes a missing validation which causes denial of service via `SparseTensorToCSRSparseMatrix` ([CVE-2022-29198](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29198))
* Fixes a missing validation which causes denial of service via `LSTMBlockCell` ([CVE-2022-29200](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29200))
* Fixes a missing validation which causes denial of service via `Conv3DBackpropFilterV2` ([CVE-2022-29196](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29196))
* Fixes a `CHECK` failure in depthwise ops via overflows ([CVE-2021-41197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41197))
* Fixes issues arising from undefined behavior stemming from users supplying invalid resource handles ([CVE-2022-29207](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29207))
* Fixes a segfault due to missing support for quantized types ([CVE-2022-29205](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29205))
* Fixes a missing validation which results in undefined behavior in `SparseTensorDenseAdd` ([CVE-2022-29206](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29206))
* Fixes a missing validation which results in undefined behavior in `QuantizedConv2D` ([CVE-2022-29201](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29201))
* Fixes an integer overflow in `SpaceToBatchND` ([CVE-2022-29203](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29203))
* Fixes a segfault and OOB write due to incomplete validation in `EditDistance` ([CVE-2022-29208](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29208))
* Fixes a missing validation which causes denial of service via `Conv3DBackpropFilterV2` ([CVE-2022-29204](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29204))
* Fixes a denial of service in `tf.ragged.constant` due to lack of validation ([CVE-2022-29202](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29202))
* Fixes a segfault when `tf.histogram_fixed_width` is called with NaN values ([CVE-2022-29211](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29211))
* Fixes a core dump when loading TFLite models with quantization ([CVE-2022-29212](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29212))
* Fixes crashes stemming from incomplete validation in signal ops ([CVE-2022-29213](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29213))
* Fixes a type confusion leading to `CHECK`-failure based denial of service ([CVE-2022-29209](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29209))
* Updates `curl` to `7.83.1` to handle [CVE-2022-22576](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-22576), [CVE-2022-27774](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27774), [CVE-2022-27775](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27775), [CVE-2022-27776](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27776), [CVE-2022-27778](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27778), [CVE-2022-27779](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27779), [CVE-2022-27780](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27780), [CVE-2022-27781](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27781), [CVE-2022-27782](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27782) and [CVE-2022-30115](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-30115)
* Updates `zlib` to `1.2.12` after `1.2.11` was pulled due to [security issue](https://www.openwall.com/lists/oss-security/2022/03/28/1)
```
### 2.7.1
```
This release introduces several vulnerability fixes:
* Fixes a floating point division by 0 when executing convolution operators
([CVE-2022-21725](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21725))
* Fixes a heap OOB read in shape inference for `ReverseSequence`
([CVE-2022-21728](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21728))
* Fixes a heap OOB access in `Dequantize`
([CVE-2022-21726](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21726))
* Fixes an integer overflow in shape inference for `Dequantize`
([CVE-2022-21727](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21727))
* Fixes a heap OOB access in `FractionalAvgPoolGrad`
([CVE-2022-21730](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21730))
* Fixes an overflow and divide by zero in `UnravelIndex`
([CVE-2022-21729](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21729))
* Fixes a type confusion in shape inference for `ConcatV2`
([CVE-2022-21731](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21731))
* Fixes an OOM in `ThreadPoolHandle`
([CVE-2022-21732](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21732))
* Fixes an OOM due to integer overflow in `StringNGrams`
([CVE-2022-21733](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21733))
* Fixes more issues caused by incomplete validation in boosted trees code
([CVE-2021-41208](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41208))
* Fixes integer overflows in most sparse component-wise ops
([CVE-2022-23567](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23567))
* Fixes an integer overflow in `AddManySparseToTensorsMap`
([CVE-2022-23568](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23568))
* Fixes a number of `CHECK`-failures in `MapStage`
([CVE-2022-21734](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21734))
* Fixes a division by zero in `FractionalMaxPool`
([CVE-2022-21735](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21735))
* Fixes a number of `CHECK`-fails when building invalid/overflowing tensor
shapes
([CVE-2022-23569](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23569))
* Fixes an undefined behavior in `SparseTensorSliceDataset`
([CVE-2022-21736](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21736))
* Fixes an assertion failure based denial of service via faulty bin count
operations
([CVE-2022-21737](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21737))
* Fixes a reference binding to null pointer in `QuantizedMaxPool`
([CVE-2022-21739](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21739))
* Fixes an integer overflow leading to crash in `SparseCountSparseOutput`
([CVE-2022-21738](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21738))
* Fixes a heap overflow in `SparseCountSparseOutput`
([CVE-2022-21740](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21740))
* Fixes an FPE in `BiasAndClamp` in TFLite
([CVE-2022-23557](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23557))
* Fixes an FPE in depthwise convolutions in TFLite
([CVE-2022-21741](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21741))
* Fixes an integer overflow in TFLite array creation
([CVE-2022-23558](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23558))
* Fixes an integer overflow in TFLite
([CVE-2022-23559](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23559))
* Fixes a dangerous OOB write in TFLite
([CVE-2022-23561](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23561))
* Fixes a vulnerability leading to read and write outside of bounds in TFLite
([CVE-2022-23560](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23560))
* Fixes a set of vulnerabilities caused by using insecure temporary files
([CVE-2022-23563](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23563))
* Fixes an integer overflow in Range resulting in undefined behavior and OOM
([CVE-2022-23562](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23562))
* Fixes a vulnerability where missing validation causes `tf.sparse.split` to
crash when `axis` is a tuple
([CVE-2021-41206](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41206))
* Fixes a `CHECK`-fail when decoding resource handles from proto
([CVE-2022-23564](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23564))
* Fixes a `CHECK`-fail with repeated `AttrDef`
([CVE-2022-23565](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23565))
* Fixes a heap OOB write in Grappler
([CVE-2022-23566](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23566))
* Fixes a `CHECK`-fail when decoding invalid tensors from proto
([CVE-2022-23571](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23571))
* Fixes a null-dereference when specializing tensor type
([CVE-2022-23570](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23570))
* Fixes a crash when type cannot be specialized
([CVE-2022-23572](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23572))
* Fixes a heap OOB read/write in `SpecializeType`
([CVE-2022-23574](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23574))
* Fixes an uninitialized variable access in `AssignOp`
([CVE-2022-23573](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23573))
* Fixes an integer overflow in `OpLevelCostEstimator::CalculateTensorSize`
([CVE-2022-23575](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23575))
* Fixes an integer overflow in `OpLevelCostEstimator::CalculateOutputSize`
([CVE-2022-23576](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23576))
* Fixes a null dereference in `GetInitOp`
([CVE-2022-23577](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23577))
* Fixes a memory leak when a graph node is invalid
([CVE-2022-23578](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23578))
* Fixes an abort caused by allocating a vector that is too large
([CVE-2022-23580](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23580))
* Fixes multiple `CHECK`-failures during Grappler's `IsSimplifiableReshape`
([CVE-2022-23581](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23581))
* Fixes multiple `CHECK`-failures during Grappler's `SafeToRemoveIdentity`
([CVE-2022-23579](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23579))
* Fixes multiple `CHECK`-failures in `TensorByteSize`
([CVE-2022-23582](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23582))
* Fixes multiple `CHECK`-failures in binary ops due to type confusion
([CVE-2022-23583](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23583))
* Fixes a use after free in `DecodePng` kernel
([CVE-2022-23584](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23584))
* Fixes a memory leak in decoding PNG images
([CVE-2022-23585](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23585))
* Fixes multiple `CHECK`-fails in `function.cc`
([CVE-2022-23586](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23586))
* Fixes multiple `CHECK`-fails due to attempting to build a reference tensor
([CVE-2022-23588](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23588))
* Fixes an integer overflow in Grappler cost estimation of crop and resize
operation
([CVE-2022-23587](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23587))
* Fixes a null pointer dereference in Grappler's `IsConstant`
([CVE-2022-23589](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23589))
* Fixes a `CHECK` failure in constant folding
([CVE-2021-41197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41197))
* Fixes a stack overflow due to self-recursive function in `GraphDef`
([CVE-2022-23591](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23591))
* Fixes a crash due to erroneous `StatusOr`
([CVE-2022-23590](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23590))
* Fixes multiple crashes and heap OOB accesses in TFG dialect (MLIR)
([CVE-2022-23594](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23594))
* Fixes a null pointer dereference in `BuildXlaCompilationCache` (XLA)
([CVE-2022-23595](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23595))
* Updates `icu` to `69.1` to handle
[CVE-2020-10531](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10531)
```
### 2.7.0
```
Breaking Changes
* `tf.keras`:
* The methods `Model.fit()`, `Model.predict()`, and `Model.evaluate()`
will no longer uprank input data of shape `(batch_size,)` to become
`(batch_size, 1)`. This enables `Model` subclasses to process scalar
data in their `train_step()`/`test_step()`/`predict_step()` methods. \
Note that this change may break certain subclassed models. You can
revert to the previous behavior by adding the upranking yourself in the
`train_step()`/`test_step()`/`predict_step()` methods, e.g. `if
x.shape.rank == 1: x = tf.expand_dims(x, axis=-1)` (see the sketch at
the end of this section). Functional models as well as Sequential
models built with an explicit input shape are not affected.
* The methods `Model.to_yaml()` and `keras.models.model_from_yaml` now
raise a `RuntimeError`, as they can be abused to cause arbitrary code
execution. It is recommended to use JSON serialization instead of YAML,
or, as a better alternative, serialize to H5 (see the sketch at the end
of this section).
* `LinearModel` and `WideDeepModel` are moved to the
`tf.compat.v1.keras.models.` namespace
(`tf.compat.v1.keras.models.LinearModel` and
`tf.compat.v1.keras.models.WideDeepModel`), and their `experimental`
endpoints (`tf.keras.experimental.models.LinearModel` and
`tf.keras.experimental.models.WideDeepModel`) are being deprecated.
* RNG behavior change for all `tf.keras.initializers` classes. Any
class constructed with a fixed seed will no longer generate the same
value when invoked multiple times. Instead, it will return different
values drawn from a deterministic sequence (see the sketch at the end
of this section). This change aligns the initialization behavior
between v1 and v2.
* `tf.lite`:
* Renamed fields in the `SignatureDef` table in the schema to maximize
parity with TF SavedModel's Signature concept.
* Deprecate Makefile builds. Makefile users need to migrate their builds
to CMake or Bazel. Please refer to the
[Build TensorFlow Lite with CMake](https://www.tensorflow.org/lite/guide/build_cmake)
and
[Build TensorFlow Lite for ARM boards](https://www.tensorflow.org/lite/guide/build_arm)
guides for the migration.
* Deprecate `tflite::OpResolver::GetDelegates`. The list returned by
TfLite's `BuiltinOpResolver::GetDelegates` is now always empty. Instead,
use the new method `tflite::OpResolver::GetDelegateCreators`, which
enables lazy initialization of TfLite delegate instances.
* TF Core:
* `tf.Graph.get_name_scope()` now always returns a string, as documented.
Previously, when called within `name_scope("")` or `name_scope(None)`
contexts, it returned `None`; now it returns the empty string.
* `te
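
# ---------------------------------------------------------------------------
# Illustrative sketches for the `tf.keras` breaking changes listed above.
# These snippets are not part of the upstream release notes; class and
# variable names are hypothetical.
import tensorflow as tf

# 1) Restoring the pre-2.7 upranking of `(batch_size,)` inputs in a
#    subclassed model, assuming data arrives as a plain `(x, y)` tuple.
class UprankingModel(tf.keras.Model):
    def train_step(self, data):
        x, y = data
        if x.shape.rank == 1:                   # uprank scalar inputs manually
            x = tf.expand_dims(x, axis=-1)
        return super().train_step((x, y))

# 2) Replacing the removed YAML serialization with a JSON round trip
#    (or serialize to H5 / SavedModel instead).
seq = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
restored = tf.keras.models.model_from_json(seq.to_json())

# 3) New initializer RNG behavior: with a fixed seed, repeated calls yield a
#    deterministic sequence of *different* values rather than the same value
#    every time; re-running the snippet reproduces the same pair of tensors.
init = tf.keras.initializers.GlorotUniform(seed=42)
a = init(shape=(2, 2))
b = init(shape=(2, 2))  # generally differs from `a` under the new behavior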
Update tensorflow from 2.1.0 to 2.9.1.
Changelog
### 2.9.1 ``` Add an upper bound for `protobuf` in `setup.py` since `protobuf` after version 3.20 is currently incompatible with TensorFlow. See https://github.com/tensorflow/tensorflow/issues/53234, https://github.com/protocolbuffers/protobuf/issues/9954 and https://github.com/tensorflow/tensorflow/issues/56077. ``` ### 2.9.0 ``` Breaking Changes * Due to security issues in TF 2.8, all boosted trees code has now been removed (after being deprecated in TF 2.8). Users should switch to [TensorFlow Decision Forests](https://github.com/tensorflow/decision-forests). * Build, Compilation and Packaging * TensorFlow is now compiled with `_GLIBCXX_USE_CXX11_ABI=1`. Downstream projects that encounter `std::__cxx11` or `[abi:cxx11]` linker errors will need to adopt this compiler option. See [the GNU C++ Library docs on Dual ABI](https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html). * TensorFlow Python wheels now specifically conform to [manylinux2014](https://peps.python.org/pep-0599/), an upgrade from manylinux2010. The minimum Pip version supporting manylinux2014 is Pip 19.3 (see [pypa/manylinux](https://github.com/pypa/manylinux). This change may affect you if you have been using TensorFlow on a very old platform equivalent to CentOS 6, as manylinux2014 targets CentOS 7 as a compatibility base. Note that TensorFlow does not officially support either platform. * Discussion for these changes can be found on SIG Build's [TensorFlow Community Forum thread](https://discuss.tensorflow.org/t/tensorflow-linux-wheels-are-being-upgraded-to-manylinux2014/8339) * The `tf.keras.mixed_precision.experimental` API has been removed. The non-experimental symbols under `tf.keras.mixed_precision` have been available since TensorFlow 2.4 and should be used instead. * The non-experimental API has some minor differences from the experimental API. In most cases, you only need to make three minor changes: * Remove the word "experimental" from `tf.keras.mixed_precision` symbols. E.g., replace `tf.keras.mixed_precision.experimental.global_policy` with `tf.keras.mixed_precision.global_policy`. * Replace `tf.keras.mixed_precision.experimental.set_policy` with `tf.keras.mixed_precision.set_global_policy`. The experimental symbol `set_policy` was renamed to `set_global_policy` in the non-experimental API. * Replace `LossScaleOptimizer(opt, "dynamic")` with `LossScaleOptimizer(opt)`. If you pass anything other than `"dynamic"` to the second argument, see (1) of the next section. * In the following rare cases, you need to make more changes when switching to the non-experimental API: * If you passed anything other than `"dynamic"` to the `loss_scale` argument (the second argument) of `LossScaleOptimizer`: * The LossScaleOptimizer constructor takes in different arguments. See the [TF 2.7 documentation of tf.keras.mixed_precision.experimental.LossScaleOptimizer](https://www.tensorflow.org/versions/r2.7/api_docs/python/tf/keras/mixed_precision/experimental/LossScaleOptimizer) for details on the differences, which has examples on how to convert to the non-experimental LossScaleOptimizer. * If you passed a value to the `loss_scale` argument (the second argument) of `Policy`: * The experimental version of `Policy` optionally took in a `tf.compat.v1.mixed_precision.LossScale` in the constructor, which defaulted to a dynamic loss scale for the `"mixed_float16"` policy and no loss scale for other policies. In `Model.compile`, if the model's policy had a loss scale, the optimizer would be wrapped with a `LossScaleOptimizer`. 
With the non-experimental `Policy`, there is no loss scale associated with the `Policy`, and `Model.compile` wraps the optimizer with a `LossScaleOptimizer` if and only if the policy is a `"mixed_float16"` policy. If you previously passed a `LossScale` to the experimental `Policy`, consider just removing it, as the default loss scaling behavior is usually what you want. If you really want to customize the loss scaling behavior, you can wrap your optimizer with a `LossScaleOptimizer` before passing it to `Model.compile`. * If you use the very rarely-used function `tf.keras.mixed_precision.experimental.get_layer_policy`: * Replace `tf.keras.mixed_precision.experimental.get_layer_policy(layer)` with `layer.dtype_policy`. * `tf.mixed_precision.experimental.LossScale` and its subclasses have been removed from the TF2 namespace. This symbols were very rarely used and were only useful in TF2 for use in the now-removed `tf.keras.mixed_precision.experimental` API. The symbols are still available under `tf.compat.v1.mixed_precision`. * The `experimental_relax_shapes` heuristic for `tf.function` has been deprecated and replaced with `reduce_retracing` which encompasses broader heuristics to reduce the number of retraces (see below) Major Features and Improvements * `tf.keras`: * Added `tf.keras.applications.resnet_rs` models. This includes the `ResNetRS50`, `ResNetRS101`, `ResNetRS152`, `ResNetRS200`, `ResNetRS270`, `ResNetRS350` and `ResNetRS420` model architectures. The ResNetRS models are based on the architecture described in [Revisiting ResNets: Improved Training and Scaling Strategies](https://arxiv.org/pdf/2103.07579.pdf) * Added `tf.keras.optimizers.experimental.Optimizer`. The reworked optimizer gives more control over different phases of optimizer calls, and is easier to customize. We provide Adam, SGD, Adadelta, AdaGrad and RMSprop optimizers based on `tf.keras.optimizers.experimental.Optimizer`. Generally the new optimizers work in the same way as the old ones, but support new constructor arguments. In the future, the symbols `tf.keras.optimizers.Optimizer`/`Adam`/etc will point to the new optimizers, and the previous generation of optimizers will be moved to `tf.keras.optimizers.legacy.Optimizer`/`Adam`/etc. * Added L2 unit normalization layer `tf.keras.layers.UnitNormalization`. * Added `tf.keras.regularizers.OrthogonalRegularizer`, a new regularizer that encourages orthogonality between the rows (or columns) or a weight matrix. * Added `tf.keras.layers.RandomBrightness` layer for image preprocessing. * Added APIs for switching between interactive logging and absl logging. By default, Keras always writes the logs to stdout. However, this is not optimal in a non-interactive environment, where you don't have access to stdout, but can only view the logs. You can use `tf.keras.utils.disable_interactive_logging()` to write the logs to ABSL logging. You can also use `tf.keras.utils.enable_interactive_logging()` to change it back to stdout, or `tf.keras.utils.is_interactive_logging_enabled()` to check if interactive logging is enabled. * Changed default value for the `verbose` argument of `Model.evaluate()` and `Model.predict()` to `"auto"`, which defaults to `verbose=1` for most cases and defaults to `verbose=2` when used with `ParameterServerStrategy` or with interactive logging disabled. * Argument `jit_compile` in `Model.compile()` now applies to `Model.evaluate()` and `Model.predict()`. 
Setting `jit_compile=True` in `compile()` compiles the model's training, evaluation, and inference steps to [XLA](https://www.tensorflow.org/xla). Note that `jit_compile=True` may not necessarily work for all models. * Added DTensor-related Keras APIs under `tf.keras.dtensor` namespace. The APIs are still classified as experimental. You are welcome to try it out. Please check the tutorial and guide on https://www.tensorflow.org/ for more details about DTensor. * `tf.lite`: * Added TFLite builtin op support for the following TF ops: * `tf.math.argmin`/`tf.math.argmax` for input data type `tf.bool` on CPU. * `tf.nn.gelu` op for output data type `tf.float32` and quantization on CPU. * Add nominal support for unsigned 16-bit integer tensor types. Note that very few TFLite kernels support this type natively, so its use in mobile ML authoring is generally discouraged. * Add support for unsigned 16-bit integer tensor types in cast op. * Experimental support for lowering `list_ops.tensor_list_set_item` with `DynamicUpdateSlice`. * Enabled a new MLIR-based dynamic range quantization backend by default * The new backend is used for post-training int8 dynamic range quantization and post-training float16 quantization. * Set `experimental_new_dynamic_range_quantizer` in tf.lite.TFLiteConverter to False to disable this change * Native TF Lite variables are now enabled during conversion by default on all v2 TfLiteConverter entry points. `experimental_enable_resource_variables` on tf.lite.TFLiteConverter is now True by default and will be removed in the future. * `tf.function`: * Custom classes used as arguments for `tf.function` can now specify rules regarding when retracing needs to occur by implementing the Tracing Protocol available through `tf.types.experimental.SupportsTracingProtocol`. * `TypeSpec` classes (as associated with `ExtensionTypes`) also implement the Tracing Protocol which can be overridden if necessary. * The newly introduced `reduce_retracing` option also uses the Tracing Protocol to proactively generate generalized traces similar to `experimental_relax_shapes` (which has now been deprecated). * Unified eager and `tf.function` execution: * Eager mode can now execute each op as a `tf.function`, allowing for more consistent feature support in future releases. * It is available for immediate use. * See the `TF_RUN_EAGER_OP_AS_FUNCTION` environment variable in [eager context](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/eager/context.py). * Eager performance should be similar with this feature enabled. * A roughly 5us per-op overhead may be observed when running many small functions. * Note a [known issue](https://github.com/tensorflow/tensorflow/issues/55414) with GPU performance. * The behavior of `tf.function` itself is unaffected. * Note: This feature will be enabled by default in an upcoming version of TensorFlow. * `tf.experimental.dtensor`: Added DTensor, an extension to TensorFlow for large-scale modeling with minimal changes to user code. You are welcome to try it out, though be aware that the DTensor API is experimental and up-to backward-incompatible changes. DTensor and Keras integration is published under `tf.keras.dtensor` in this release (refer to the `tf.keras` entry). The tutoral and guide for DTensor will be published on https://www.tensorflow.org/. Please stay tuned. 
* [oneDNN CPU performance optimizations](https://github.com/tensorflow/community/blob/master/rfcs/20210930-enable-onednn-ops.md) are available in Linux x86, Windows x86, and Linux aarch64 packages. * **Linux x86 packages:** * oneDNN optimizations are *enabled by default* on CPUs with neural-network-focused hardware features such as AVX512_VNNI, AVX512_BF16, AMX, etc. ([Intel Cascade Lake](https://www.intel.com/content/www/us/en/products/platforms/details/cascade-lake.html) and newer CPUs.) * [Example performance speedups.](https://medium.com/intel-analytics-software/leverage-intel-deep-learning-optimizations-in-tensorflow-129faa80ee07) * For older CPUs, oneDNN optimizations are disabled by default. * **Windows x86 package:** oneDNN optimizations are disabled by default. * **Linux aach64 (`--config=mkl_aarch64`) package:** * Experimental oneDNN optimizations are disabled by default. * If you experience issues with oneDNN optimizations on, we recommend turning them off. * To explicitly enable or disable oneDNN optimizations, set the environment variable `TF_ENABLE_ONEDNN_OPTS` to `1` (enable) or `0` (disable) before running TensorFlow. (The variable is checked during `import tensorflow`.) To fall back to default settings, unset the environment variable. * These optimizations can yield slightly different numerical results from when they are off due to floating-point round-off errors from different computation approaches and orders. * To verify that the optimizations are on, look for a message with *"oneDNN custom operations are on"* in the log. If the exact phrase is not there, it means they are off. Bug Fixes and Other Changes * `tf.data`: * Fixed bug in `tf.data.experimental.parse_example_dataset` when `tf.io.RaggedFeatures` would specify `value_key` but no `partitions`. Before the fix, setting `value_key` but no `partitions` would result in the feature key being replaced by the value key, e.g. `{'value_key': <RaggedTensor>}` instead of `{'key': <RaggedTensor>}`. Now the correct feature key will be used. This aligns the behavior of `tf.data.experimental.parse_example_dataset` to match the behavior of `tf.io.parse_example`. * Added a new field, `filter_parallelization`, to `tf.data.experimental.OptimizationOptions`. If it is set to `True`, tf.data will run `Filter` transformation with multiple threads. Its default value is `False` if not specified. * `tf.keras`: * Fixed bug in optimizers that prevented them from properly checkpointing slot variables when they are `ShardedVariable`s (used for training with `tf.distribute.experimental.ParameterServerStrategy`). * `tf.random`: * Added `tf.random.experimental.index_shuffle`, for shuffling a sequence without materializing the sequence in memory. * `tf.RaggedTensor`: * Introduced `tf.experimental.RowPartition`, which encodes how one dimension in a RaggedTensor relates to another, into the public API. * Introduced `tf.experimental.DynamicRaggedShape`, which represents the shape of a RaggedTensor. 
Security * Fixes a code injection in `saved_model_cli` ([CVE-2022-29216](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29216)) * Fixes a missing validation which causes `TensorSummaryV2` to crash ([CVE-2022-29193](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29193)) * Fixes a missing validation which crashes `QuantizeAndDequantizeV4Grad` ([CVE-2022-29192](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29192)) * Fixes a missing validation which causes denial of service via `DeleteSessionTensor` ([CVE-2022-29194](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29194)) * Fixes a missing validation which causes denial of service via `GetSessionTensor` ([CVE-2022-29191](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29191)) * Fixes a missing validation which causes denial of service via `StagePeek` ([CVE-2022-29195](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29195)) * Fixes a missing validation which causes denial of service via `UnsortedSegmentJoin` ([CVE-2022-29197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29197)) * Fixes a missing validation which causes denial of service via `LoadAndRemapMatrix` ([CVE-2022-29199](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29199)) * Fixes a missing validation which causes denial of service via `SparseTensorToCSRSparseMatrix` ([CVE-2022-29198](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29198)) * Fixes a missing validation which causes denial of service via `LSTMBlockCell` ([CVE-2022-29200](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29200)) * Fixes a missing validation which causes denial of service via `Conv3DBackpropFilterV2` ([CVE-2022-29196](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29196)) * Fixes a `CHECK` failure in depthwise ops via overflows ([CVE-2021-41197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41197)) * Fixes issues arising from undefined behavior stemming from users supplying invalid resource handles ([CVE-2022-29207](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29207)) * Fixes a segfault due to missing support for quantized types ([CVE-2022-29205](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29205)) * Fixes a missing validation which results in undefined behavior in `SparseTensorDenseAdd` ([CVE-2022-29206](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29206)) * Fixes a missing validation which results in undefined behavior in `QuantizedConv2D` ([CVE-2022-29201](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29201)) * Fixes an integer overflow in `SpaceToBatchND` ([CVE-2022-29203](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29203)) * Fixes a segfault and OOB write due to incomplete validation in `EditDistance` ([CVE-2022-29208](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29208)) * Fixes a missing validation which causes denial of service via `Conv3DBackpropFilterV2` ([CVE-2022-29204](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29204)) * Fixes a denial of service in `tf.ragged.constant` due to lack of validation ([CVE-2022-29202](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29202)) * Fixes a segfault when `tf.histogram_fixed_width` is called with NaN values ([CVE-2022-29211](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29211)) * Fixes a core dump when loading TFLite models with quantization ([CVE-2022-29212](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29212)) * Fixes crashes stemming from 
incomplete validation in signal ops ([CVE-2022-29213](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29213)) * Fixes a type confusion leading to `CHECK`-failure based denial of service ([CVE-2022-29209](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29209)) * Fixes a heap buffer overflow due to incorrect hash function ([CVE-2022-29210](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29210)) * Updates `curl` to `7.83.1` to handle ([CVE-2022-22576](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-22576), ([CVE-2022-27774](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27774), ([CVE-2022-27775](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27775), ([CVE-2022-27776](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27776), ([CVE-2022-27778](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27778), ([CVE-2022-27779](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27779), ([CVE-2022-27780](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27780), ([CVE-2022-27781](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27781), ([CVE-2022-27782](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27782) and ([CVE-2022-30115](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-30115) * Updates `zlib` to `1.2.12` after `1.2.11` was pulled due to [security issue](https://www.openwall.com/lists/oss-security/2022/03/28/1) Thanks to our Contributors This release contains contributions from many people at Google, as well as: Aaron Debattista, Abel Soares Siqueira, Abhishek Varma, Andrei Ivanov, andreii, Andrew Goodbody, apeltop, Arnab Dutta, Ashiq Imran, Banikumar Maiti (Intel Aipg), Ben Greiner, Benjamin Peterson, bhack, Christopher Bate, chunduriv, Copybara-Service, DEKHTIARJonathan, Deven Desai, Duncan Riach, Eric Kunze, Everton Constantino, Faruk D, Fredrik Knutsson, gadagashwini, Gauri1 Deshpande, gtiHibGele, Guozhong Zhuang, Islem-Esi, Ivanov Viktor, Jason Furmanek, Jason Zaman, Jim, Jinzhe Zeng, John Laxson, Jonas Eschle, Jonas Eschle 'Mayou36, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, KaurkerDevourer, Koan-Sin Tan, kushanam, Laramie Leavitt, Li-Wen Chang, lipracer, Louis Sugy, Lu Teng, Mahmoud Abuzaina, Malcolm Slaney, Malik Shahzad Muzaffar, Marek Šuppa, Matt Conley, Michael Melesse, Milos Puzovic, mohantym, Nathan John Sircombe, Nathan Luehr, Nilesh Agarwalla, Patrice Vignola, peterjc123, Philip Turner, Rajeshwar Reddy T, Robert Kalmar, Rodrigo Formigone, Rohit Santhanam, rui, Sachin Muradi, Saduf2019, sandip, Scott Leishman, Serge Panev, Shi,Guangyong, Srinivasan Narayanamoorthy, stanley, Steven I Reeves, stevenireeves, sushreebarsa, Tamas Bela Feher, Tao He, Thomas Schmeyer, Tiago Almeida, Trevor Morris, Uday Bondhugula, Uwe L. Korn, Varghese, Jojimon, Vishnuvardhan Janapati, William Muir, William Raveane, xutianming, Yasuhiro Matsumoto, Yimei Sun, Yong Tang, Yu Feng, Yuriy Chernyshov, zhaozheng09 ``` ### 2.8.2 ``` Add an upper bound for `protobuf` in `setup.py` since `protobuf` after version 3.20 is currently incompatible with TensorFlow. See https://github.com/tensorflow/tensorflow/issues/53234, https://github.com/protocolbuffers/protobuf/issues/9954 and https://github.com/tensorflow/tensorflow/issues/56077. 
``` ### 2.8.1 ``` This releases introduces several vulnerability fixes: * Fixes a code injection in `saved_model_cli` ([CVE-2022-29216](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29216)) * Fixes a missing validation which causes `TensorSummaryV2` to crash ([CVE-2022-29193](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29193)) * Fixes a missing validation which crashes `QuantizeAndDequantizeV4Grad` ([CVE-2022-29192](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29192)) * Fixes a missing validation which causes denial of service via `DeleteSessionTensor` ([CVE-2022-29194](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29194)) * Fixes a missing validation which causes denial of service via `GetSessionTensor` ([CVE-2022-29191](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29191)) * Fixes a missing validation which causes denial of service via `StagePeek` ([CVE-2022-29195](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29195)) * Fixes a missing validation which causes denial of service via `UnsortedSegmentJoin` ([CVE-2022-29197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29197)) * Fixes a missing validation which causes denial of service via `LoadAndRemapMatrix` ([CVE-2022-29199](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29199)) * Fixes a missing validation which causes denial of service via `SparseTensorToCSRSparseMatrix` ([CVE-2022-29198](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29198)) * Fixes a missing validation which causes denial of service via `LSTMBlockCell` ([CVE-2022-29200](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29200)) * Fixes a missing validation which causes denial of service via `Conv3DBackpropFilterV2` ([CVE-2022-29196](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29196)) * Fixes a `CHECK` failure in depthwise ops via overflows ([CVE-2021-41197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41197)) * Fixes issues arising from undefined behavior stemming from users supplying invalid resource handles ([CVE-2022-29207](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29207)) * Fixes a segfault due to missing support for quantized types ([CVE-2022-29205](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29205)) * Fixes a missing validation which results in undefined behavior in `SparseTensorDenseAdd` ([CVE-2022-29206](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29206)) * Fixes a missing validation which results in undefined behavior in `QuantizedConv2D` ([CVE-2022-29201](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29201)) * Fixes an integer overflow in `SpaceToBatchND` ([CVE-2022-29203](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29203)) * Fixes a segfault and OOB write due to incomplete validation in `EditDistance` ([CVE-2022-29208](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29208)) * Fixes a missing validation which causes denial of service via `Conv3DBackpropFilterV2` ([CVE-2022-29204](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29204)) * Fixes a denial of service in `tf.ragged.constant` due to lack of validation ([CVE-2022-29202](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29202)) * Fixes a segfault when `tf.histogram_fixed_width` is called with NaN values ([CVE-2022-29211](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29211)) * Fixes a core dump when loading TFLite models with quantization 
([CVE-2022-29212](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29212)) * Fixes crashes stemming from incomplete validation in signal ops ([CVE-2022-29213](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29213)) * Fixes a type confusion leading to `CHECK`-failure based denial of service ([CVE-2022-29209](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29209)) * Fixes a heap buffer overflow due to incorrect hash function ([CVE-2022-29210](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29210)) * Updates `curl` to `7.83.1` to handle ([CVE-2022-22576](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-22576), ([CVE-2022-27774](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27774), ([CVE-2022-27775](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27775), ([CVE-2022-27776](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27776), ([CVE-2022-27778](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27778), ([CVE-2022-27779](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27779), ([CVE-2022-27780](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27780), ([CVE-2022-27781](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27781), ([CVE-2022-27782](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27782) and ([CVE-2022-30115](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-30115) * Updates `zlib` to `1.2.12` after `1.2.11` was pulled due to [security issue](https://www.openwall.com/lists/oss-security/2022/03/28/1) ``` ### 2.8.0 ``` Major Features and Improvements * `tf.lite`: * Added TFLite builtin op support for the following TF ops: * `tf.raw_ops.Bucketize` op on CPU. * `tf.where` op for data types `tf.int32`/`tf.uint32`/`tf.int8`/`tf.uint8`/`tf.int64`. * `tf.random.normal` op for output data type `tf.float32` on CPU. * `tf.random.uniform` op for output data type `tf.float32` on CPU. * `tf.random.categorical` op for output data type `tf.int64` on CPU. * `tensorflow.experimental.tensorrt`: * `conversion_params` is now deprecated inside `TrtGraphConverterV2` in favor of direct arguments: `max_workspace_size_bytes`, `precision_mode`, `minimum_segment_size`, `maximum_cached_engines`, `use_calibration` and `allow_build_at_runtime`. * Added a new parameter called `save_gpu_specific_engines` to the `.save()` function inside `TrtGraphConverterV2`. When `False`, the `.save()` function won't save any TRT engines that have been built. When `True` (default), the original behavior is preserved. * `TrtGraphConverterV2` provides a new API called `.summary()` which outputs a summary of the inference converted by TF-TRT. It namely shows each `TRTEngineOp` with their input(s)' and output(s)' shape and dtype. A detailed version of the summary is available which prints additionally all the TensorFlow OPs included in each of the `TRTEngineOp`s. * `tf.tpu.experimental.embedding`: * `tf.tpu.experimental.embedding.FeatureConfig` now takes an additional argument `output_shape` which can specify the shape of the output activation for the feature. * `tf.tpu.experimental.embedding.TPUEmbedding` now has the same behavior as `tf.tpu.experimental.embedding.serving_embedding_lookup` which can take arbitrary rank of dense and sparse tensor. For ragged tensor, though the input tensor remains to be rank 2, the activations now can be rank 2 or above by specifying the output shape in the feature config or via the build method. 
* Add [`tf.config.experimental.enable_op_determinism`](https://www.tensorflow.org/api_docs/python/tf/config/experimental/enable_op_determinism), which makes TensorFlow ops run deterministically at the cost of performance. Replaces the `TF_DETERMINISTIC_OPS` environmental variable, which is now deprecated. The "Bug Fixes and Other Changes" section lists more determinism-related changes. * (Since TF 2.7) Add [PluggableDevice](https://blog.tensorflow.org/2021/06/pluggabledevice-device-plugins-for-TensorFlow.html) support to [TensorFlow Profiler](https://github.com/tensorflow/community/blob/master/rfcs/20210513-pluggable-profiler-for-tensorflow.md). Bug Fixes and Other Changes * `tf.data`: * Fixed a bug where setting `options.deterministic = False` would only modify one transformation to run non-deterministically, leaving other transformations deterministic. The option will now apply the same across all transformations. * The optimization `parallel_batch` now becomes default if not disabled by users, which will parallelize copying of batch elements. * Added the ability for `TensorSliceDataset` to identify and handle inputs that are files. This enables creating hermetic SavedModels when using datasets created from files. * `tf.lite`: * Adds GPU Delegation support for serialization to Java API. This boosts initialization time up to 90% when OpenCL is available. * Deprecated `Interpreter::SetNumThreads`, in favor of `InterpreterBuilder::SetNumThreads`. * `tf.keras`: * Adds `tf.compat.v1.keras.utils.get_or_create_layer` to aid migration to TF2 by enabling tracking of nested keras models created in TF1-style, when used with the `tf.compat.v1.keras.utils.track_tf1_style_variables` decorator. * Added a `tf.keras.layers.experimental.preprocessing.HashedCrossing` layer which applies the hashing trick to the concatenation of crossed scalar inputs. This provides a stateless way to try adding feature crosses of integer or string data to a model. * Removed `keras.layers.experimental.preprocessing.CategoryCrossing`. Users should migrate to the `HashedCrossing` layer or use `tf.sparse.cross`/`tf.ragged.cross` directly. * Added additional `standardize` and `split` modes to `TextVectorization`: * `standardize="lower"` will lowercase inputs. * `standardize="string_punctuation"` will remove all punctuation. * `split="character"` will split on every unicode character. * Added an `output_mode` argument to the `Discretization` and `Hashing` layers with the same semantics as other preprocessing layers. All categorical preprocessing layers now support `output_mode`. * All preprocessing layer output will follow the compute dtype of a `tf.keras.mixed_precision.Policy`, unless constructed with `output_mode="int"` in which case output will be `tf.int64`. The output type of any preprocessing layer can be controlled individually by passing a `dtype` argument to the layer. * `tf.random.Generator` for keras initializers and all RNG code. * Added 3 new APIs for enable/disable/check the usage of `tf.random.Generator` in keras backend, which will be the new backend for all the RNG in Keras. We plan to switch on the new code path by default in tf 2.8, and the behavior change will likely to cause some breakage on user side (eg if the test is checking against some golden number). These 3 APIs will allow user to disable and switch back to legacy behavior if they prefer. In future (eg TF 2.10), we expect to totally remove the legacy code path (stateful random Ops), and these 3 APIs will be removed as well. 
* `tf.keras.callbacks.experimental.BackupAndRestore` is now available as `tf.keras.callbacks.BackupAndRestore`. The experimental endpoint is deprecated and will be removed in a future release. * `tf.keras.experimental.SidecarEvaluator` is now available as `tf.keras.utils.SidecarEvaluator`. The experimental endpoint is deprecated and will be removed in a future release. * Metrics update and collection logic in default `Model.train_step()` is now customizable via overriding `Model.compute_metrics()`. * Losses computation logic in default `Model.train_step()` is now customizable via overriding `Model.compute_loss()`. * `jit_compile` added to `Model.compile()` on an opt-in basis to compile the model's training step with [XLA](https://www.tensorflow.org/xla). Note that `jit_compile=True` may not necessarily work for all models. * Deterministic Op Functionality: * Fix regression in deterministic selection of deterministic cuDNN convolution algorithms, a regression that was introduced in v2.5. Note that nondeterministic out-of-memory events while selecting algorithms could still lead to nondeterminism, although this is very unlikely. This additional, unlikely source will be eliminated in a later version. * Add deterministic GPU implementations of: * `tf.function(jit_compile=True)`'s that use `Scatter`. * (since v2.7) Stateful ops used in `tf.data.Dataset` * (since v2.7) `tf.convert_to_tensor` when fed with (sparse) `tf.IndexedSlices` (because it uses `tf.math.unsorted_segment_sum`) * (since v2.7) `tf.gather` backprop (because `tf.convert_to_tensor` reduces `tf.gather`'s (sparse) `tf.IndexedSlices` gradients into its dense `params` input) * (since v2.7) `tf.math.segment_mean` * (since v2.7) `tf.math.segment_prod` * (since v2.7) `tf.math.segment_sum` * (since v2.7) `tf.math.unsorted_segment_mean` * (since v2.7) `tf.math.unsorted_segment_prod` * (since v2.7) `tf.math.unsorted_segment_sum` * (since v2.7) `tf.math.unsorted_segment_sqrt` * (since v2.7) `tf.nn.ctc_loss` (resolved, possibly in prior release, and confirmed with tests) * (since v2.7)`tf.nn.sparse_softmax_crossentropy_with_logits` * (since v2.7) Run `tf.scatter_nd` and other related scatter functions, such as `tf.tensor_scatter_nd_update`, on CPU (with significant performance penalty). * Add determinism-unimplemented exception-throwing to the following ops. When op-determinism is expected (i.e. after `tf.config.experimental.enable_op_determinism` has been called), an attempt to use the specified paths through the following ops on a GPU will cause `tf.errors.UnimplementedError` (with an understandable message), unless otherwise specified, to be thrown. * `FakeQuantWithMinMaxVarsGradient` and `FakeQuantWithMinMaxVarsPerChannelGradient` * (since v2.7) `tf.compat.v1.get_seed` if the global random seed has not yet been set (via `tf.random.set_seed`). Throws `RuntimeError` from Python or `InvalidArgument` from C++ * (since v2.7) `tf.compat.v1.nn.fused_batch_norm` backprop to `offset` when `is_training=False` * (since v2.7) `tf.image.adjust_contrast` forward * (since v2.7) `tf.image.resize` with `method=ResizeMethod.NEAREST` backprop * (since v2.7) `tf.linalg.svd` * (since v2.7) `tf.math.bincount` * (since v2.7) `tf.nn.depthwise_conv2d` backprop to `filter` when not using cuDNN convolution * (since v2.7) `tf.nn.dilation2d` gradient * (since v2.7) `tf.nn.max_pool_with_argmax` gradient * (since v2.7) `tf.raw_ops.DebugNumericSummary` and `tf.raw_ops.DebugNumericSummaryV2` * (since v2.7) `tf.timestamp`. 
Throws `FailedPrecondition` * (since v2.7) `tf.Variable.scatter_add` (and other scatter methods, both on ref and resource variables) * (since v2.7) The random-number-generating ops in the `tf.random` module when the global random seed has not yet been set (via `tf.random.set_seed`). Throws `RuntimeError` from Python or `InvalidArgument` from C++ * TensorFlow-oneDNN no longer supports [explicit use of oneDNN blocked tensor format](https://github.com/tensorflow/tensorflow/pull/53288), e.g., setting the environment variable `TF_ENABLE_MKL_NATIVE_FORMAT` will not have any effect. * TensorFlow has been validated on Windows Subsystem for Linux 2 (aka WSL 2) for both GPUs and CPUs. * Due to security issues (see section below), all boosted trees code has been deprecated. Users should switch to [TensorFlow Decision Forests](https://github.com/tensorflow/decision-forests). TF's boosted trees code will be eliminated before the branch cut for TF 2.9 and will no longer be present since that release. Security * Fixes a floating point division by 0 when executing convolution operators ([CVE-2022-21725](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21725)) * Fixes a heap OOB read in shape inference for `ReverseSequence` ([CVE-2022-21728](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21728)) * Fixes a heap OOB access in `Dequantize` ([CVE-2022-21726](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21726)) * Fixes an integer overflow in shape inference for `Dequantize` ([CVE-2022-21727](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21727)) * Fixes a heap OOB access in `FractionalAvgPoolGrad` ([CVE-2022-21730](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21730)) * Fixes an overflow and divide by zero in `UnravelIndex` ([CVE-2022-21729](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21729)) * Fixes a type confusion in shape inference for `ConcatV2` ([CVE-2022-21731](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21731)) * Fixes an OOM in `ThreadPoolHandle` ([CVE-2022-21732](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21732)) * Fixes an OOM due to integer overflow in `StringNGrams` ([CVE-2022-21733](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21733)) * Fixes more issues caused by incomplete validation in boosted trees code ([CVE-2021-41208](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41208)) * Fixes an integer overflows in most sparse component-wise ops ([CVE-2022-23567](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23567)) * Fixes an integer overflows in `AddManySparseToTensorsMap` ([CVE-2022-23568](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23568)) * Fixes a number of `CHECK`-failures in `MapStage` ([CVE-2022-21734](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21734)) * Fixes a division by zero in `FractionalMaxPool` ([CVE-2022-21735](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21735)) * Fixes a number of `CHECK`-fails when building invalid/overflowing tensor shapes ([CVE-2022-23569](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23569)) * Fixes an undefined behavior in `SparseTensorSliceDataset` ([CVE-2022-21736](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21736)) * Fixes an assertion failure based denial of service via faulty bin count operations ([CVE-2022-21737](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21737)) * Fixes a reference binding to null pointer in `QuantizedMaxPool` 
([CVE-2022-21739](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21739)) * Fixes an integer overflow leading to crash in `SparseCountSparseOutput` ([CVE-2022-21738](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21738)) * Fixes a heap overflow in `SparseCountSparseOutput` ([CVE-2022-21740](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21740)) * Fixes an FPE in `BiasAndClamp` in TFLite ([CVE-2022-23557](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23557)) * Fixes an FPE in depthwise convolutions in TFLite ([CVE-2022-21741](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21741)) * Fixes an integer overflow in TFLite array creation ([CVE-2022-23558](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23558)) * Fixes an integer overflow in TFLite ([CVE-2022-23559](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23559)) * Fixes a dangerous OOB write in TFLite ([CVE-2022-23561](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23561)) * Fixes a vulnerability leading to read and write outside of bounds in TFLite ([CVE-2022-23560](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23560)) * Fixes a set of vulnerabilities caused by using insecure temporary files ([CVE-2022-23563](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23563)) * Fixes an integer overflow in Range resulting in undefined behavior and OOM ([CVE-2022-23562](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23562)) * Fixes a vulnerability where missing validation causes `tf.sparse.split` to crash when `axis` is a tuple ([CVE-2021-41206](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41206)) * Fixes a `CHECK`-fail when decoding resource handles from proto ([CVE-2022-23564](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23564)) * Fixes a `CHECK`-fail with repeated `AttrDef` ([CVE-2022-23565](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23565)) * Fixes a heap OOB write in Grappler ([CVE-2022-23566](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23566)) * Fixes a `CHECK`-fail when decoding invalid tensors from proto ([CVE-2022-23571](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23571)) * Fixes a null-dereference when specializing tensor type ([CVE-2022-23570](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23570)) * Fixes a crash when type cannot be specialized ([CVE-2022-23572](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23572)) * Fixes a heap OOB read/write in `SpecializeType` ([CVE-2022-23574](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23574)) * Fixes an unitialized variable access in `AssignOp` ([CVE-2022-23573](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23573)) * Fixes an integer overflow in `OpLevelCostEstimator::CalculateTensorSize` ([CVE-2022-23575](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23575)) * Fixes an integer overflow in `OpLevelCostEstimator::CalculateOutputSize` ([CVE-2022-23576](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23576)) * Fixes a null dereference in `GetInitOp` ([CVE-2022-23577](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23577)) * Fixes a memory leak when a graph node is invalid ([CVE-2022-23578](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23578)) * Fixes an abort caused by allocating a vector that is too large ([CVE-2022-23580](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23580)) * Fixes multiple `CHECK`-failures during Grappler's `IsSimplifiableReshape` 
([CVE-2022-23581](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23581)) * Fixes multiple `CHECK`-failures during Grappler's `SafeToRemoveIdentity` ([CVE-2022-23579](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23579)) * Fixes multiple `CHECK`-failures in `TensorByteSize` ([CVE-2022-23582](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23582)) * Fixes multiple `CHECK`-failures in binary ops due to type confusion ([CVE-2022-23583](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23583)) * Fixes a use after free in `DecodePng` kernel ([CVE-2022-23584](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23584)) * Fixes a memory leak in decoding PNG images ([CVE-2022-23585](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23585)) * Fixes multiple `CHECK`-fails in `function.cc` ([CVE-2022-23586](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23586)) * Fixes multiple `CHECK`-fails due to attempting to build a reference tensor ([CVE-2022-23588](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23588)) * Fixes an integer overflow in Grappler cost estimation of crop and resize operation ([CVE-2022-23587](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23587)) * Fixes a null pointer dereference in Grappler's `IsConstant` ([CVE-2022-23589](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23589)) * Fixes a `CHECK` failure in constant folding ([CVE-2021-41197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41197)) * Fixes a stack overflow due to self-recursive function in `GraphDef` ([CVE-2022-23591](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23591)) * Fixes a heap OOB access in `RunForwardTypeInference` ([CVE-2022-23592](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23592)) * Fixes a crash due to erroneous `StatusOr` ([CVE-2022-23590](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23590)) * Fixes multiple crashes and heap OOB accesses in TFG dialect (MLIR) ([CVE-2022-23594](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23594)) * Fixes a segfault in `simplifyBroadcast` (MLIR) ([CVE-2022-23593](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23593)) * Fixes a null pointer dereference in `BuildXlaCompilationCache` (XLA) ([CVE-2022-23595](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23595)) * Updates `icu` to `69.1` to handle [CVE-2020-10531](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10531) Thanks to our Contributors This release contains contributions from many people at Google, as well as: 8bitmp3, Adam Lanicek, ag.ramesh, alesapin, Andrew Goodbody, annasuheyla, Ariel Elkin, Arnab Dutta, Ben Barsdell, bhack, cfRod, Chengji Yao, Christopher Bate, dan, Dan F-M, David Korczynski, DEKHTIARJonathan, dengzhiyuan, Deven Desai, Duncan Riach, Eli Osherovich, Ewout Ter Hoeven, ez2take, Faijul Amin, fo40225, Frederic Bastien, gadagashwini, Gauri1 Deshpande, Georgiy Manuilov, Guilherme De Lázari, Guozhong Zhuang, H1Gdev, homuler, Hongxu Jia, Jacky_Yin, jayfurmanek, jgehw, Jhalak Patel, Jinzhe Zeng, Johan Gunnarsson, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, Kevin Cheng, Koan-Sin Tan, Kruglov-Dmitry, Kun Lu, Lemo, Lequn Chen, long.chen, Louis Sugy, Mahmoud Abuzaina, Mao, Marius Brehler, Mark Harfouche, Martin Patz, Maxiwell S. 
Garcia, Meenakshi Venkataraman, Michael Melesse, Mrinal Tyagi, Måns Nilsson, Nathan John Sircombe, Nathan Luehr, Nilesh Agarwalla, Oktay Ozturk, Patrice Vignola, Pawel-Polyai, Rama Ketineni, Ramesh Sampath, Reza Rahimi, Rob Suderman, Robert Kalmar, Rohit Santhanam, Sachin Muradi, Saduf2019, Samuel Marks, Shi,Guangyong, Sidong-Wei, Srinivasan Narayanamoorthy, Srishti Srivastava, Steven I Reeves, stevenireeves, Supernovae, Tamas Bela Feher, Tao Xu, Thibaut Goetghebuer-Planchon, Thomas Schmeyer, tilakrayal, Valery Mironov, Victor Guo, Vignesh Kothapalli, Vishnuvardhan Janapati, wamuir, Wang,Quintin, William Muir, William Raveane, Yash Goel, Yimei Sun, Yong Tang, Yuduo Wu ``` ### 2.7.3 ``` Add an upper bound for `protobuf` in `setup.py` since `protobuf` after version 3.20 is currently incompatible with TensorFlow. See https://github.com/tensorflow/tensorflow/issues/53234, https://github.com/protocolbuffers/protobuf/issues/9954 and https://github.com/tensorflow/tensorflow/issues/56077. ``` ### 2.7.2 ``` This releases introduces several vulnerability fixes: * Fixes a code injection in `saved_model_cli` ([CVE-2022-29216](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29216)) * Fixes a missing validation which causes `TensorSummaryV2` to crash ([CVE-2022-29193](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29193)) * Fixes a missing validation which crashes `QuantizeAndDequantizeV4Grad` ([CVE-2022-29192](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29192)) * Fixes a missing validation which causes denial of service via `DeleteSessionTensor` ([CVE-2022-29194](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29194)) * Fixes a missing validation which causes denial of service via `GetSessionTensor` ([CVE-2022-29191](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29191)) * Fixes a missing validation which causes denial of service via `StagePeek` ([CVE-2022-29195](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29195)) * Fixes a missing validation which causes denial of service via `UnsortedSegmentJoin` ([CVE-2022-29197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29197)) * Fixes a missing validation which causes denial of service via `LoadAndRemapMatrix` ([CVE-2022-29199](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29199)) * Fixes a missing validation which causes denial of service via `SparseTensorToCSRSparseMatrix` ([CVE-2022-29198](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29198)) * Fixes a missing validation which causes denial of service via `LSTMBlockCell` ([CVE-2022-29200](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29200)) * Fixes a missing validation which causes denial of service via `Conv3DBackpropFilterV2` ([CVE-2022-29196](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29196)) * Fixes a `CHECK` failure in depthwise ops via overflows ([CVE-2021-41197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41197)) * Fixes issues arising from undefined behavior stemming from users supplying invalid resource handles ([CVE-2022-29207](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29207)) * Fixes a segfault due to missing support for quantized types ([CVE-2022-29205](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29205)) * Fixes a missing validation which results in undefined behavior in `SparseTensorDenseAdd` ([CVE-2022-29206](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29206)) * Fixes a missing validation which results in undefined behavior in 
`QuantizedConv2D` ([CVE-2022-29201](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29201)) * Fixes an integer overflow in `SpaceToBatchND` ([CVE-2022-29203](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29203)) * Fixes a segfault and OOB write due to incomplete validation in `EditDistance` ([CVE-2022-29208](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29208)) * Fixes a missing validation which causes denial of service via `Conv3DBackpropFilterV2` ([CVE-2022-29204](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29204)) * Fixes a denial of service in `tf.ragged.constant` due to lack of validation ([CVE-2022-29202](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29202)) * Fixes a segfault when `tf.histogram_fixed_width` is called with NaN values ([CVE-2022-29211](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29211)) * Fixes a core dump when loading TFLite models with quantization ([CVE-2022-29212](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29212)) * Fixes crashes stemming from incomplete validation in signal ops ([CVE-2022-29213](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29213)) * Fixes a type confusion leading to `CHECK`-failure based denial of service ([CVE-2022-29209](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29209)) * Updates `curl` to `7.83.1` to handle ([CVE-2022-22576](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-22576), ([CVE-2022-27774](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27774), ([CVE-2022-27775](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27775), ([CVE-2022-27776](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27776), ([CVE-2022-27778](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27778), ([CVE-2022-27779](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27779), ([CVE-2022-27780](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27780), ([CVE-2022-27781](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27781), ([CVE-2022-27782](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27782) and ([CVE-2022-30115](https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-30115) * Updates `zlib` to `1.2.12` after `1.2.11` was pulled due to [security issue](https://www.openwall.com/lists/oss-security/2022/03/28/1) ``` ### 2.7.1 ``` This releases introduces several vulnerability fixes: * Fixes a floating point division by 0 when executing convolution operators ([CVE-2022-21725](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21725)) * Fixes a heap OOB read in shape inference for `ReverseSequence` ([CVE-2022-21728](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21728)) * Fixes a heap OOB access in `Dequantize` ([CVE-2022-21726](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21726)) * Fixes an integer overflow in shape inference for `Dequantize` ([CVE-2022-21727](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21727)) * Fixes a heap OOB access in `FractionalAvgPoolGrad` ([CVE-2022-21730](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21730)) * Fixes an overflow and divide by zero in `UnravelIndex` ([CVE-2022-21729](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21729)) * Fixes a type confusion in shape inference for `ConcatV2` ([CVE-2022-21731](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21731)) * Fixes an OOM in `ThreadPoolHandle` ([CVE-2022-21732](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21732)) * Fixes an OOM due to integer overflow in 
`StringNGrams` ([CVE-2022-21733](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21733)) * Fixes more issues caused by incomplete validation in boosted trees code ([CVE-2021-41208](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41208)) * Fixes an integer overflows in most sparse component-wise ops ([CVE-2022-23567](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23567)) * Fixes an integer overflows in `AddManySparseToTensorsMap` ([CVE-2022-23568](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23568)) * Fixes a number of `CHECK`-failures in `MapStage` ([CVE-2022-21734](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21734)) * Fixes a division by zero in `FractionalMaxPool` ([CVE-2022-21735](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21735)) * Fixes a number of `CHECK`-fails when building invalid/overflowing tensor shapes ([CVE-2022-23569](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23569)) * Fixes an undefined behavior in `SparseTensorSliceDataset` ([CVE-2022-21736](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21736)) * Fixes an assertion failure based denial of service via faulty bin count operations ([CVE-2022-21737](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21737)) * Fixes a reference binding to null pointer in `QuantizedMaxPool` ([CVE-2022-21739](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21739)) * Fixes an integer overflow leading to crash in `SparseCountSparseOutput` ([CVE-2022-21738](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21738)) * Fixes a heap overflow in `SparseCountSparseOutput` ([CVE-2022-21740](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21740)) * Fixes an FPE in `BiasAndClamp` in TFLite ([CVE-2022-23557](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23557)) * Fixes an FPE in depthwise convolutions in TFLite ([CVE-2022-21741](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21741)) * Fixes an integer overflow in TFLite array creation ([CVE-2022-23558](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23558)) * Fixes an integer overflow in TFLite ([CVE-2022-23559](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23559)) * Fixes a dangerous OOB write in TFLite ([CVE-2022-23561](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23561)) * Fixes a vulnerability leading to read and write outside of bounds in TFLite ([CVE-2022-23560](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23560)) * Fixes a set of vulnerabilities caused by using insecure temporary files ([CVE-2022-23563](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23563)) * Fixes an integer overflow in Range resulting in undefined behavior and OOM ([CVE-2022-23562](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23562)) * Fixes a vulnerability where missing validation causes `tf.sparse.split` to crash when `axis` is a tuple ([CVE-2021-41206](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41206)) * Fixes a `CHECK`-fail when decoding resource handles from proto ([CVE-2022-23564](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23564)) * Fixes a `CHECK`-fail with repeated `AttrDef` ([CVE-2022-23565](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23565)) * Fixes a heap OOB write in Grappler ([CVE-2022-23566](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23566)) * Fixes a `CHECK`-fail when decoding invalid tensors from proto 
* Fixes a null-dereference when specializing tensor type ([CVE-2022-23570](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23570))
* Fixes a crash when a type cannot be specialized ([CVE-2022-23572](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23572))
* Fixes a heap OOB read/write in `SpecializeType` ([CVE-2022-23574](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23574))
* Fixes an uninitialized variable access in `AssignOp` ([CVE-2022-23573](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23573))
* Fixes an integer overflow in `OpLevelCostEstimator::CalculateTensorSize` ([CVE-2022-23575](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23575))
* Fixes an integer overflow in `OpLevelCostEstimator::CalculateOutputSize` ([CVE-2022-23576](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23576))
* Fixes a null dereference in `GetInitOp` ([CVE-2022-23577](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23577))
* Fixes a memory leak when a graph node is invalid ([CVE-2022-23578](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23578))
* Fixes an abort caused by allocating a vector that is too large ([CVE-2022-23580](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23580))
* Fixes multiple `CHECK`-failures during Grappler's `IsSimplifiableReshape` ([CVE-2022-23581](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23581))
* Fixes multiple `CHECK`-failures during Grappler's `SafeToRemoveIdentity` ([CVE-2022-23579](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23579))
* Fixes multiple `CHECK`-failures in `TensorByteSize` ([CVE-2022-23582](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23582))
* Fixes multiple `CHECK`-failures in binary ops due to type confusion ([CVE-2022-23583](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23583))
* Fixes a use after free in the `DecodePng` kernel ([CVE-2022-23584](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23584))
* Fixes a memory leak in decoding PNG images ([CVE-2022-23585](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23585))
* Fixes multiple `CHECK`-fails in `function.cc` ([CVE-2022-23586](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23586))
* Fixes multiple `CHECK`-fails due to attempting to build a reference tensor ([CVE-2022-23588](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23588))
* Fixes an integer overflow in Grappler cost estimation of crop and resize operations ([CVE-2022-23587](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23587))
* Fixes a null pointer dereference in Grappler's `IsConstant` ([CVE-2022-23589](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23589))
* Fixes a `CHECK` failure in constant folding ([CVE-2021-41197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41197))
* Fixes a stack overflow due to a self-recursive function in `GraphDef` ([CVE-2022-23591](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23591))
* Fixes a crash due to erroneous `StatusOr` ([CVE-2022-23590](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23590))
* Fixes multiple crashes and heap OOB accesses in the TFG dialect (MLIR) ([CVE-2022-23594](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23594))
* Fixes a null pointer dereference in `BuildXlaCompilationCache` (XLA) ([CVE-2022-23595](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23595))
* Updates `icu` to `69.1` to handle [CVE-2020-10531](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10531)
```
### 2.7.0
```
Breaking Changes
* `tf.keras`:
* The methods `Model.fit()`, `Model.predict()`, and `Model.evaluate()` will no longer uprank input data of shape `(batch_size,)` to become `(batch_size, 1)`. This enables `Model` subclasses to process scalar data in their `train_step()`/`test_step()`/`predict_step()` methods. Note that this change may break certain subclassed models. You can revert to the previous behavior by adding upranking yourself in the `train_step()`/`test_step()`/`predict_step()` methods, e.g. `if x.shape.rank == 1: x = tf.expand_dims(x, axis=-1)`. Functional models, as well as Sequential models built with an explicit input shape, are not affected.
* The methods `Model.to_yaml()` and `keras.models.model_from_yaml` have been replaced to raise a `RuntimeError`, as they can be abused to cause arbitrary code execution. It is recommended to use JSON serialization instead of YAML or, as a better alternative, to serialize to H5.
* `LinearModel` and `WideDeepModel` have been moved to the `tf.compat.v1.keras.models.` namespace (`tf.compat.v1.keras.models.LinearModel` and `tf.compat.v1.keras.models.WideDeepModel`), and their `experimental` endpoints (`tf.keras.experimental.models.LinearModel` and `tf.keras.experimental.models.WideDeepModel`) are being deprecated.
* RNG behavior change for all `tf.keras.initializers` classes. Any class constructed with a fixed seed will no longer generate the same value when invoked multiple times; instead, it returns a different value each time, following a deterministic sequence. This change aligns initializer behavior between v1 and v2.
* `tf.lite`:
* Rename fields in the `SignatureDef` table in the schema to maximize parity with TF SavedModel's Signature concept.
* Deprecate Makefile builds. Makefile users need to migrate their builds to CMake or Bazel. Please refer to the [Build TensorFlow Lite with CMake](https://www.tensorflow.org/lite/guide/build_cmake) and [Build TensorFlow Lite for ARM boards](https://www.tensorflow.org/lite/guide/build_arm) guides for the migration.
* Deprecate `tflite::OpResolver::GetDelegates`. The list returned by TfLite's `BuiltinOpResolver::GetDelegates` is now always empty. Instead, it is recommended to use the new method `tflite::OpResolver::GetDelegateCreators` in order to achieve lazy initialization of TfLite delegate instances.
* TF Core:
* `tf.Graph.get_name_scope()` now always returns a string, as documented. Previously, when called within `name_scope("")` or `name_scope(None)` contexts, it returned `None`; now it returns the empty string.
* `te