ratschlab / GP-VAE

TensorFlow implementation for the GP-VAE model described in https://arxiv.org/abs/1907.04155
MIT License

Bump tensorflow-gpu from 1.15.0 to 2.1.0 #4

Closed · dependabot[bot] closed this 3 years ago

dependabot[bot] commented 4 years ago

Bumps `tensorflow-gpu` from 1.15.0 to 2.1.0.
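Since 1.15.0 → 2.1.0 is a major-version jump, code written against the TF 1.x graph/session API generally does not run unchanged under 2.x. Below is a minimal sketch of the usual stopgap, the `tf.compat.v1` shim that ships with TensorFlow 2.x; whether this shim alone is sufficient for this repository's code is not verified by this PR.

```python
# Minimal sketch: running TF 1.x-style graph/session code on TensorFlow 2.1
# via the compatibility shim bundled with TF 2.x.
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()  # turn off eager execution and restore 1.x semantics

# A toy 1.x-style graph; stands in for real model code.
x = tf.placeholder(tf.float32, shape=[None, 3], name="x")
y = tf.reduce_sum(x, axis=1)

with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))  # -> [6.]
```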

Release notes

*Sourced from [tensorflow-gpu's releases](https://github.com/tensorflow/tensorflow/releases).*

> ## TensorFlow 2.1.0
>
> # Release 2.1.0
>
> TensorFlow 2.1 will be the last TF release supporting Python 2. Python 2 support [officially ends on January 1, 2020](https://www.python.org/dev/peps/pep-0373/#update). [As announced earlier](https://groups.google.com/a/tensorflow.org/d/msg/announce/gVwS5RC8mds/dCt1ka2XAAAJ), TensorFlow will also stop supporting Python 2 starting January 1, 2020, and no more releases are expected in 2019.
>
> ## Major Features and Improvements
>
> * The `tensorflow` pip package now includes GPU support by default (same as `tensorflow-gpu`) for both Linux and Windows. This runs on machines with and without NVIDIA GPUs. `tensorflow-gpu` is still available, and CPU-only packages can be downloaded at `tensorflow-cpu` for users who are concerned about package size.
> * **Windows users:** Officially-released `tensorflow` pip packages are now built with Visual Studio 2019 version 16.4 in order to take advantage of the new `/d2ReducedOptimizeHugeFunctions` compiler flag. To use these new packages, you must install the "Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019", available from Microsoft's website [here](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
>   * This does not change the minimum required version for building TensorFlow from source on Windows, but builds enabling `EIGEN_STRONG_INLINE` can take over 48 hours to compile without this flag. Refer to `configure.py` for more information about `EIGEN_STRONG_INLINE` and `/d2ReducedOptimizeHugeFunctions`.
>   * If either of the required DLLs, `msvcp140.dll` (old) or `msvcp140_1.dll` (new), is missing on your machine, `import tensorflow` will print a warning message.
> * The `tensorflow` pip package is built with CUDA 10.1 and cuDNN 7.6.
> * `tf.keras`
>   * Experimental support for mixed precision is available on GPUs and Cloud TPUs. See the [usage guide](https://www.tensorflow.org/guide/keras/mixed_precision).
>   * Introduced the `TextVectorization` layer, which takes raw strings as input and takes care of text standardization, tokenization, n-gram generation, and vocabulary indexing. See this [end-to-end text classification example](https://colab.research.google.com/drive/1RvCnR7h0_l4Ekn5vINWToI9TNJdpUZB3).
>   * Keras `.compile`, `.fit`, `.evaluate`, and `.predict` are allowed to be outside of the DistributionStrategy scope, as long as the model was constructed inside of a scope.
>   * Experimental support for Keras `.compile`, `.fit`, `.evaluate`, and `.predict` is available for Cloud TPUs, for all types of Keras models (sequential, functional, and subclassing models).
>   * Automatic outside compilation is now enabled for Cloud TPUs. This allows `tf.summary` to be used more conveniently with Cloud TPUs.
>   * Dynamic batch sizes with DistributionStrategy and Keras are supported on Cloud TPUs.
>   * Support for `.fit`, `.evaluate`, and `.predict` on TPU using numpy data, in addition to `tf.data.Dataset`.
>   * Keras reference implementations for many popular models are available in the TensorFlow [Model Garden](https://github.com/tensorflow/models/tree/master/official).
> * `tf.data`
>   * Changes rebatching for `tf.data` datasets + DistributionStrategy for better performance. Note that the dataset also behaves slightly differently, in that the rebatched dataset cardinality will always be a multiple of the number of replicas.
>   * `tf.data.Dataset` now supports automatic data distribution and sharding in distributed environments, including on TPU pods.
>   * Distribution policies for `tf.data.Dataset` can now be tuned with 1. `tf.data.experimental.AutoShardPolicy(OFF, AUTO, FILE, DATA)` 2. `tf.data.experimental.ExternalStatePolicy(WARN, IGNORE, FAIL)`
> * `tf.debugging`
>   * Add `tf.debugging.enable_check_numerics()` and `tf.debugging.disable_check_numerics()` to help debug the root causes of issues involving infinities and `NaN`s.
> * `tf.distribute`
>   * Custom training loop support on TPUs and TPU pods is available through `strategy.experimental_distribute_dataset`, `strategy.experimental_distribute_datasets_from_function`, `strategy.experimental_run_v2`, and `strategy.reduce`.
>   * Support for a global distribution strategy through `tf.distribute.experimental_set_strategy()`, in addition to `strategy.scope()`.
> * `TensorRT`
>   * [TensorRT 6.0](https://developer.nvidia.com/tensorrt#tensorrt-whats-new) is now supported and enabled by default. This adds support for more TensorFlow ops, including Conv3D, Conv3DBackpropInputV2, AvgPool3D, MaxPool3D, ResizeBilinear, and ResizeNearestNeighbor. In addition, the TensorFlow-TensorRT Python conversion API is exported as `tf.experimental.tensorrt.Converter`.
> * The environment variable `TF_DETERMINISTIC_OPS` has been added. When set to "true" or "1", this environment variable makes `tf.nn.bias_add` operate deterministically (i.e. reproducibly), but currently only when XLA JIT compilation is *not* enabled. Setting `TF_DETERMINISTIC_OPS` to "true" or "1" also makes cuDNN convolution and max-pooling operate deterministically. This makes Keras Conv\*D and MaxPool\*D layers operate deterministically in both the forward and backward directions when running on a CUDA-enabled GPU.
>
> ## Breaking Changes
>
> * Deletes `Operation.traceback_with_start_lines`, for which we know of no usages.
> * Removed `id` from `tf.Tensor.__repr__()`, as `id` is not useful other than for internal debugging.
> * Some `tf.assert_*` methods now raise assertions at operation creation time if the input tensors' values are known at that time, not during `session.run()`. This only changes behavior when the graph execution would have resulted in an error. When this happens, a noop is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in the `feed_dict` argument to `session.run()`, an error will be raised. Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often).
> * The following APIs are no longer experimental: `tf.config.list_logical_devices`, `tf.config.list_physical_devices`, `tf.config.get_visible_devices`, `tf.config.set_visible_devices`, `tf.config.get_logical_device_configuration`, `tf.config.set_logical_device_configuration`.
> * `tf.config.experimental.VirtualDeviceConfiguration` has been renamed to `tf.config.LogicalDeviceConfiguration`.
> * `tf.config.experimental_list_devices` has been removed; please use `tf.config.list_logical_devices`.
>
> ## Bug Fixes and Other Changes
>
> * `tf.data`
>   * Fixes a concurrency issue with `tf.data.experimental.parallel_interleave` with `sloppy=True`.
>   * Add `tf.data.experimental.dense_to_ragged_batch()`.
>   * Extend `tf.data` parsing ops to support `RaggedTensors`.
> * `tf.distribute`
>   * Fix an issue where GRU would crash or give incorrect output when a `tf.distribute.Strategy` was used.
> * `tf.estimator`
>
> ... (truncated)
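As a quick illustration of two debugging-related additions called out in the notes above, here is a sketch; the determinism guarantee depends on the hardware and, for `tf.nn.bias_add`, on XLA JIT compilation being disabled.

```python
import os

# Must be set before any GPU kernels run: requests deterministic
# cuDNN convolution/pooling and deterministic tf.nn.bias_add,
# per the 2.1.0 release notes.
os.environ["TF_DETERMINISTIC_OPS"] = "1"

import tensorflow as tf

# Raise a detailed error at the first op that produces an Inf or NaN,
# instead of letting it propagate silently through training.
tf.debugging.enable_check_numerics()
```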
Changelog

*Sourced from [tensorflow-gpu's changelog](https://github.com/tensorflow/tensorflow/blob/master/RELEASE.md); its contents duplicate the release notes quoted above, adding one further bullet under `tf.estimator`: an option in `tf.estimator.CheckpointSaverHook` to not save the `GraphDef`. (truncated)*
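The device-configuration APIs promoted out of `tf.config.experimental` in the breaking changes above can be exercised as follows (a sketch; the output depends on the machine it runs on):

```python
import tensorflow as tf

# Stable as of 2.1.0 (previously under tf.config.experimental).
print(tf.config.list_physical_devices("GPU"))
print(tf.config.list_logical_devices())

# Restrict TensorFlow to the first GPU, if one is present.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[0], "GPU")
```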
Commits

- [`e5bf8de`](https://github.com/tensorflow/tensorflow/commit/e5bf8de410005de06a7ff5393fafdf832ef1d4ad) Merge pull request [#35620](https://github-redirect.dependabot.com/tensorflow/tensorflow/issues/35620) from tensorflow-jenkins/version-numbers-2.1.0-23859
- [`d43e3c7`](https://github.com/tensorflow/tensorflow/commit/d43e3c70d5cf72a89b5b07df6253f3fe01514439) Update version numbers to 2.1.0
- [`bd56e04`](https://github.com/tensorflow/tensorflow/commit/bd56e040ba4a8272163357a2f8786a128deb4aaf) Merge pull request [#35548](https://github-redirect.dependabot.com/tensorflow/tensorflow/issues/35548) from compnerd/r2.1-windows-build
- [`8427475`](https://github.com/tensorflow/tensorflow/commit/842747595bb3ba340b64ed62226c874659937102) Merge pull request [#35567](https://github-redirect.dependabot.com/tensorflow/tensorflow/issues/35567) from Intel-tensorflow/tf-2.1-klocwork-cherry-pick
- [`5efe38d`](https://github.com/tensorflow/tensorflow/commit/5efe38d81cae6a8724fdf76f55f1b98ed747d452) Merge pull request [#35566](https://github-redirect.dependabot.com/tensorflow/tensorflow/issues/35566) from tensorflow/mm-r21-macos-py3
- [`3ed1f02`](https://github.com/tensorflow/tensorflow/commit/3ed1f0218eeddfbfecb021b2e9f585da860420f1) Cherry-picking 55e20a6 - klockwork fix
- [`5b7addf`](https://github.com/tensorflow/tensorflow/commit/5b7addfa22315b1d66d604b6d7ced5a324622397) Add Python3.7 testing on MacOS as we drop support for Python2.
- [`a8adad9`](https://github.com/tensorflow/tensorflow/commit/a8adad90ac581d99d2a1ab2517602fda7649d6cf) configure.py: add `-D_USE_MATH_DEFINES` manually
- [`9badef2`](https://github.com/tensorflow/tensorflow/commit/9badef2f3f124508dfd0679f67851c27b9a7bcb8) Define _USE_MATH_DEFINES for windows builds.
- [`4c07219`](https://github.com/tensorflow/tensorflow/commit/4c0721928b949ea67ea3f47650c7a65afe9611c5) [XLA] Remove use of designated initializers in dynamic dimension inference.
- Additional commits viewable in the [compare view](https://github.com/tensorflow/tensorflow/compare/v1.15.0...v2.1.0)


Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot ignore this [patch|minor|major] version` will close this PR and stop Dependabot creating any more for this minor/major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language

You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/ratschlab/GP-VAE/network/alerts).
dependabot[bot] commented 3 years ago

Superseded by #6.