KDD-OpenSource / DeepADoTS

Repository of the paper "A Systematic Evaluation of Deep Anomaly Detection Methods for Time Series".
MIT License

Bump tensorflow from 1.13.0rc1 to 2.0.0 #197

Closed: dependabot[bot] closed this PR 4 years ago

dependabot[bot] commented 5 years ago

Bumps tensorflow from 1.13.0rc1 to 2.0.0.
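As a quick sanity check after the bump (a minimal sketch, not part of this PR), the installed version and the new default eager mode can be verified from Python:

```python
import tensorflow as tf

# After upgrading, the package should report 2.0.0 and run eagerly by default.
print(tf.__version__)           # expected: "2.0.0"
print(tf.executing_eagerly())   # True in TF 2.x (eager execution is the default)
```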

Release notes

*Sourced from [tensorflow's releases](https://github.com/tensorflow/tensorflow/releases).*

> ## TensorFlow 2.0.0
>
> ## Major Features and Improvements
>
> TensorFlow 2.0 focuses on **simplicity** and **ease of use**, featuring updates like:
>
> * Easy model building with Keras and eager execution.
> * Robust model deployment in production on any platform.
> * Powerful experimentation for research.
> * API simplification by reducing duplication and removing deprecated endpoints.
>
> For details on best practices with 2.0, see [the Effective 2.0 guide](https://www.tensorflow.org/beta/guide/effective_tf2).
>
> For information on upgrading your existing TensorFlow 1.x models, please refer to our [Upgrade](https://medium.com/tensorflow/upgrading-your-code-to-tensorflow-2-0-f72c3a4d83b5) and [Migration](https://www.tensorflow.org/guide/migrate) guides. We have also released a collection of [tutorials and getting started guides](https://www.tensorflow.org/beta).
>
> ## Highlights
>
> * TF 2.0 delivers Keras as the central high-level API used to build and train models. Keras provides several model-building APIs such as Sequential, Functional, and Subclassing, along with eager execution for immediate iteration and intuitive debugging, and `tf.data` for building scalable input pipelines. Check out the [guide](https://www.tensorflow.org/beta/guide/keras/overview) for additional details.
> * Distribution Strategy: TF 2.0 users will be able to use the [`tf.distribute.Strategy`](https://www.tensorflow.org/beta/guide/distribute_strategy) API to distribute training with minimal code changes, yielding great out-of-the-box performance. It supports distributed training with Keras `model.fit`, as well as with custom training loops. Multi-GPU support is available, along with experimental support for multi-worker and Cloud TPU configurations. Check out the [guide](https://www.tensorflow.org/beta/guide/distribute_strategy) for more details.
> * Functions, not Sessions. The traditional declarative programming model of building a graph and executing it via a `tf.Session` is discouraged and replaced by writing regular Python functions. Using the `tf.function` decorator, such functions can be turned into graphs which can be executed remotely, serialized, and optimized for performance.
> * Unification of `tf.train.Optimizers` and `tf.keras.Optimizers`. Use `tf.keras.Optimizers` for TF 2.0. `compute_gradients` is removed as a public API; use `GradientTape` to compute gradients.
> * AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with the `tf.data`, `tf.distribute`, and `tf.keras` APIs.
> * Unification of exchange formats to SavedModel. All TensorFlow ecosystem projects (TensorFlow Lite, TensorFlow.js, TensorFlow Serving, TensorFlow Hub) accept SavedModels. Model state should be saved to and restored from SavedModels.
> * API Changes: Many API symbols have been renamed or removed, and argument names have changed. Many of these changes are motivated by consistency and clarity. The 1.x API remains available in the `compat.v1` module. A list of all symbol changes can be found [here](https://docs.google.com/spreadsheets/d/1FLFJLzg7WNP6JHODX5q8BDgptKafq_slHpnHVbJIteQ/edit#gid=0).
> * API clean-up, including removing `tf.app`, `tf.flags`, and `tf.logging` in favor of [absl-py](https://github.com/abseil/abseil-py).
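For orientation only (not part of the PR), here is a minimal sketch of the TF 2.x idioms these highlights describe: eager execution by default, `tf.function` instead of sessions, and `tf.keras.optimizers` with `GradientTape` instead of `compute_gradients`. The toy loss and variable names are illustrative.

```python
import tensorflow as tf

w = tf.Variable(2.0)                                  # model state as a tf.Variable
opt = tf.keras.optimizers.SGD(learning_rate=0.1)      # unified Keras optimizer

@tf.function                                          # traces the Python function into a graph
def train_step(x, y):
    with tf.GradientTape() as tape:                   # records ops for automatic differentiation
        loss = tf.reduce_mean((w * x - y) ** 2)
    grads = tape.gradient(loss, [w])                  # replaces Optimizer.compute_gradients
    opt.apply_gradients(zip(grads, [w]))
    return loss

# Called like a normal Python function; no tf.Session is needed.
print(train_step(tf.constant([1.0, 2.0]), tf.constant([3.0, 6.0])).numpy())
```

Similarly, the Distribution Strategy and SavedModel highlights amount to wrapping model construction in a strategy scope and exporting via `tf.saved_model.save`. This sketch assumes a single machine (`MirroredStrategy` falls back to one device when no extra GPUs are present); the random data and export path are hypothetical.

```python
import numpy as np
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()            # distributes across available GPUs
with strategy.scope():                                 # variables are created under the strategy
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer="sgd", loss="mse")

x = np.random.rand(32, 4).astype("float32")
y = np.random.rand(32, 1).astype("float32")
model.fit(x, y, epochs=1, verbose=0)                   # model.fit works unchanged under the strategy

tf.saved_model.save(model, "/tmp/tf2_demo_model")      # SavedModel is the unified exchange format
```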
> * No more global variables with helper methods like `tf.global_variables_initializer` and `tf.get_global_step`.
> * Add toggles `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()` for enabling/disabling v2 control flow.
> * Enable v2 control flow as part of `tf.enable_v2_behavior()` and `TF2_BEHAVIOR=1`.
> * Fixes autocomplete for most TensorFlow API references by switching to relative imports in API `__init__.py` files.
> * Auto Mixed-Precision graph optimizer simplifies converting models to `float16` for acceleration on Volta and Turing Tensor Cores. This feature can be enabled by wrapping an optimizer class with `tf.train.experimental.enable_mixed_precision_graph_rewrite()`.
> * Add environment variable `TF_CUDNN_DETERMINISTIC`. Setting it to `TRUE` or `"1"` forces the selection of deterministic cuDNN convolution and max-pooling algorithms. When this is enabled, the algorithm selection procedure itself is also deterministic.
>
> ## Breaking Changes
>
> * Many backwards-incompatible API changes have been made to clean up the APIs and make them more consistent.
> * Toolchains:
>   * TensorFlow 2.0.0 is built using devtoolset7 (GCC7) on Ubuntu 16. This may lead to ABI incompatibilities with extensions built against earlier versions of TensorFlow.
>   * TensorFlow code now produces two different pip packages: `tensorflow_core`, containing all the code (in the future it will contain only the private implementation), and `tensorflow`, a virtual pip package that forwards to `tensorflow_core` (and in the future will contain only the public API of TensorFlow). We don't expect this to be breaking, unless you were importing directly from the implementation.
>   * Removed the `freeze_graph` command line tool; `SavedModel` should be used in place of frozen graphs.
> * `tf.contrib`:
>   * `tf.contrib` has been deprecated, and functionality has been either migrated to the core TensorFlow API, to an ecosystem project such as [tensorflow/addons](https://www.github.com/tensorflow/addons) or [tensorflow/io](https://www.github.com/tensorflow/io), or removed entirely.
>   * Remove `tf.contrib.timeseries` dependency on TF distributions.
>   * Replace contrib references with `tf.estimator.experimental.*` for APIs in `early_stopping.py`.
> * `tf.estimator`:
>   * Premade estimators in the `tf.estimator.DNN/Linear/DNNLinearCombined` family have been updated to use `tf.keras.optimizers` instead of the `tf.compat.v1.train.Optimizer`s. If you do not pass in an `optimizer=` arg, or if you use a string, the premade estimator will use the Keras optimizer. This is checkpoint-breaking, as the optimizers have separate variables. A checkpoint converter tool for converting optimizers is included with the release, but if you want to avoid any change, switch to the v1 version of the estimator: `tf.compat.v1.estimator.DNN/Linear/DNNLinearCombined*`.
>   * Default aggregation for canned Estimators is now `SUM_OVER_BATCH_SIZE`. To maintain previous default behavior, please pass `SUM` as the loss aggregation method.
>   * Canned Estimators don't support the `input_layer_partitioner` arg in the API. If you have this arg, you will have to switch to the `tf.compat.v1` canned Estimators.
>
> ... (truncated)
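As a sketch of the compatibility escape hatches mentioned above (assuming code that still uses the 1.x graph-and-session style), the `compat.v1` namespace and the new deterministic-cuDNN environment variable can be used like this; the placeholder computation is illustrative only.

```python
import os
import tensorflow as tf

os.environ["TF_CUDNN_DETERMINISTIC"] = "1"   # opt in to deterministic cuDNN kernels (new in 2.0)

tf.compat.v1.disable_v2_behavior()           # keep 1.x semantics (graph mode, v1 control flow)

# Legacy 1.x code keeps working under the tf.compat.v1 namespace.
x = tf.compat.v1.placeholder(tf.float32, shape=[None])
y = x * 2.0
with tf.compat.v1.Session() as sess:
    print(sess.run(y, feed_dict={x: [1.0, 2.0, 3.0]}))
```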
Changelog

*Sourced from [tensorflow's changelog](https://github.com/tensorflow/tensorflow/blob/master/RELEASE.md).* The 2.0.0 changelog entry repeats the release notes quoted above. ... (truncated)
Commits

- [`64c3d38`](https://github.com/tensorflow/tensorflow/commit/64c3d382cadf7bbe8e7e99884bede8284ff67f56) Update RELEASE.md
- [`2845767`](https://github.com/tensorflow/tensorflow/commit/2845767d913eb2e970c3039749f7333ab2fdebc0) Update RELEASE.md
- [`3d230aa`](https://github.com/tensorflow/tensorflow/commit/3d230aaa1f5021c83143b8c6be8f49678c8a77db) Update release notes for tensorrt and mixed precision
- [`b1c5361`](https://github.com/tensorflow/tensorflow/commit/b1c53619cf1709249df17bf0faf70a584a940885) Update RELEASE.md
- [`5105437`](https://github.com/tensorflow/tensorflow/commit/51054374eaa2f478ddf17a2ed901daf4c65b1178) Update RELEASE.md
- [`cf6180b`](https://github.com/tensorflow/tensorflow/commit/cf6180b8415870924cb278502785cc19d26ee7f4) Update RELEASE.md
- [`ec8d660`](https://github.com/tensorflow/tensorflow/commit/ec8d660892eedba0f8cd5eb414769aab5dd95c77) Release Notes for 2.0.0-rc0
- [`ac24e9e`](https://github.com/tensorflow/tensorflow/commit/ac24e9eb3a369b9f09a10415ca06ecb1ac97d9fe) Merge pull request [#32861](https://github-redirect.dependabot.com/tensorflow/tensorflow/issues/32861) from guptapriya/cherrypicks_5NZHH
- [`23a9413`](https://github.com/tensorflow/tensorflow/commit/23a94133f5033ee156c5e1cc58a6cb54ad1e8a6e) Mark tf.keras.utils.multi_gpu_model as deprecated.
- [`1f372a0`](https://github.com/tensorflow/tensorflow/commit/1f372a0968f9f75d3ca54d0c3d2392c7c1eb316b) Merge pull request [#32742](https://github-redirect.dependabot.com/tensorflow/tensorflow/issues/32742) from rmlarsen/cherrypicks_BX1WK
- Additional commits viewable in [compare view](https://github.com/tensorflow/tensorflow/compare/v1.13.0-rc1...v2.0.0)


Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot ignore this [patch|minor|major] version` will close this PR and stop Dependabot creating any more for this minor/major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language

You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/KDD-OpenSource/DeepADoTS/network/alerts).
dependabot[bot] commented 4 years ago

Superseded by #199.