jd-aig / nlp_baai

NLP models and codes for BAAI-JD joint project.

Bump tensorflow from 2.1.0 to 2.5.0 in /jddc2020_baseline/mhred/tensorflow2 #27

Closed · dependabot[bot] closed this 3 years ago

dependabot[bot] commented 3 years ago

Bumps tensorflow from 2.1.0 to 2.5.0.

Release notes

Sourced from tensorflow's releases.

TensorFlow 2.5.0

Release 2.5.0

Major Features and Improvements

  • Support for Python 3.9 has been added.
  • tf.data:
    • tf.data service now supports strict round-robin reads, which is useful for synchronous training workloads where example sizes vary. With strict round-robin reads, users can guarantee that consumers get similar-sized examples in the same step.
    • tf.data service now supports optional compression. Previously data would always be compressed, but now you can disable compression by passing compression=None to tf.data.experimental.service.distribute(...) (see the sketch after this list).
    • tf.data.Dataset.batch() now supports num_parallel_calls and deterministic arguments. num_parallel_calls is used to indicate that multiple input batches should be computed in parallel. When num_parallel_calls is set, deterministic is used to indicate whether outputs may be returned in a non-deterministic order (example after this list).
    • Options returned by tf.data.Dataset.options() are no longer mutable.
    • tf.data input pipelines can now be executed in debug mode, which disables any asynchrony, parallelism, or non-determinism and forces Python execution (as opposed to trace-compiled graph execution) of user-defined functions passed into transformations such as map. The debug mode can be enabled through tf.data.experimental.enable_debug_mode() (see the snippet after this list).
  • tf.lite
    • Enabled the new MLIR-based quantization backend by default
      • The new backend is used for 8-bit full-integer post-training quantization
      • The new backend removes redundant rescales and fixes some bugs (shared weight/bias, extremely small scales, etc.)
      • Set experimental_new_quantizer in tf.lite.TFLiteConverter to False to disable this change (see the converter sketch after this list)
  • tf.keras
    • tf.keras.metrics.AUC now supports logit predictions.
    • Enabled a new supported input type in Model.fit, tf.keras.utils.experimental.DatasetCreator, which takes a callable, dataset_fn. DatasetCreator is intended to work across all tf.distribute strategies, and is the only input type supported for Parameter Server strategy (a minimal sketch follows this list).
  • tf.distribute
    • tf.distribute.experimental.ParameterServerStrategy now supports training with Keras Model.fit when used with DatasetCreator.
    • Creating tf.random.Generator under tf.distribute.Strategy scopes is now allowed (except for tf.distribute.experimental.CentralStorageStrategy and tf.distribute.experimental.ParameterServerStrategy). Different replicas will get different random-number streams (illustrated after this list).
  • TPU embedding support
    • Added profile_data_directory to EmbeddingConfigSpec in _tpu_estimator_embedding.py. This allows embedding lookup statistics gathered at runtime to be used in embedding layer partitioning decisions.
  • PluggableDevice: third-party devices can now connect to TensorFlow in a modular way through the StreamExecutor C API.
  • oneAPI Deep Neural Network Library (oneDNN) CPU performance optimizations from Intel-optimized TensorFlow are now available in the official x86-64 Linux and Windows builds.
    • They are off by default. Enable them by setting the environment variable TF_ENABLE_ONEDNN_OPTS=1 (see the snippet after this list).
    • We do not recommend using them on GPU systems, as they have not been sufficiently tested with GPUs yet.
  • TensorFlow pip packages are now built with CUDA11.2 and cuDNN 8.1.0
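
A minimal sketch of the new tf.data service compression knob, assuming a dispatcher is already running at a hypothetical address. The consumer_index/num_consumers arguments shown for round-robin reads are my reading of the 2.5 API, not something spelled out in the notes above:

```python
import tensorflow as tf

# Hypothetical dispatcher address; a real deployment starts the tf.data
# service dispatcher and workers separately.
SERVICE = "grpc://localhost:5000"

# Disable the previously mandatory compression.
ds = tf.data.Dataset.range(100).apply(
    tf.data.experimental.service.distribute(
        processing_mode="parallel_epochs",
        service=SERVICE,
        compression=None))

# Strict round-robin reads (assumed signature): each consumer passes its
# own index so all consumers receive elements in lock step.
ds_rr = tf.data.Dataset.range(100).apply(
    tf.data.experimental.service.distribute(
        processing_mode="parallel_epochs",
        service=SERVICE,
        job_name="shared_job",
        consumer_index=0,
        num_consumers=2))
```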
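The new Dataset.batch() arguments in a self-contained example; trading determinism for throughput here is an illustrative choice, not a recommendation from the release notes:

```python
import tensorflow as tf

ds = tf.data.Dataset.range(1_000)

# Compute several batches in parallel, and allow them to be returned
# out of order for extra throughput.
ds = ds.batch(32,
              num_parallel_calls=tf.data.experimental.AUTOTUNE,
              deterministic=False)

for batch in ds.take(2):
    print(batch.shape)
```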
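Enabling the new tf.data debug mode; since it switches off asynchrony and parallelism, the call is assumed to belong before any pipeline is built:

```python
import tensorflow as tf

# Call before constructing any tf.data pipeline.
tf.data.experimental.enable_debug_mode()

# map() now executes the Python function eagerly instead of as a
# trace-compiled graph, so print() and debuggers work inside it.
ds = tf.data.Dataset.range(4).map(lambda x: x * 2)
for x in ds:
    print(x.numpy())
```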
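A sketch of opting out of the new MLIR-based quantizer; the SavedModel path is hypothetical, and a full-integer post-training quantization flow would additionally need a representative dataset:

```python
import tensorflow as tf

# Hypothetical path to an exported SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/my_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# Fall back to the old quantization backend.
converter.experimental_new_quantizer = False

tflite_model = converter.convert()
open("/tmp/model.tflite", "wb").write(tflite_model)
```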
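A minimal DatasetCreator sketch with a toy model and random data; the sharding inside dataset_fn is one plausible use of the per-worker input_context, not something prescribed by the notes:

```python
import numpy as np
import tensorflow as tf

features = np.random.rand(1000, 10).astype("float32")
labels = np.random.randint(0, 2, size=(1000, 1)).astype("float32")

def dataset_fn(input_context):
    # Called once per worker; input_context carries sharding information.
    batch_size = input_context.get_per_replica_batch_size(64)
    ds = tf.data.Dataset.from_tensor_slices((features, labels))
    ds = ds.shard(input_context.num_input_pipelines,
                  input_context.input_pipeline_id)
    return ds.shuffle(1000).batch(batch_size).repeat()

model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy")

# steps_per_epoch is required because the dataset repeats indefinitely.
model.fit(tf.keras.utils.experimental.DatasetCreator(dataset_fn),
          epochs=2, steps_per_epoch=10)
```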
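Creating a tf.random.Generator under a strategy scope, as now allowed; MirroredStrategy is just a convenient strategy to demonstrate with:

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

# Allowed as of 2.5 (except under CentralStorageStrategy and
# ParameterServerStrategy).
with strategy.scope():
    gen = tf.random.Generator.from_seed(1234)

@tf.function
def sample():
    return gen.normal(shape=[3])

# Each replica draws from its own random-number stream.
per_replica = strategy.run(sample)
print(per_replica)
```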
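Turning the oneDNN optimizations on from Python; like other TensorFlow environment flags, it is assumed to be read at import time, so set it first (or export it in the shell):

```python
import os

# Set before importing TensorFlow so the flag is read at load time.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

import tensorflow as tf  # noqa: E402
```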

Breaking Changes

  • The TF_CPP_MIN_VLOG_LEVEL environment variable has been renamed to TF_CPP_MAX_VLOG_LEVEL, which correctly describes its effect (see the snippet below).
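
The renamed variable in use; as with other TF_CPP_* flags, it is assumed to be read when TensorFlow loads, so set it before the import:

```python
import os

# TF 2.5 reads TF_CPP_MAX_VLOG_LEVEL (formerly TF_CPP_MIN_VLOG_LEVEL).
os.environ["TF_CPP_MAX_VLOG_LEVEL"] = "1"

import tensorflow as tf  # noqa: E402
```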

Bug Fixes and Other Changes

  • tf.keras:
    • Preprocessing layers API consistency changes:
      • StringLookup added output_mode, sparse, and pad_to_max_tokens arguments with the same semantics as TextVectorization.
      • IntegerLookup added output_mode, sparse, and pad_to_max_tokens arguments with the same semantics as TextVectorization. Renamed max_values, oov_value, and mask_value to max_tokens, oov_token, and mask_token to align with StringLookup and TextVectorization (see the lookup sketch after this list).
      • TextVectorization default for pad_to_max_tokens switched to False.
      • CategoryEncoding no longer supports adapt; IntegerLookup now supports equivalent functionality. The max_tokens argument was renamed to num_tokens.
      • Discretization added a num_bins argument for learning bin boundaries by calling adapt on a dataset. Renamed the bins argument to bin_boundaries for specifying bins without adapt (see the sketch after this list).
    • Improvements to model saving/loading:
      • model.load_weights now accepts paths to saved models (example after this list).
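
A sketch of the new StringLookup arguments, using the tf.keras.layers.experimental.preprocessing path where these layers lived in 2.5; the "count" output mode and the fixed vocabulary are illustrative choices:

```python
import tensorflow as tf
from tensorflow.keras.layers.experimental import preprocessing

vocab = ["a", "b", "c"]

# Emit per-token counts instead of integer indices; pad_to_max_tokens
# keeps the output width fixed at max_tokens.
lookup = preprocessing.StringLookup(
    vocabulary=vocab,
    output_mode="count",
    pad_to_max_tokens=True,
    max_tokens=8)

print(lookup(tf.constant([["a", "c", "a"]])))
```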
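The two Discretization entry points described above, sketched under the same experimental.preprocessing path:

```python
import numpy as np
from tensorflow.keras.layers.experimental import preprocessing

# Learn bin boundaries from data with the new num_bins argument.
disc = preprocessing.Discretization(num_bins=4)
disc.adapt(np.random.rand(100, 1).astype("float32"))

# Or specify boundaries directly via the renamed bin_boundaries argument.
disc_fixed = preprocessing.Discretization(bin_boundaries=[0.25, 0.5, 0.75])
```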
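A sketch of load_weights pointed at a SavedModel directory; the path and toy model are hypothetical:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])
model.save("/tmp/whole_model")  # SavedModel directory

clone = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])
# New in 2.5: load_weights also accepts a path to a saved model, not
# just a checkpoint prefix or HDF5 weights file.
clone.load_weights("/tmp/whole_model")
```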

... (truncated)

Changelog

Sourced from tensorflow's changelog.

Release 2.5.0

Breaking Changes

  • The TF_CPP_MIN_VLOG_LEVEL environment variable has been renamed to TF_CPP_MAX_VLOG_LEVEL, which correctly describes its effect.

Known Caveats

Major Features and Improvements

  • TPU embedding support

    • Added profile_data_directory to EmbeddingConfigSpec in _tpu_estimator_embedding.py. This allows embedding lookup statistics gathered at runtime to be used in embedding layer partitioning decisions.
  • tf.keras.metrics.AUC now supports logit predictions.

  • Creating tf.random.Generator under tf.distribute.Strategy scopes is now allowed (except for tf.distribute.experimental.CentralStorageStrategy and tf.distribute.experimental.ParameterServerStrategy). Different replicas will get different random-number streams.

  • tf.data:

    • tf.data service now supports strict round-robin reads, which is useful for synchronous training workloads where example sizes vary. With strict round-robin reads, users can guarantee that consumers get similar-sized examples in the same step.
    • tf.data service now supports optional compression. Previously data would always be compressed, but now you can disable compression by passing compression=None to tf.data.experimental.service.distribute(...).
    • tf.data.Dataset.batch() now supports num_parallel_calls and deterministic arguments. num_parallel_calls is used to indicate that multiple input batches should be computed in parallel. When num_parallel_calls is set, deterministic is used to indicate whether outputs may be returned in a non-deterministic order.
    • Options returned by tf.data.Dataset.options() are no longer mutable.
    • tf.data input pipelines can now be executed in debug mode, which disables any asynchrony, parallelism, or non-determinism and forces Python execution (as opposed to trace-compiled graph execution) of user-defined functions passed into transformations such as map. The debug mode can be enabled through tf.data.experimental.enable_debug_mode().
  • tf.lite

    • Enabled the new MLIR-based quantization backend by default
      • The new backend is used for 8-bit full-integer post-training quantization
      • The new backend removes redundant rescales and fixes some bugs (shared weight/bias, extremely small scales, etc.)

... (truncated)

Commits
  • a4dfb8d Merge pull request #49124 from tensorflow/mm-cherrypick-tf-data-segfault-fix-...
  • 2107b1d Merge pull request #49116 from tensorflow-jenkins/version-numbers-2.5.0-17609
  • 16b8139 Update snapshot_dataset_op.cc
  • 86a0d86 Merge pull request #49126 from geetachavan1/cherrypicks_X9ZNY
  • 9436ae6 Merge pull request #49128 from geetachavan1/cherrypicks_D73J5
  • 6b2bf99 Validate that a and b are proper sparse tensors
  • c03ad1a Ensure validation sticks in banded_triangular_solve_op
  • 12a6ead Merge pull request #49120 from geetachavan1/cherrypicks_KJ5M9
  • b67f5b8 Merge pull request #49118 from geetachavan1/cherrypicks_BIDTR
  • a13c0ad [tf.data][cherrypick] Fix snapshot segfault when using repeat and prefecth
  • Additional commits viewable in compare view


Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language

You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/jd-aig/nlp_baai/network/alerts).
dependabot[bot] commented 3 years ago

Superseded by #31.