donkirkby / zero-play

Teach a computer to play any game.
https://donkirkby.github.io/zero-play/
MIT License

Bump tensorflow from 2.4.2 to 2.6.0 #68

Closed dependabot[bot] closed 3 years ago

dependabot[bot] commented 3 years ago

Bumps tensorflow from 2.4.2 to 2.6.0.

Release notes

Sourced from tensorflow's releases.

TensorFlow 2.6.0

Release 2.6.0

Breaking Changes

  • tf.train.experimental.enable_mixed_precision_graph_rewrite is removed, as the API only works in graph mode and is not customizable. The function is still accessible under tf.compat.v1.mixed_precision.enable_mixed_precision_graph_rewrite, but it is recommended to use the Keras mixed precision API instead (see the sketch after this list).

  • tf.lite:

    • Remove experimental.nn.dynamic_rnn, experimental.nn.TfLiteRNNCell and experimental.nn.TfLiteLSTMCell since they're no longer supported. It's recommended to use the Keras LSTM layer instead.
  • tf.keras:

    • Keras has been split into a separate PIP package (keras), and its code has been moved to the GitHub repository keras-team/keras. The API endpoints for tf.keras stay unchanged, but are now backed by the keras PIP package. The existing code in tensorflow/python/keras is a stale copy and will be removed in a future release (2.7). Please remove any imports of tensorflow.python.keras and replace them with the public tf.keras API instead.
    • The methods Model.to_yaml() and keras.models.model_from_yaml have been replaced to raise a RuntimeError, as they can be abused to cause arbitrary code execution. It is recommended to use JSON serialization instead of YAML or, as a better alternative, to serialize to H5.
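
The two Keras-related changes above map to small code edits. Below is a minimal, hedged sketch (the toy model is made up for illustration) of the Keras mixed precision API that replaces the removed graph-rewrite function, and of JSON serialization in place of the removed YAML methods.

```python
import tensorflow as tf

# Replacement for the removed tf.train.experimental.enable_mixed_precision_graph_rewrite:
# the Keras mixed precision API sets a global float16-compute / float32-variable policy.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# Toy model, only here to have something to serialize.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Model.to_yaml() / model_from_yaml() now raise RuntimeError; use JSON (or H5) instead.
config_json = model.to_json()
restored = tf.keras.models.model_from_json(config_json)
```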

Known Caveats

  • TF Core:
    • A longstanding bug in tf.while_loop, which caused it to execute sequentially, even when parallel_iterations>1, has now been fixed. However, the increased parallelism may result in increased memory use. Users who experience unwanted regressions should reset their while_loop's parallel_iterations value to 1, which is consistent with prior behavior.
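
For anyone who hits the memory regression mentioned above, here is a minimal sketch of pinning parallel_iterations back to 1; the counting loop itself is just a toy example.

```python
import tensorflow as tf

# Toy loop that counts to 10. parallel_iterations=1 restores the strictly
# sequential execution that tf.while_loop had before this fix.
i = tf.constant(0)
cond = lambda i: tf.less(i, 10)
body = lambda i: (tf.add(i, 1),)
result = tf.while_loop(cond, body, [i], parallel_iterations=1)
```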

Major Features and Improvements

  • tf.keras:

    • Keras has been split into a separate PIP package (keras), and its code has been moved to the GitHub repository keras-team/keras. The API endpoints for tf.keras stay unchanged, but are now backed by the keras PIP package. All Keras-related PRs and issues should now be directed to the GitHub repository keras-team/keras.
    • tf.keras.utils.experimental.DatasetCreator now takes an optional tf.distribute.InputOptions for specific options when used with distribution.
    • tf.keras.experimental.SidecarEvaluator is now available for a program intended to be run on an evaluator task, which is commonly used to supplement a training cluster running with tf.distribute.experimental.ParameterServerStrategy (see https://www.tensorflow.org/tutorials/distribute/parameter_server_training). It can also be used with single-worker training or other strategies. See docstring for more info.
    • Preprocessing layers moved from experimental to core.
      • Import paths moved from tf.keras.layers.preprocessing.experimental to tf.keras.layers.
    • Updates to Preprocessing layers API for consistency and clarity:
      • StringLookup and IntegerLookup default for mask_token changed to None. This matches the default masking behavior of Hashing and Embedding layers. To keep existing behavior, pass mask_token="" during layer creation.
      • Renamed "binary" output mode to "multi_hot" for CategoryEncoding, StringLookup, IntegerLookup, and TextVectorization. Multi-hot encoding will no longer automatically uprank rank 1 inputs, so these layers can now multi-hot encode unbatched multi-dimensional samples.
      • Added a new output mode "one_hot" for CategoryEncoding, StringLookup, IntegerLookup, which will encode each element in an input batch individually, and automatically append a new output dimension if necessary. Use this mode on rank 1 inputs for the old "binary" behavior of one-hot encoding a batch of scalars.
      • Normalization will no longer automatically uprank rank 1 inputs, allowing normalization of unbatched multi-dimensional samples.
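
A hedged sketch of the preprocessing-layer changes above; the vocabulary and inputs are made up for illustration.

```python
import tensorflow as tf

vocab = ["a", "b", "c"]

# mask_token now defaults to None; pass mask_token="" to keep the old masking behavior.
masked_lookup = tf.keras.layers.StringLookup(vocabulary=vocab, mask_token="")

# "multi_hot" replaces the old "binary" output mode and no longer upranks rank-1 input,
# so an unbatched multi-dimensional sample is encoded as a single multi-hot vector.
multi_hot = tf.keras.layers.StringLookup(vocabulary=vocab, output_mode="multi_hot")
print(multi_hot(tf.constant(["a", "c"])))

# "one_hot" encodes each element of a rank-1 batch individually, recovering the old
# "binary" behavior of one-hot encoding a batch of scalars.
one_hot = tf.keras.layers.StringLookup(vocabulary=vocab, output_mode="one_hot")
print(one_hot(tf.constant(["a", "c"])))
```
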
  • tf.lite:

    • The recommended Android NDK version for building TensorFlow Lite has been changed from r18b to r19c.
    • Supports int64 for mul.
    • Supports native variable builtin ops - ReadVariable, AssignVariable.
    • Converter:
      • Experimental support for variables in TFLite. To enable through conversion, users need to set experimental_enable_resource_variables on tf.lite.TFLiteConverter to True. Note: mutable variables are only available using from_saved_model in this release; support for other methods is coming soon (a short sketch follows below).
      • The old converter (TOCO) will be removed in the next release. It has been deprecated for a few releases already.
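
A hedged sketch of the converter flag described above; the SavedModel path is a placeholder and assumes a model that uses tf.Variable.

```python
import tensorflow as tf

# Mutable variables are only supported via from_saved_model in this release.
converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/my_saved_model")  # placeholder path
converter.experimental_enable_resource_variables = True
tflite_model = converter.convert()
```
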
  • tf.saved_model:

    • SavedModels can now save custom gradients. Use the option tf.saved_model.SaveOptions(experimental_custom_gradients=True) to enable this feature (a short sketch follows below). The documentation in Advanced autodiff has been updated.
    • Object metadata has now been deprecated and no longer saved to the SavedModel.
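
A minimal sketch of saving a custom gradient with the new option; the module, gradient function, and path below are made up for illustration.

```python
import tensorflow as tf

@tf.custom_gradient
def custom_square(x):
    def grad(dy):
        return dy * 2.0 * x
    return x * x, grad

class Square(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
    def __call__(self, x):
        return custom_square(x)

# experimental_custom_gradients=True tells SavedModel to record the custom gradient.
tf.saved_model.save(
    Square(), "/tmp/square_model",  # placeholder path
    options=tf.saved_model.SaveOptions(experimental_custom_gradients=True),
)
```
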
  • TF Core:

    • Added tf.config.experimental.reset_memory_stats to reset the tracked peak memory returned by tf.config.experimental.get_memory_info.
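
A small sketch pairing the new reset call with get_memory_info; it assumes a visible device named "GPU:0".

```python
import tensorflow as tf

# Peak memory is tracked per device; "GPU:0" is assumed to exist here.
info = tf.config.experimental.get_memory_info("GPU:0")
print(info["current"], info["peak"])

# Reset the tracked peak so later get_memory_info calls report a fresh peak.
tf.config.experimental.reset_memory_stats("GPU:0")
```
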
  • tf.data:

    • Added target_workers param to data_service_ops.from_dataset_id and data_service_ops.distribute. Users can specify "AUTO", "ANY", or "LOCAL" (case insensitive). If "AUTO", the tf.data service runtime decides which workers to read from. If "ANY", TF workers read from any tf.data service workers. If "LOCAL", TF workers will only read from local in-process tf.data service workers. "AUTO" works well for most cases, while users can specify other targets. For example, "LOCAL" would help avoid RPCs and data copies if every TF worker is colocated with a tf.data service worker. Currently, "AUTO" reads from any tf.data service workers to preserve existing behavior. The default value is "AUTO".
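
A hedged sketch of passing target_workers through the public tf.data service API; the dispatcher address is a placeholder and assumes a tf.data service is already running.

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(100)
dataset = dataset.apply(
    tf.data.experimental.service.distribute(
        processing_mode="parallel_epochs",
        service="grpc://dispatcher.example:5000",  # placeholder dispatcher address
        target_workers="LOCAL",  # only read from co-located tf.data service workers
    )
)
```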

... (truncated)

Changelog

Sourced from tensorflow's changelog.

Release 2.6.0

Breaking Changes

  • tf.train.experimental.enable_mixed_precision_graph_rewrite is removed, as the API only works in graph mode and is not customizable. The function is still accessible under tf.compat.v1.mixed_precision.enable_mixed_precision_graph_rewrite, but it is recommended to use the Keras mixed precision API instead.

  • tf.lite:

    • Remove experimental.nn.dynamic_rnn, experimental.nn.TfLiteRNNCell and experimental.nn.TfLiteLSTMCell since they're no longer supported. It's recommended to use the Keras LSTM layer instead.
  • Keras has been split into a separate PIP package (keras), and its code has been moved to the GitHub repository keras-team/keras. The API endpoints for tf.keras stay unchanged, but are now backed by the keras PIP package. The existing code in tensorflow/python/keras is a stale copy and will be removed in a future release (2.7). Please remove any imports of tensorflow.python.keras and replace them with the public tf.keras API instead.

  • Modular File System Migration

    • Support for the S3 and HDFS file systems has been migrated to modular file systems and is now available in https://github.com/tensorflow/io. The tensorflow-io Python package should be installed for S3 and HDFS support with TensorFlow.


Known Caveats


  • TF Core:
    • A longstanding bug in tf.while_loop, which caused it to execute sequentially, even when parallel_iterations>1, has now been fixed. However, the increased parallelism may result in increased memory use.

... (truncated)

Commits
  • 919f693 Merge pull request #51398 from tensorflow-jenkins/version-numbers-2.6.0-30580
  • 9752e10 Update version numbers to 2.6.0
  • 421ef70 Merge pull request #51397 from tensorflow/update-version-numbers
  • 662740b Update keras and estimator deps
  • 093800c Merge pull request #51396 from bmd3k/cherrypicks_4ENL2
  • baa3136 Update tensorboard dependency to 2.6.x and and tb-nightly dependency to 2.7.x.
  • 274c83b Merge pull request #51360 from tensorflow/mm-update-relnotes-on-r2.6
  • 6f80b7d Put CVE numbers for fixes in parentheses
  • 2743ff9 Update release notes with the security updates.
  • a10858d Merge pull request #51293 from tensorflow/mm-cherrypick-23d6383eb6c14084a8fc3...
  • Additional commits viewable in compare view


Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)