uber-research / deep-neuroevolution

Deep Neuroevolution

Bump tensorflow from 0.12.1 to 2.6.4 #52

Closed · dependabot[bot] closed this 2 years ago

dependabot[bot] commented 2 years ago

Bumps tensorflow from 0.12.1 to 2.6.4.
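
A quick way to verify the bump after checking out this branch is to inspect the installed version at runtime. The snippet below is a minimal sketch, not part of this PR, assuming TensorFlow is importable in the project's environment and that the `packaging` library is available.

```python
# Sanity-check sketch (not from this PR): confirm the environment is on the
# bumped TensorFlow version rather than the old 0.12.x pin.
import tensorflow as tf
from packaging import version  # assumed to be installed

assert version.parse(tf.__version__) >= version.parse("2.6.4"), (
    f"expected tensorflow >= 2.6.4, found {tf.__version__}"
)
print("tensorflow", tf.__version__)
```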

Release notes

Sourced from tensorflow's releases.

TensorFlow 2.6.4

Release 2.6.4

This release introduces several vulnerability fixes:

TensorFlow 2.6.3

Release 2.6.3

This release introduces several vulnerability fixes:

  • Fixes a floating point division by 0 when executing convolution operators (CVE-2022-21725)
  • Fixes a heap OOB read in shape inference for ReverseSequence (CVE-2022-21728)
  • Fixes a heap OOB access in Dequantize (CVE-2022-21726)
  • Fixes an integer overflow in shape inference for Dequantize (CVE-2022-21727)
  • Fixes a heap OOB access in FractionalAvgPoolGrad (CVE-2022-21730)
  • Fixes an overflow and divide by zero in UnravelIndex (CVE-2022-21729)
  • Fixes a type confusion in shape inference for ConcatV2 (CVE-2022-21731)
  • Fixes an OOM in ThreadPoolHandle (CVE-2022-21732)
  • Fixes an OOM due to integer overflow in StringNGrams (CVE-2022-21733)
  • Fixes more issues caused by incomplete validation in boosted trees code (CVE-2021-41208)
  • Fixes integer overflows in most sparse component-wise ops (CVE-2022-23567)
  • Fixes integer overflows in AddManySparseToTensorsMap (CVE-2022-23568)
  • Fixes a number of CHECK-failures in MapStage (CVE-2022-21734)

... (truncated)

Changelog

Sourced from tensorflow's changelog.

Release 2.6.4

This release introduces several vulnerability fixes:

Release 2.8.0

Major Features and Improvements

  • tf.lite:

    • Added TFLite builtin op support for the following TF ops:
      • tf.raw_ops.Bucketize op on CPU.
      • tf.where op for data types tf.int32/tf.uint32/tf.int8/tf.uint8/tf.int64.
      • tf.random.normal op for output data type tf.float32 on CPU.
      • tf.random.uniform op for output data type tf.float32 on CPU.
      • tf.random.categorical op for output data type tf.int64 on CPU.
  • tensorflow.experimental.tensorrt:

    • conversion_params is now deprecated inside TrtGraphConverterV2 in favor of direct arguments: max_workspace_size_bytes, precision_mode, minimum_segment_size, maximum_cached_engines, use_calibration and

... (truncated)
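
As an illustration of the tf.lite entries in the changelog excerpt above, here is a minimal sketch, assuming a TensorFlow 2.8+ install, that converts a small `tf.function` using `tf.random.uniform` with `tf.float32` output (one of the newly listed builtin ops) and runs it through the TFLite interpreter. The `add_noise` function is hypothetical and not from this repository.

```python
# Sketch: exercise one of the newly listed TFLite builtin ops
# (tf.random.uniform with float32 output). Assumes TensorFlow >= 2.8.
import tensorflow as tf

@tf.function(input_signature=[tf.TensorSpec(shape=[4], dtype=tf.float32)])
def add_noise(x):  # hypothetical example function, not from this repo
    # RandomUniform should now lower to a TFLite builtin op on CPU.
    return x + tf.random.uniform(tf.shape(x), dtype=tf.float32)

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [add_noise.get_concrete_function()]
)
tflite_model = converter.convert()

# Run the converted model through the TFLite interpreter as a smoke test.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], tf.zeros([4], tf.float32).numpy())
interpreter.invoke()
print(interpreter.get_tensor(interpreter.get_output_details()[0]["index"]))
```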
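
For the `tensorflow.experimental.tensorrt` note, a rough sketch of the direct-argument style follows; the argument names are taken from the changelog entry, while the saved-model paths are hypothetical and a TensorRT-enabled TensorFlow build is assumed.

```python
# Sketch of passing conversion settings as direct arguments instead of
# conversion_params (per the changelog excerpt). Requires a TensorRT-enabled
# TensorFlow build; the saved-model paths below are hypothetical.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="/tmp/my_saved_model",
    precision_mode=trt.TrtPrecisionMode.FP16,
    max_workspace_size_bytes=1 << 30,
    minimum_segment_size=3,
    maximum_cached_engines=1,
    use_calibration=False,
)
converter.convert()
converter.save("/tmp/my_trt_saved_model")
```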

Commits
  • 33ed2b1 Merge pull request #56102 from tensorflow/mihaimaruseac-patch-1
  • e1ec480 Fix build due to importlib-metadata/setuptools
  • 63f211c Merge pull request #56033 from tensorflow-jenkins/relnotes-2.6.4-6677
  • 22b8fe4 Update RELEASE.md
  • ec30684 Merge pull request #56070 from tensorflow/mm-cp-adafb45c781-on-r2.6
  • 38774ed Merge pull request #56060 from yongtang:curl-7.83.1
  • 9ef1604 Merge pull request #56036 from tensorflow-jenkins/version-numbers-2.6.4-9925
  • a6526a3 Update version numbers to 2.6.4
  • cb1a481 Update RELEASE.md
  • 4da550f Insert release notes place-fill
  • Additional commits viewable in compare view


Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
  • `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
  • `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
  • `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language

You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/uber-research/deep-neuroevolution/network/alerts).

CLAassistant commented 2 years ago

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
Have you already signed the CLA but the status is still pending? Let us recheck it.

dependabot[bot] commented 2 years ago

Superseded by #53.