Added TFLite builtin op support for the following TF ops:
tf.raw_ops.Bucketize op on CPU.
tf.where op for data types tf.int32/tf.uint32/tf.int8/tf.uint8/tf.int64.
tf.random.normal op for output data type tf.float32 on CPU.
tf.random.uniform op for output data type tf.float32 on CPU.
tf.random.categorical op for output data type tf.int64 on CPU.
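Of the ops above, Bucketize has the simplest semantics and is easy to picture. The following is a pure-Python sketch of what tf.raw_ops.Bucketize computes (illustrative only, not the TFLite kernel): each value is mapped to the index of the bucket it falls into, where bucket i holds values v with boundaries[i-1] <= v < boundaries[i].

```python
from bisect import bisect_right

def bucketize(values, boundaries):
    """Pure-Python model of tf.raw_ops.Bucketize semantics:
    values below boundaries[0] map to bucket 0, values at or above
    the last boundary map to bucket len(boundaries)."""
    return [bisect_right(boundaries, v) for v in values]

# A value equal to a boundary falls into the higher bucket.
print(bucketize([-5, 5, 10, 150], [0, 10, 100]))  # → [0, 1, 2, 3]
```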
tensorflow.experimental.tensorrt:
conversion_params is now deprecated inside TrtGraphConverterV2 in favor of direct arguments: max_workspace_size_bytes, precision_mode, minimum_segment_size, maximum_cached_engines, use_calibration and allow_build_at_runtime.
Added a new parameter called save_gpu_specific_engines to the .save() function inside TrtGraphConverterV2. When False, the .save() function won't save any TRT engines that have been built. When True (default), the original behavior is preserved.
TrtGraphConverterV2 provides a new API called .summary() which outputs a summary of the inference graph converted by TF-TRT. Specifically, it shows each TRTEngineOp together with the shapes and dtypes of its inputs and outputs. A detailed version of the summary is also available, which additionally prints all the TensorFlow ops included in each TRTEngineOp.
tf.tpu.experimental.embedding:
tf.tpu.experimental.embedding.FeatureConfig now takes an additional argument output_shape which can specify the shape of the output activation for the feature.
tf.tpu.experimental.embedding.TPUEmbedding now has the same behavior as tf.tpu.experimental.embedding.serving_embedding_lookup, which can take dense and sparse tensors of arbitrary rank. For ragged tensors, although the input tensor is still rank 2, the activations can now be rank 2 or higher by specifying the output shape in the feature config or via the build method.
Added tf.config.experimental.enable_op_determinism, which makes TensorFlow ops run deterministically at the cost of performance. This replaces the TF_DETERMINISTIC_OPS environment variable, which is now deprecated. The "Bug Fixes and Other Changes" section lists more determinism-related changes.
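One reason deterministic op implementations cost performance: parallel reductions accumulate terms in whatever order threads finish, and floating-point addition is not associative, so a fixed (slower) accumulation order is needed for bit-identical results. A minimal pure-Python illustration of the underlying non-associativity (no TensorFlow required):

```python
# Floating-point addition is not associative, so a parallel reduction
# whose accumulation order varies between runs can produce different
# results run to run. Deterministic kernels fix the order instead.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c
right = a + (b + c)
print(left == right)  # → False
print(left, right)    # two slightly different sums
```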
The parallel_batch optimization is now enabled by default unless disabled by the user; it parallelizes the copying of batch elements.
Added the ability for TensorSliceDataset to identify and handle inputs that are files. This enables creating hermetic SavedModels when using datasets created from files.
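The idea behind parallel_batch can be sketched in plain Python: instead of copying the elements of a batch one after another, the copies are dispatched concurrently. This is an illustrative sketch using a thread pool, not tf.data's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor
import copy

def batch_sequential(elements):
    # Baseline: copy each batch element one after another.
    return [copy.deepcopy(e) for e in elements]

def batch_parallel(elements, workers=4):
    # parallel_batch idea: copy batch elements concurrently.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(copy.deepcopy, elements))

batch = [{"x": list(range(100))} for _ in range(8)]
# Both strategies produce the same batch; only the copy order differs.
assert batch_parallel(batch) == batch_sequential(batch)
```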
tf.lite:
Adds GPU delegate serialization support to the Java API. This speeds up initialization by up to 90% when OpenCL is available.
Deprecated Interpreter::SetNumThreads, in favor of InterpreterBuilder::SetNumThreads.
tf.keras:
Adds tf.compat.v1.keras.utils.get_or_create_layer to aid migration to TF2 by enabling tracking of nested Keras models created in TF1 style when used with the tf.compat.v1.keras.utils.track_tf1_style_variables decorator.
Added a tf.keras.layers.experimental.preprocessing.HashedCrossing layer which applies the hashing trick to the concatenation of crossed scalar inputs. This provides a stateless way to try adding feature crosses of integer or string data to a model.
Removed keras.layers.experimental.preprocessing.CategoryCrossing. Users should migrate to the HashedCrossing layer or use tf.sparse.cross/tf.ragged.cross directly.
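The "hashing trick" that HashedCrossing applies can be sketched in a few lines of pure Python: the crossed inputs are combined into one key and hashed into a fixed number of bins, so no vocabulary or other state is needed. This is an illustrative sketch only (the separator and hash function are assumptions; the Keras layer uses its own hashing).

```python
import hashlib

def hashed_crossing(features, num_bins):
    """Stateless feature cross via the hashing trick: join the scalar
    inputs into one key and hash it into one of num_bins buckets.
    Uses SHA-256 for a deterministic hash; illustrative only."""
    key = "_X_".join(str(f) for f in features)
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % num_bins

# Crossing an integer feature with a string feature into 10 buckets:
bucket = hashed_crossing([3, "red"], num_bins=10)
assert 0 <= bucket < 10
```

Because the mapping is a pure function of the inputs, the same cross always lands in the same bucket, which is what makes the layer stateless.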
Added additional standardize and split modes to TextVectorization:
standardize="lower" will lowercase inputs.
standardize="string_punctuation" will remove all punctuation.
split="character" will split on every unicode character.
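The semantics of the three new modes can be pictured with plain Python string operations. This is a sketch of the behavior described above, not the Keras implementation (which operates on tensors):

```python
import string

def standardize_lower(text):
    # standardize="lower": lowercase only, punctuation kept.
    return text.lower()

def standardize_strip_punctuation(text):
    # standardize="string_punctuation": remove punctuation, case kept.
    return text.translate(str.maketrans("", "", string.punctuation))

def split_character(text):
    # split="character": one token per Unicode character.
    return list(text)

print(standardize_lower("Hello, World!"))             # → hello, world!
print(standardize_strip_punctuation("Hello, World!")) # → Hello World
print(split_character("héllo"))                       # → ['h', 'é', 'l', 'l', 'o']
```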
Added an output_mode argument to the Discretization and Hashing layers with the same semantics as other preprocessing layers. All categorical preprocessing layers now support output_mode.
All preprocessing layer outputs will follow the compute dtype of a tf.keras.mixed_precision.Policy, unless constructed with output_mode="int", in which case the output will be tf.int64. The output type of any preprocessing layer can be controlled individually by passing a dtype argument to the layer.
tf.random.Generator for keras initializers and all RNG code.
Added 3 new APIs to enable, disable, and check the usage of tf.random.Generator in the Keras backend, which will be the new backend for all RNG in Keras. We plan to switch to the new code path by default in TF 2.8, and this behavior change will likely cause some breakage on the user side (e.g. if a test checks against a golden number). These 3 APIs allow users to disable the new behavior and switch back to the legacy one if they prefer. In the future (e.g. TF 2.10), we expect to remove the legacy code path (stateful random ops) entirely, and these 3 APIs will be removed as well.
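Why golden-number tests break when the RNG backend changes: two seeded generators with different algorithms are each perfectly reproducible, yet disagree with each other. A pure-Python illustration using Python's Mersenne Twister and a small linear congruential generator as stand-ins for the "new" and "legacy" backends (both stand-ins are assumptions for illustration; neither is TensorFlow's actual RNG):

```python
import random

def lcg_sequence(seed, n, a=1664525, c=1013904223, m=2**32):
    # Minimal linear congruential generator, standing in for a
    # "legacy" RNG backend.
    out, state = [], seed
    for _ in range(n):
        state = (a * state + c) % m
        out.append(state / m)
    return out

def mt_sequence(seed, n):
    # Python's Mersenne Twister, standing in for a "new" RNG backend.
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

# Each backend is reproducible for a fixed seed...
assert mt_sequence(42, 3) == mt_sequence(42, 3)
assert lcg_sequence(42, 3) == lcg_sequence(42, 3)
# ...but the two backends disagree, so a test checking outputs against
# golden numbers recorded under one backend fails under the other.
assert mt_sequence(42, 3) != lcg_sequence(42, 3)
```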
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/quantylab/rltrader/network/alerts).
Bumps tensorflow from 2.7.0 to 2.8.0.
Release notes
Sourced from tensorflow's releases.
... (truncated)
Changelog
Sourced from tensorflow's changelog.
... (truncated)
Commits
3f878cf Merge pull request #54226 from tensorflow-jenkins/version-numbers-2.8.0-22199
54307e6 Update version numbers to 2.8.0
2f2bdd2 Merge pull request #54193 from tensorflow/update-release-notes
97e2f16 Update release notes with security advisories/updates
93e224e Merge pull request #54182 from tensorflow/cherrypick-93323537ac0581a88af827af...
14defd0 Bump ICU to 69.1 to handle CVE-2020-10531
0a20763 Merge pull request #54159 from tensorflow/cherrypick-b1756cf206fc4db86f05c420...
b7ecb36 Bump the maximum threshold before erroring
e542736 Merge pull request #54123 from terryheo/windows-fix-r2.8
8dd07bd lite: Update Windows tensorflowlite_flex.dll build