kjappelbaum / pyepal

Multiobjective active learning with tunable accuracy/efficiency tradeoff and clear stopping criterion.
Apache License 2.0

chore(deps-dev): update tensorflow requirement from <2.9,>=2.5 to >=2.5,<2.12 #264

Closed. dependabot[bot] closed this 1 year ago.

dependabot[bot] commented 1 year ago

Updates the requirements on tensorflow to permit the latest version.
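For reference, this relaxes the upper bound of the dev dependency from <2.9 to <2.12. A minimal sketch of the change, assuming a setup.py-style extras_require section (the exact file and section used in pyepal may differ):

```python
# Hypothetical sketch of the widened dev-dependency pin; where this constraint
# actually lives in pyepal (setup.py, setup.cfg, or pyproject.toml) may differ.
extras_require = {
    "dev": [
        "tensorflow>=2.5,<2.12",  # previously: "tensorflow>=2.5,<2.9"
    ],
}
```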

Release notes

Sourced from tensorflow's releases.

TensorFlow 2.11.0

Release 2.11.0

Breaking Changes

  • The tf.keras.optimizers.Optimizer base class now points to the new Keras optimizer, while the old optimizers have been moved to the tf.keras.optimizers.legacy namespace.

    If you find your workflow failing due to this change, you may be facing one of the following issues:

    • Checkpoint loading failure. The new optimizer handles optimizer state differently from the old optimizer, which simplifies the logic of checkpoint saving/loading, but at the cost of breaking checkpoint backward compatibility in some cases. If you want to keep using an old checkpoint, please change your optimizer to tf.keras.optimizers.legacy.XXX (e.g. tf.keras.optimizers.legacy.Adam); see the sketch after this list.
    • TF1 compatibility. The new optimizer, tf.keras.optimizers.Optimizer, does not support TF1 any more, so please use the legacy optimizer tf.keras.optimizers.legacy.XXX. We highly recommend migrating your workflow to TF2 for stable support and new features.
    • Old optimizer API not found. The new optimizer, tf.keras.optimizers.Optimizer, has a different set of public APIs from the old optimizer. These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API documentation to find alternatives to the missing API. If you must call the deprecated API, please change your optimizer to the legacy optimizer.
    • Learning rate schedule access. When using a tf.keras.optimizers.schedules.LearningRateSchedule, the new optimizer's learning_rate property returns the current learning rate value instead of a LearningRateSchedule object as before. If you need to access the LearningRateSchedule object, please use optimizer._learning_rate.
    • If you implemented a custom optimizer based on the old optimizer, please set your optimizer to subclass tf.keras.optimizers.legacy.XXX. If you want to migrate to the new optimizer and find it does not support your optimizer, please file an issue in the Keras GitHub repo.
    • Errors, such as Cannot recognize variable.... The new optimizer requires all optimizer variables to be created at the first apply_gradients() or minimize() call. If your workflow calls the optimizer to update different parts of the model in multiple stages, please call optimizer.build(model.trainable_variables) before the training loop.
    • Timeout or performance loss. We don't anticipate this to happen, but if you see such issues, please use the legacy optimizer, and file an issue in the Keras GitHub repo.

    The old Keras optimizer will never be deleted, but will not see any new feature additions. New optimizers (for example, tf.keras.optimizers.Adafactor) will only be implemented based on the new tf.keras.optimizers.Optimizer base class.

  • tensorflow/python/keras code is a legacy copy of Keras since the TensorFlow v2.7 release, and will be deleted in the v2.12 release. Please remove any import of tensorflow.python.keras and use the public API with from tensorflow import keras or import tensorflow as tf; tf.keras.
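For anyone hit by the optimizer changes above, here is a minimal sketch of the two workarounds mentioned in the list (switching to the legacy namespace, and building the new optimizer's variables up front); the toy model below is a placeholder, not pyepal code:

```python
import tensorflow as tf

# Placeholder model for illustration only.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# Option 1: keep the old behaviour (e.g. to reload an old checkpoint)
# by switching to the legacy namespace.
legacy_opt = tf.keras.optimizers.legacy.Adam(learning_rate=1e-3)

# Option 2: stay on the new optimizer, but create all optimizer variables
# before a multi-stage training loop.
new_opt = tf.keras.optimizers.Adam(learning_rate=1e-3)
new_opt.build(model.trainable_variables)
```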

Major Features and Improvements

  • tf.lite:

    • New operations supported: tf.math.unsorted_segment_sum, tf.atan2 and tf.sign.
    • Updates to existing operations:
      • tfl.mul now supports complex32 inputs.
  • tf.experimental.StructuredTensor:

    • Introduced tf.experimental.StructuredTensor, which provides a flexible and TensorFlow-native way to encode structured data such as protocol buffers or pandas dataframes.
  • tf.keras:

    • Added a new get_metrics_result() method to tf.keras.models.Model.
      • Returns the current metrics values of the model as a dict.
    • Added a new group normalization layer - tf.keras.layers.GroupNormalization.
    • Added weight decay support for all Keras optimizers via the weight_decay argument (see the sketch after these release notes).
    • Added the Adafactor optimizer - tf.keras.optimizers.Adafactor.
    • Added warmstart_embedding_matrix to tf.keras.utils.
      • This utility can be used to warmstart an embedding matrix, so you reuse previously-learned word embeddings when working with a new set of words which may include previously unseen words (the embedding vectors for unseen words will be randomly initialized).
  • tf.Variable:

    • Added CompositeTensor as a base class to ResourceVariable.
      • This allows tf.Variables to be nested in tf.experimental.ExtensionTypes.
    • Added a new constructor argument experimental_enable_variable_lifting to tf.Variable, defaulting to True.
      • When it's set to False, the variable won't be lifted out of tf.function; thus it can be used as a tf.function-local variable: during each execution of the tf.function, the variable will be created and then disposed, similar to a local (that is, stack-allocated) variable in C/C++. Currently, experimental_enable_variable_lifting=False only works on non-XLA devices (for example, under @tf.function(jit_compile=False)).

... (truncated)
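To make the tf.keras additions above concrete, a short hedged sketch using GroupNormalization, the weight_decay argument, and get_metrics_result(); the data and model are toy placeholders, not pyepal code:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.GroupNormalization(groups=4),  # new layer in 2.11
    tf.keras.layers.Dense(1),
])

# weight_decay is now accepted directly by the (new) Keras optimizers.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3, weight_decay=1e-4)
model.compile(optimizer=optimizer, loss="mse", metrics=["mae"])

x = np.random.rand(32, 8).astype("float32")
y = np.random.rand(32, 1).astype("float32")
model.fit(x, y, epochs=1, verbose=0)

# New in 2.11: current metric values as a dict.
print(model.get_metrics_result())
```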

Changelog

Sourced from tensorflow's changelog.

Release 2.11.0

Breaking Changes

  • tf.keras.optimizers.Optimizer now points to the new Keras optimizer, and old optimizers have moved to the tf.keras.optimizers.legacy namespace. If you find your workflow failing due to this change, you may be facing one of the following issues:

    • Checkpoint loading failure. The new optimizer handles optimizer state differently from the old optimizer, which simplifies the logic of checkpoint saving/loading, but at the cost of breaking checkpoint backward compatibility in some cases. If you want to keep using an old checkpoint, please change your optimizer to tf.keras.optimizers.legacy.XXX (e.g. tf.keras.optimizers.legacy.Adam).
    • TF1 compatibility. The new optimizer does not support TF1 any more, so please use the legacy optimizer tf.keras.optimizers.legacy.XXX. We highly recommend migrating your workflow to TF2 for stable support and new features.
    • API not found. The new optimizer has a different set of public APIs from the old optimizer. These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API documentation to find alternatives to the missing API. If you must call the deprecated API, please change your optimizer to the legacy optimizer.
    • Learning rate schedule access. When using a LearningRateSchedule, the new optimizer's learning_rate property returns the current learning rate value instead of a LearningRateSchedule object as before. If you need to access the LearningRateSchedule object, please use optimizer._learning_rate.
    • If you implemented a custom optimizer based on the old optimizer, please set your optimizer to subclass tf.keras.optimizers.legacy.XXX. If you want to migrate to the new optimizer and find it does not support your optimizer, please file an issue in the Keras GitHub repo.
    • Errors such as Cannot recognize variable.... The new optimizer requires all optimizer variables to be created at the first apply_gradients() or minimize() call. If your workflow calls the optimizer to update different parts of the model in multiple stages, please call optimizer.build(model.trainable_variables) before the training loop.
    • Performance regression on ParameterServerStrategy. This could be significant if you have many PS servers. We are aware of this issue and are working on fixes; for now, we suggest using the legacy optimizers when using ParameterServerStrategy.
    • Timeout or performance loss. We don't anticipate this to happen, but if you see such issues, please use the legacy optimizer, and file an issue in the Keras GitHub repo.

    The old Keras optimizer will never be deleted, but will not see any

... (truncated)

Commits


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
dependabot[bot] commented 1 year ago

OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting @dependabot ignore this major version or @dependabot ignore this minor version. You can also ignore all major, minor, or patch releases for a dependency by adding an ignore condition with the desired update_types to your config file.

If you change your mind, just re-open this PR and I'll resolve any conflicts on it.