Open seanpmorgan opened 1 year ago
As with everything else on this community led repository... please share your thoughts, concerns, etc. so we can facilitate a healthy discussion.
@seanpmorgan yeah this is a hard decision but we always wanted to reduce those overlaps, and this makes sense. I just want to take a moment to thank you, the other maintainers, and the community members for their efforts! 🍻
First, I wanted to thank the maintainers of TensorFlow Addons for their job, I have been using TF Addons happily for several years now! Thanks!
I use the following functionality of TF Addons that are currently not in any TF/Keras package, as far as I know:
- LazyAdam: for NLP tasks with large embedding matrices and small batch sizes, LazyAdam seems to deliver better performance than Adam (measured across ~5 NLU tasks). The ideal approach here is probably to add support for a `lazy=False/True` option directly to the `tf.keras.optimizers`, if the Keras authors would agree.
- the seq2seq module: Keras NLP has `keras_nlp.samplers`, which provides the decoding; but it was nice having the `BasicDecoder` together with prepared attention mechanisms. Maybe there are plans to add it to `keras_nlp`.
- the CRF layer: I am using `tfa.text.crf_log_likelihood` and `tfa.text.crf_decode` in a few projects; hopefully some kind of CRF will be added to `keras_nlp` too.
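Since LazyAdam came up: the difference from plain Adam is easiest to see in a small sketch. The NumPy snippet below is illustrative only (not the TFA implementation): with lazy updates, only the embedding rows that actually received gradients in a batch have their Adam moment estimates and weights updated, while every other row is left untouched.

```python
import numpy as np

def lazy_adam_step(params, m, v, grad_rows, grads, lr=0.001,
                   beta1=0.9, beta2=0.999, eps=1e-7, t=1):
    """One Adam step that, in the spirit of tfa.optimizers.LazyAdam,
    touches only the rows listed in `grad_rows`. Rows with no gradient
    keep their moment estimates (and weights) unchanged, which is the
    point when the parameter is a large, sparsely used embedding matrix."""
    for row, g in zip(grad_rows, grads):
        m[row] = beta1 * m[row] + (1 - beta1) * g
        v[row] = beta2 * v[row] + (1 - beta2) * g * g
        # Bias-corrected estimates, as in standard Adam.
        m_hat = m[row] / (1 - beta1 ** t)
        v_hat = v[row] / (1 - beta2 ** t)
        params[row] -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return params, m, v
```

Dense Adam would instead decay `m` and `v` for every row on every step; that decay on untouched rows is exactly what the lazy variant skips.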
Sorry, I am a newbie, but could "InstanceNormalization" please be put into the normal tf.keras layers? Would that be okay?
@ayaderaghul
https://www.tensorflow.org/api_docs/python/tf/keras/layers/GroupNormalization

> Relation to Instance Normalization: If the number of groups is set to the input dimension (number of groups is equal to number of channels), then this operation becomes identical to Instance Normalization.
So, does that mean that in the case of a color image (number of channels = 3), we set the number of groups to 3?
@ayaderaghul that's what it sounds like to me.
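For what it's worth, the equivalence the docs describe is easy to check numerically. Below is a minimal NumPy sketch (channels-last NHWC layout assumed; the `group_norm` and `instance_norm` helpers are illustrative, not the Keras implementation) showing that group normalization with the number of groups equal to the number of channels reduces to instance normalization:

```python
import numpy as np

def group_norm(x, groups, eps=1e-5):
    """Group normalization over an NHWC tensor: split the channels into
    `groups` groups and normalize each (sample, group) independently."""
    n, h, w, c = x.shape
    xg = x.reshape(n, h, w, groups, c // groups)
    mean = xg.mean(axis=(1, 2, 4), keepdims=True)
    var = xg.var(axis=(1, 2, 4), keepdims=True)
    return ((xg - mean) / np.sqrt(var + eps)).reshape(n, h, w, c)

def instance_norm(x, eps=1e-5):
    """Instance normalization: normalize each (sample, channel) over H, W."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

# For an RGB image batch (C = 3), groups = 3 makes the two identical.
x = np.random.default_rng(0).normal(size=(2, 4, 4, 3))
assert np.allclose(group_norm(x, groups=3), instance_norm(x))
```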
I have a project using pix2pix from tensorflow_examples, similar to this tutorial: https://www.tensorflow.org/tutorials/generative/cyclegan

In this project, the model is pix2pix with InstanceNorm:

```python
generator_g = pix2pix.unet_generator(OUTPUT_CHANNELS, norm_type='instancenorm')
```

When I train the model, save it, and then try to continue training, loading fails with an error saying something like "No InstanceNormalization". I have to install tensorflow_addons and pass InstanceNormalization as a custom object to load the model.
I have tried a few alternatives, but they don't work. Please help!
Is there a replacement for `tfa.optimizers.CyclicalLearningRate`?

Searching the internet turned up bckenstler/CLR, which is many years old, and its newer fork brianmanderson/Cyclical_Learning_Rate, which is also two years old, but no officially preferred solution.
CLR isn't that complicated, so if it's not part of any actively maintained package, I'll probably just grab a local copy of the class into my project tree.
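In case it is useful to anyone else doing the same, the triangular policy from the original CLR paper fits in a few lines. This is a plain-Python sketch of the schedule only, not a drop-in replacement for `tfa.optimizers.CyclicalLearningRate`:

```python
import numpy as np

def cyclical_lr(step, base_lr=1e-4, max_lr=1e-3, step_size=2000):
    """Triangular cyclical learning rate (Smith, 2017): the rate climbs
    linearly from base_lr to max_lr over `step_size` steps, then falls
    back to base_lr over the next `step_size` steps, and repeats."""
    cycle = np.floor(1 + step / (2 * step_size))
    x = np.abs(step / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)
```

To use it with a Keras optimizer, the value can be assigned to `optimizer.learning_rate` at the start of each batch (or wrapped in a `LearningRateSchedule` subclass).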
Will the public documentation indicate how to replace each TFA function with a Keras alternative? For example: the RectifiedAdam and Lookahead optimizers.
Is the MultiOptimizer available elsewhere?
Hi,
Is there any equivalent to dense_image_warp in KerasCV/Keras?
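In the meantime, the core of the operation is simple enough to sketch. Below is an illustrative single-image NumPy version (the name `dense_image_warp_np` is made up here; the real `tfa.image.dense_image_warp` is a batched, differentiable TF op): each output pixel samples the input at the location displaced backwards by the flow, with bilinear interpolation and edge clamping.

```python
import numpy as np

def dense_image_warp_np(image, flow):
    """Warp an HWC image by a per-pixel (dy, dx) flow field:
    output[y, x] = image[y - flow[y, x, 0], x - flow[y, x, 1]],
    sampled with bilinear interpolation, clamping at the borders."""
    h, w, _ = image.shape
    gy, gx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    qy = gy - flow[..., 0]          # query coordinates
    qx = gx - flow[..., 1]
    y0 = np.clip(np.floor(qy).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(qx).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    wy = np.clip(qy - y0, 0.0, 1.0)[..., None]  # interpolation weights
    wx = np.clip(qx - x0, 0.0, 1.0)[..., None]
    top = image[y0, x0] * (1 - wx) + image[y0, x1] * wx
    bot = image[y1, x0] * (1 - wx) + image[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

A zero flow returns the image unchanged, and a constant flow of 1.0 shifts interior pixels by one in both axes.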
TFA meant a lot to me. Thank you.
Dear contributors and users of TensorFlow Addons,
As many of you know, TensorFlow Addons (TFA) is a repository of community-maintained and contributed extensions for TensorFlow, first created in 2018 and maintained by the SIG-Addons community. Over the course of 4 years, 200 contributors have built the TFA repository into a community-owned and community-managed success that is being utilized by over 8,000 GitHub repositories according to our dependency graph. I’d like to take a moment to sincerely thank everyone involved as a contributor or community member for their efforts.
Recently, there has been increasing overlap in contributions and scope between TFA and the Keras-CV and Keras-NLP libraries. To prevent future overlap, we believe that new and existing addons to TensorFlow will be best maintained in Keras project repositories, where possible.
Decision to Wind Down TensorFlow Addons
We believe that it is in the best interest of the TensorFlow community to consolidate where TensorFlow extensions can be utilized, maintained and contributed. Because of this, it is bittersweet that we are announcing our plans to move TensorFlow Addons to a minimal maintenance and release mode.
TFA SIG Addons will be ending development and introduction of new features to this repository. TFA will be transitioning to a minimal maintenance and release mode for one year in order to give appropriate time for you to adjust any dependencies to the overlapping repositories in our TensorFlow community (Keras, Keras-CV, and Keras-NLP). Going forward, please consider contributing to the Keras-CV and Keras-NLP projects.
Background:
The original RFC proposal for TFA was dated 2018-12-14 with the stated goal of building a community managed repository for contributions that conform to well-established API patterns, but implement new functionality not available in core TensorFlow as defined in our Special Interest Group (SIG) charter.
As the years have progressed, new repositories with healthy contributor communities (Keras-CV, Keras-NLP, etc.) have been created with goals similar to ours, and the criteria for contribution acceptance overlap significantly (e.g. the number of required citations). Additionally, since Keras split out of core TensorFlow in 2020, the barrier for community contribution has been substantially lowered.
Understandably, there has been increasing ambiguity regarding where contributions should land and where they will be best maintained. Many features that are available in TFA are simultaneously available in other TensorFlow Community repositories. As just a few examples:
- Random Cutout: TFA & Keras-CV
- AdamW Optimizer: TFA & Keras
- Multihead Attention: TFA & Keras
As part of the original RFC, our Special Interest Group agreed to migrate code from tf.contrib and keras.contrib repositories. In doing so, TFA inherited C++ custom-ops, which made TFA a unique place in the TensorFlow community to contribute C++ custom ops to be built and distributed. However, we’ve recently helped in migrating much of that infrastructure to Keras-CV so that they can compile and distribute custom ops as they see fit.
What’s Next: