Closed: @dynamicwebpaige closed this issue 4 years ago.
For GSoC we probably need to think more generally about preprocessing. We still have NumPy-based operations in https://keras.io/preprocessing/image/. It also needs to be investigated whether there will be any integration between AutoAugment and AutoKeras.
@bhack, can you please elaborate on what you mean by this? Thanks.
Many image operations in Keras are still not in Addons or in tf.image, but are implemented in NumPy:
https://github.com/keras-team/keras-preprocessing/tree/master/keras_preprocessing/image
Are other image operations going into TF.IO or TF.graphics?
I think we need to unify image processing so that, sooner or later, ops can benefit from the compiler stack (MLIR & friends).
Also, about AutoKeras: could the policy be handled by AutoKeras, so that AutoAugment could be used more generally in other projects/experiments with AutoKeras + keras_preprocessing, instead of being embedded in EfficientNet?
I don't think AutoKeras handles policies as such. And if policies are just a series of operations, we could perhaps store them as named tuples. A parser (for lack of a better word) could then build an ImageAugmentation object from a policy. The methods of this class could use Keras preprocessing, tf.image, etc. It seems messy, though. Is this better than porting already-implemented image ops into tfa.image?
We could train models with various policies using this as a proof of concept.
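The policy-as-named-tuples idea above can be sketched roughly as follows. This is only an illustration under assumed names: `OpSpec`, `OPS`, and `parse_policy` are hypothetical, and the placeholder ops stand in for real backends such as tf.image or Keras preprocessing.

```python
import random
from collections import namedtuple

# Hypothetical policy entry: (op name, apply probability, magnitude).
OpSpec = namedtuple("OpSpec", ["name", "prob", "magnitude"])

# Registry mapping op names to implementations. These placeholders stand in
# for real backends (tf.image, Keras preprocessing, tfa.image, ...).
OPS = {
    "identity": lambda img, mag: img,
    "brightness": lambda img, mag: [min(255, p + 10 * mag) for p in img],
}

def parse_policy(policy, rng=None):
    """Return a callable that applies each op of the policy in order."""
    rng = rng or random
    def augment(img):
        for spec in policy:
            if rng.random() < spec.prob:
                img = OPS[spec.name](img, spec.magnitude)
        return img
    return augment

augment = parse_policy([OpSpec("brightness", 1.0, 2)])
print(augment([10, 250]))  # +20 per pixel, clipped at 255 -> [30, 255]
```

A stored policy is then just data (a list of named tuples), which could be serialized or swapped without touching the op implementations.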
AutoKeras was "recently" already integrated with Keras preprocessing: https://github.com/keras-team/autokeras/pull/922/files.
I agree that we can probably investigate what is needed to expand to AutoAugment (or other kinds of) policies.
But I think that for AutoAugmentation it is also important to have ops that can achieve maximum performance (including in scheduling), because you will not run these operations offline.
I'm interested in it, and I have benefited a lot from AutoAugment.
Looking at how KerasTuner handles hyperparameter containers, at first glance it seems like a suitable replacement for HParams: https://github.com/keras-team/keras-tuner/blob/master/kerastuner/engine/hyperparameters.py#L464
We initially decided not to move HParams when we left tf.contrib (even though it's convenient and works very well), since we didn't want to diverge from officially supported APIs.
Adding KerasTuner as a dependency has its own challenges, but I'm wondering if there is a better way to align with the ecosystem by using it.
@omalleyt12 @gabrieldemarmiesse Do you have any thoughts on re-using KerasTuner's HyperParameters object (likely overkill for what's needed in AutoAugment)? Or just general thoughts on how AutoAugment in Addons fits with the KerasTuner and Keras Preprocessing advances?
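To make the comparison concrete, here is a minimal stand-in for the kind of named-parameter container being discussed. This is NOT the real KerasTuner API, only a sketch of the interface shape an AutoAugment config would need (a `Choice`-style method that registers a parameter and returns its current value).

```python
# Minimal stand-in for a KerasTuner-style hyperparameter container.
# Hypothetical sketch only; the real KerasTuner HyperParameters class is
# far richer (conditional scopes, serialization, Int/Float ranges, ...).
class HyperParameters:
    def __init__(self):
        self.values = {}

    def Choice(self, name, options, default=None):
        # Register the parameter on first use; return the current value.
        if name not in self.values:
            self.values[name] = default if default is not None else options[0]
        return self.values[name]

hp = HyperParameters()
magnitude = hp.Choice("autoaugment_magnitude", list(range(11)), default=9)
print(magnitude)  # -> 9
```

A search tool would then overwrite `hp.values` between trials, while the augmentation code only ever reads through `Choice`.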
We also still have an AutoAugment policy variant in https://github.com/google-research/remixmatch
Should these augmentations be added as part of a submodule in addons?
I don't know. Probably. It has its own augmentation sub-"library" :smile: https://github.com/google-research/remixmatch/tree/master/libml It is quite common to find this kind of fragmentation across google-research repos.
/cc @carlini
The remixmatch repository is intended to faithfully reproduce the experiments of the corresponding ICLR'20 paper. We do not intend for this repository to be the source of truth for any particular implementation.
@seanpmorgan I don't think that we should depend on Keras-Tuner, except when implementing papers where the end result is a hyperparameter search algorithm.
For papers that use a search algorithm to produce a result, like AutoAugment and RandAugment, we should hardcode the final numbers here and here and make sure our API/architecture is modular enough for other people to plug it into a hyperparameter search algorithm.
In short, let's make it easy for users to change those values; it's up to them to plug our API into keras-tuner if they want.
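The "hardcoded defaults, pluggable everything" idea can be sketched like this. The class name, parameter names, and default values here are illustrative assumptions, not the Addons API; the point is only that the paper's final numbers become ordinary constructor defaults that any search tool can override.

```python
import random

# Sketch of a modular RandAugment-style API. The final numbers from the
# paper would be the hardcoded defaults, but everything is a plain
# constructor argument so any hyperparameter search tool can override it.
class RandAugment:
    def __init__(self, num_layers=2, magnitude=10, ops=None):
        self.num_layers = num_layers
        self.magnitude = magnitude
        # Ops are injectable, so users can swap in their own transforms.
        self.ops = ops or [lambda img, mag: img]

    def __call__(self, img, rng):
        # Apply num_layers randomly chosen ops at the shared magnitude.
        for _ in range(self.num_layers):
            op = rng.choice(self.ops)
            img = op(img, self.magnitude)
        return img

aug = RandAugment(num_layers=3, magnitude=5,
                  ops=[lambda img, mag: img + mag])
print(aug(0, random.Random(0)))  # three layers of +5 -> 15
```

A user doing a search would simply construct `RandAugment(num_layers=hp_n, magnitude=hp_m)` per trial; the class itself never depends on keras-tuner.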
@gabrieldemarmiesse What do you think about the ReMixMatch AutoAugment variant that learns the policy during training? https://github.com/google-research/remixmatch/blob/master/libml/augment.py
@bhack could you expand? I'm not sure I understand your question.
@gabrieldemarmiesse I meant: how could we organize policies in general? I think that even with just a first AutoAugment variant, we would need to organize policies a little bit.
ReMixMatch's augmentation policy (CTA) is slightly different from standard augmentation policies because it needs to be integrated with the training loop. At every minibatch step, the policy needs to be "trained" with a second minibatch of examples so that it can determine the magnitude of perturbations that are allowed.
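The per-minibatch "policy training" can be sketched in a few lines. This is a rough illustration of the CTAugment idea from ReMixMatch, not the paper's code: each magnitude bin carries a weight that decays toward a match score (how close the model's prediction on the augmented example was to the label), and only bins whose weight stays above a threshold may be sampled. The constants `DECAY` and `threshold` are illustrative.

```python
# Rough sketch of a CTAugment-style weight update (ReMixMatch).
DECAY = 0.99  # exponential moving average factor (illustrative value)

def update_bin(weights, bin_idx, match_score):
    # Move the bin's weight toward the match score in [0, 1]; scores near 0
    # mean this magnitude hurt the model's predictions.
    weights[bin_idx] = DECAY * weights[bin_idx] + (1 - DECAY) * match_score
    return weights

def allowed_bins(weights, threshold=0.8):
    # Only magnitudes whose bins stayed above the threshold may be sampled.
    return [i for i, w in enumerate(weights) if w > threshold]

weights = [1.0, 1.0, 1.0]
for _ in range(200):            # bin 2 keeps hurting the predictions
    weights = update_bin(weights, 2, 0.0)
print(allowed_bins(weights))    # bin 2 decays below threshold -> [0, 1]
```

This is what makes CTA awkward for a stateless preprocessing API: the weights are mutable training state that must be updated every step, not a fixed policy table.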
cc @david-berthelot
Yes, we also need to consider https://github.com/google-research/fixmatch, but it seems to me it doesn't introduce new augmentation policies.
AutoAugment is also (is it just a copy?) in the EfficientDet release from today: https://github.com/google/automl/blob/master/efficientdet/aug/autoaugment.py. We really need a place, like this one, to host an official, extensible AutoAugment API. :smile:
Fixmatch uses the same augmentations as ReMixMatch.
Thank you for the FixMatch confirmation. Meanwhile, I see that @mingxingtan added some extra AutoAugmentations in EfficientDet.
@dynamicwebpaige Hello, I want to work on this; I am applying for this year's GSoC. Are all of the operations already present in TensorFlow Addons, or do we need to create them from scratch?
@vinayvr11 There is an initial PR at https://github.com/tensorflow/addons/pull/1275. If you can start contributing some ops via PRs before being evaluated for GSoC, all the better. Please open a ticket and mention @abhichou4 before you start, so you don't overlap with him.
I'll make an issue regarding this, listing all image ops to add. We can also discuss how they can be handled more generally in tfa.image.
Hello @bhack, I have added three more operations to Addons. Now I want to know whether there are more image operations that are not mentioned in the list, or whether the list covers all of them.
@vinayvr11 Can you post here the ops that are still not in the list?
@bhack: These are the operations that are remaining: to_grayscale, color_jitter, color_jitter_nonrand, color_jitter_rand, compute_crop_shape, center_crop, distorted_bounding_box_crop, crop_and_resize, gaussian_blur, random_crop_with_resize, random_color_jitter, random_blur, batch_random_blur. Shall I also add the bbox operations?
You can open a new ticket and mention it here, after you have checked that the ops are not already listed in https://github.com/tensorflow/addons/issues/1333. I suggest you collect the bbox operations in a separate issue and mention it here. Double-check all the sources we have mentioned for any other missing ops.
Sure
Some ops are not checked in https://github.com/tensorflow/addons/issues/1333, so can I also add them in the new ticket?
I'll make sure the rest of the color operations get merged. @vinayvr11 It would be neat to have another ticket just for the bounding box and crop operations.
I just created a new ticket for image processing and crop operations.
All the tickets are created @bhack.
@bhack, I want to know something: if we add all these image operations now, what do we have left for GSoC?
I suppose it will be hard to merge all these ops before the application deadline, but mentors could use the PR activity to better evaluate candidates. There is also still all the work on the policy and hyperparameter API design, so that we can have a pluggable system that is also valid for policies that interact with the training loop, etc. If the to-do list is not enough for a proposal, you could add some interesting image processing operations available in PyTorch, e.g. Kornia.
OK, thank you @bhack, I will check it. Can I also take some operations from Pillow or OpenCV?
@seanpmorgan @gabrieldemarmiesse We still need to think about whether colorspace conversions belong in the Addons or IO namespace. See https://github.com/tensorflow/io/issues/825
@bhack: I have created the tickets, but I don't know who will merge them into the repository, so could you please help me with this? Thank you.
@vinayvr11 Don't worry; all non-Google employees here are volunteers. As soon as there is a free slot we will go ahead with the reviews.
Ok thank you @bhack.
Hello @bhack, could you please help me find some more issues in TensorFlow Addons or tf.image that I can mention in my GSoC proposal? Thank you.
@vinayvr11 These are general hints for every student. I suppose that all the image ops and bbox ops that we have listed from the scattered repositories, plus the design of an augmentation policy API for porting the policies of the repositories we have mentioned, could be enough for a proposal. There are also the colorspace conversions that I mentioned, if you want to expand the ops coverage, and there are some operations in MediaPipe calculators that could be interesting to cover in Addons. For all the applicants, I suggest studying the policy papers we have mentioned, and the related repositories, in order to make a credible estimate of the amount of work and figure out a timeline for the proposal. Having a credible roadmap in the proposal is a positive evaluation point: it helps the mentor see that you have really understood the nature of the work that needs to be done during GSoC.
I also suggest that you go ahead, as far as you can, with PRs, so that mentors have a valid sample of your coding. For example, if possible, take an operator from these referenced repositories that is not already expressed in TensorFlow (i.e. one implemented in NumPy/PIL), so that mentors can get feedback on your TensorFlow coding ability beyond just porting.
Thank you very much for this, @bhack. Actually, I also found some loss functions and optimizers that are listed in tf.contrib but not in TensorFlow 2.x; can I also mention them in the proposal?
@bhack: could you please review my proposal? It would be a great help for me: https://docs.google.com/document/d/1mv32xoGI08JP1wcMiugTyVBzCee7dsyYK_5Uf6SjxEE/edit?usp=sharing
@vinayvr11 See @dynamicwebpaige's best practices on how to collect feedback.
@dynamicwebpaige I don't know how many slots we could have on similar tasks at GSoC, but another related "proxy task" could be image text augmentation, as in CVPR 2020: https://github.com/Canjie-Luo/Text-Image-Augmentation/
@bhack @vinayvr11 Please keep this thread focused on AutoAugment and RandAugment. Feel free to use direct messages or to open new issues if you think the topic has changed. An issue with 40+ messages makes the maintainers' life quite hard.
@gabrieldemarmiesse It would probably have been better to open a Gitter channel dedicated to GSoC, separate from the Addons Gitter, for this kind of thread, so as not to force issues to go off-topic. Google doesn't officially support any realtime chat channel, and the Google Summer of Code TensorFlow page still points to https://github.com/tensorflow/community (you'll find off-topic GSoC issues there too). That repo is mainly used for official RFCs, even if it has no real issues policy. Also, IMHO this issue is quite special: it involved GSoC in its description and started with a very partial overview for a GSoC proposal related to image transformations and policies. As we have seen, just by referencing some other repos, the fragmentation of independent Google teams working on this topic quickly emerged. I think here we are more interested in a general approach to transformations and policies, and IMHO having a complete overview, rather than just porting code, is a better target for a GSoC proposal. So the discussion moved toward something more general. For the operations, as you've seen, we already have independent tickets and PRs in Addons to track the work.
How are we going to coordinate with the image processing that is landing in https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/keras/layers/preprocessing/image_preprocessing.py?
Describe the feature and the current behavior/state. RandAugment and AutoAugment are both policies for enhanced image preprocessing that are included in EfficientNet, but they are still using tf.contrib: https://github.com/tensorflow/tpu/blob/master/models/official/efficientnet/autoaugment.py
The only tf.contrib image operations that they use, however, are rotate, translate and transform - all of which have been included in TensorFlow Addons.
Relevant information
Are you willing to contribute it (yes/no): No, but am hoping that someone from the community will pick it up (potentially a Google Summer of Code student)?
Are you willing to maintain it going forward? (yes/no): Yes
Is there a relevant academic paper? (if so, where): AutoAugment Reference: https://arxiv.org/abs/1805.09501 RandAugment Reference: https://arxiv.org/abs/1909.13719
Is there already an implementation in another framework? (if so, where): See link above; this would be a standard migration from tf.contrib.
Was it part of tf.contrib? (if so, where): Yes
Which API type would this fall under (layer, metric, optimizer, etc.) Image
Who will benefit with this feature? Anyone doing image preprocessing, especially for EfficientNet.