Hmm, I'm a little confused about these tests, because for Barlow Twins many of these parameters aren't needed (e.g. num_augmentations_per_image, y, etc.). Would it be okay if it didn't extend the Augmenter class?
Hi @dewball345, thanks for the PR. I think it's beneficial to provide these architecture-specific augmenters for reproducing the paper results.
On that note, we're looking at removing a lot of the image functions from the augmenter module and moving to keras-cv instead. One concern is that we're adding a lot of image-specific code that isn't core to the similarity package. That said, as I mentioned, I think it's good to provide the augmenters so we can reproduce the results.
Let's add this PR to the development branch instead of the master branch, and I left some comments on the module.
Thanks.
Interesting, I've never heard of keras-cv before, so it will be interesting to try it out. It looks to me like not all of the augmentation utilities have been implemented there yet, though.
@owenvallis Okay, it looks like all the checks have passed, and I've placed the augmentation utils in a shared module.
Thanks again for the PR. Refactoring the image augmentation functions into their own module is a huge help.
I'll chat with @ebursztein and see if we can move the changes to the master branch. Until then, they are available on the development branch and in tfsim-nightly.
Thanks! Seems great :)
@dewball345 can we add test coverage for the augmenters and the individual augmentation functions? Once we have that we can merge into master. Thanks.
Apologies if this is obvious, but how would I do this? Do I test every part of the function?
Generally, we want to write tests for the invariants like flags in the params and conditional branches in the function, although it can also be good to write a basic check that the output is what we expect.
There are some good examples in the tests directory; tests/test_layers.py shows the use of a test fixture, and the tests there are pretty focused. You can also run the coverage report locally using `coverage run -m pytest tests/`. The coverage report will help show which lines of the code still need test coverage.
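As a minimal sketch of what that might look like (the import path, the `random_flip` function, and its `p` parameter are hypothetical here; substitute the actual names from the shared augmentation module), a pair of focused tests covering the conditional branches of a flip augmentation could be:

```python
import tensorflow as tf

# Hypothetical import path and names; substitute the actual functions
# from the shared augmentation module.
from tensorflow_similarity.augmenters.augmentation_utils import random_flip


def test_random_flip_applied():
    # Conditional branch: with p=1.0 the flip should always be applied.
    img = tf.reshape(tf.range(12, dtype=tf.float32), (2, 2, 3))
    out = random_flip(img, p=1.0)
    expected = tf.image.flip_left_right(img)
    tf.debugging.assert_equal(out, expected)


def test_random_flip_skipped():
    # Conditional branch: with p=0.0 the image should pass through unchanged.
    img = tf.reshape(tf.range(12, dtype=tf.float32), (2, 2, 3))
    out = random_flip(img, p=0.0)
    tf.debugging.assert_equal(out, img)
```

Each flag or branch in a function gets its own small test like this, plus a basic sanity check on the output shape and dtype.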
@owenvallis I've added test coverage; everything seems to work as expected.
This fork includes an implementation of the augmenters used in Barlow Twins. It is an adaptation of the paper Barlow Twins: Self-Supervised Learning via Redundancy Reduction, and much of the code has been borrowed from it.
Apologies if there are any mistakes in this pull request; this is my first contribution to this repository.