Closed — xlflove closed this issue 2 months ago
Thank you for your interest in our work. Our study began by recognizing the vulnerability of face recognition systems to minor face alignment errors, and we then set out to implement methods to mitigate this vulnerability.
Initially, we experimented with augmentations like Gaussian blur and downsampling, but as you rightly pointed out, their parameters do not support gradient backpropagation, which limited their effectiveness in FR.
If you have any further questions or need more details, feel free to ask!
I tried computing an L2 loss between the model's embedding of the original image and its embedding of the affine-transformed image, then adding that term to the training loss, and the test performance improved.
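A minimal sketch of what I mean, in PyTorch. The tiny embedding network, the 0.1 loss weight, and the jitter magnitude are all illustrative placeholders, not taken from any actual FR codebase:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEmbedder(nn.Module):
    """Hypothetical stand-in for a face-recognition backbone."""
    def __init__(self, dim=8):
        super().__init__()
        self.conv = nn.Conv2d(3, dim, kernel_size=3, padding=1)

    def forward(self, x):
        # Global-average-pool to a (B, dim) embedding
        return self.conv(x).mean(dim=(2, 3))

def affine_transform(img, theta):
    # theta: (B, 2, 3) affine matrices; the warp is differentiable
    grid = F.affine_grid(theta, img.shape, align_corners=False)
    return F.grid_sample(img, grid, align_corners=False)

model = TinyEmbedder()
img = torch.rand(4, 3, 16, 16)

# Identity transform plus a small random translation per sample
theta = torch.eye(2, 3).repeat(4, 1, 1)
theta[:, :, 2] = 0.05 * torch.randn(4, 2)

emb_orig = model(img)
emb_aug = model(affine_transform(img, theta))

task_loss = emb_orig.pow(2).mean()            # placeholder for the main FR loss
consistency = F.mse_loss(emb_aug, emb_orig)   # L2 embedding consistency
loss = task_loss + 0.1 * consistency          # 0.1 is an assumed weight
loss.backward()
```

The consistency term simply pushes the embeddings of the clean and perturbed views together; in a real setup the placeholder task loss would be the usual recognition loss (e.g. a margin-based softmax).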
I am curious about how you came up with this idea. Before this, I considered using Gaussian blur and downsampling, but unfortunately, the parameters of these two methods do not support gradient backpropagation, so I gave up on them. After reading your article, I realized that the parameters of affine transformations such as translation and scaling do support gradient backpropagation.
So what I'm curious about is whether you came up with this idea first and then wrote the code, or whether you had the code first and then developed the "story" of the FAE loss?
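For what it's worth, the differentiability point above can be checked in a few lines. This sketch assumes PyTorch's `F.affine_grid`/`F.grid_sample` and verifies that gradients flow back to learnable translation and scale parameters, which is exactly what Gaussian blur's kernel size or a downsampling factor would not allow:

```python
import torch
import torch.nn.functional as F

img = torch.rand(1, 3, 16, 16)

# Learnable affine parameters: horizontal translation and isotropic scale
tx = torch.tensor(0.1, requires_grad=True)
s = torch.tensor(1.2, requires_grad=True)

zero = torch.zeros(())
theta = torch.stack([
    torch.stack([s, zero, tx]),    # [s, 0, tx]
    torch.stack([zero, s, zero]),  # [0, s, 0]
]).unsqueeze(0)                    # shape (1, 2, 3)

grid = F.affine_grid(theta, img.shape, align_corners=False)
out = F.grid_sample(img, grid, align_corners=False)

# Any scalar objective on the warped image backpropagates to tx and s
out.mean().backward()
```

After `backward()`, both `tx.grad` and `s.grad` are populated, so the transform parameters themselves can be optimized or used inside a training loss.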