Open shijianjian opened 2 years ago
Hi @shijianjian, thanks for your excellent work on this library to make computer vision operations differentiable.
I understand that adversarial examples can be an effective form of data augmentation that improves the performance and robustness of DNN models. And indeed, there are differentiable, gradient-based white-box attack algorithms such as PGD. However, what I propose in this repo is a decision-based black-box attack, which is not a differentiable operation. Wouldn't this feature conflict with the purpose of the library?
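For context, a gradient-based white-box attack like the PGD mentioned above typically repeats a single projected step: perturb the input along the sign of the loss gradient, then project back into an epsilon-ball around the original image. A minimal sketch of one such step (the function name, `alpha`, and `eps` values are illustrative, not from any specific library):

```python
import torch

def pgd_step(x, x_orig, grad, alpha=0.01, eps=0.03):
    """One illustrative PGD step (L-infinity variant).

    Moves the input along the sign of the loss gradient, then projects
    back into the eps-ball around the original image and clamps to the
    valid pixel range [0, 1].
    """
    x_adv = x + alpha * grad.sign()
    x_adv = torch.clamp(x_adv, x_orig - eps, x_orig + eps)
    return torch.clamp(x_adv, 0.0, 1.0)
```

A decision-based black-box attack, by contrast, only observes the model's hard predictions, so no such gradient is available — which is the tension raised here.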
Hi @hncszyq. Indeed, but some operations and augmentations can hardly be made differentiable, since some operators work on int8 data, etc.
As an augmentation, I think most people expect the forward pass to work regardless of back propagation. Your shadow attack looks simple yet can substantially improve model robustness, so I believe it could help many users. Of course we would like to make it differentiable, but it is fine if that is not easy.
BTW, I will soon push a commit regarding gradient estimation. It might be a good workaround to smooth the gradient flow when non-differentiable operations are involved.
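One common way to smooth the gradient flow through a non-differentiable operation is a straight-through estimator: use the non-differentiable result in the forward pass, but let gradients flow as if the operation were the identity. This is only a sketch of that general technique (not necessarily what the mentioned commit implements), shown here for `round`:

```python
import torch

def straight_through_round(x):
    """Straight-through estimator for rounding.

    Forward pass returns round(x); in the backward pass the detached
    residual contributes no gradient, so the gradient of the output
    w.r.t. x is treated as the identity.
    """
    return x + (torch.round(x) - x).detach()
```

Any non-differentiable transform could be slotted in the same way: add the detached difference between the transformed and original input, so the forward value changes while the backward pass sees a straight-through gradient.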
@shijianjian, okay, I understand, and I'm looking forward to porting this code into Kornia. I'll make the necessary adjustments to our code later.
@hncszyq Great. We recently adopted another work that I believe follows very similar procedures to this one. You may refer to PlanckianJitter. Let us know if anything comes up.
Hi @hncszyq, thanks for this very interesting repo, which presents a fairly simple but intuitive attack toward better performance.
I am wondering if you would be interested in porting this code into Kornia? We are the maintainers of that project and would love to have this feature.
Let me know if you have any questions!
/cc @edgarriba @ducha-aiki