Adversarial-Deep-Learning / code-soup

This is a collection of algorithms and approaches used in the book Adversarial Deep Learning.
MIT License

Visual Perturbation Metrics #61

Open someshsingh22 opened 2 years ago

someshsingh22 commented 2 years ago

For evasive whitebox or blackbox attacks, the objective is to fool the model into predicting a different class while keeping the attack deceptive by making only small changes to the input. These changes are measured as distances, for example the L1/L2 norm of the difference between the original and perturbed inputs.

Implement these metrics

You can find numpy and cv2 implementations at https://github.com/up42/image-similarity-measures/blob/master/image_similarity_measures/quality_metrics.py
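A minimal numpy sketch of these distance metrics (the helper name and signature below are illustrative, not the final API):

```python
import numpy as np


def perturbation_norms(original: np.ndarray, adversarial: np.ndarray) -> dict:
    """Distance metrics between an original input and its adversarial version."""
    # Work in float so uint8 images do not wrap around when subtracted.
    delta = adversarial.astype(np.float64).ravel() - original.astype(np.float64).ravel()
    return {
        "l0": float(np.count_nonzero(delta)),       # number of entries changed
        "l1": float(np.linalg.norm(delta, ord=1)),  # total absolute change
        "l2": float(np.linalg.norm(delta, ord=2)),  # Euclidean distance
        "linf": float(np.abs(delta).max()),         # largest single change
    }
```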

ShreeyashGo commented 2 years ago

I would like to take this up. I plan on creating a function that takes the original image/feature vector and the one produced by the adversarial attack, and outputs the L1, L2, and L(infinity) norms. Should I also implement the L0 norm and other less conventional norms like L3, L4, ...?
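For reference, `np.linalg.norm` already accepts an arbitrary `ord`, so the less conventional norms come almost for free; a rough sketch (helper name is illustrative):

```python
import numpy as np


def lp_distance(original: np.ndarray, adversarial: np.ndarray, p: float) -> float:
    """Lp norm of the perturbation; p can be 0, 1, 2, 3, ... or np.inf."""
    delta = adversarial.astype(np.float64).ravel() - original.astype(np.float64).ravel()
    if p == 0:
        # L0 "norm": count of changed entries rather than a true norm.
        return float(np.count_nonzero(delta))
    return float(np.linalg.norm(delta, ord=p))
```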

devaletanmay commented 2 years ago

I would like to take up this issue

someshsingh22 commented 2 years ago

@ShreeyashGo @devaletanmay we are not using separate classes, since the code doesn't look very clean with many metrics. I am also adding more metrics, so you can split the implementation between you.

ShreeyashGo commented 2 years ago

After implementing the SRE and SAM, these are my observations:

Test SRE: 41.36633261587073
SRE from implementation: 39.9395

Test SAM: 89.34839413786915 (with the default dtype of the input numpy array as integer)
Test SAM: 34.38530383960234 (with the dtype of the input numpy array changed to float)
SAM from implementation: 34.385303497314453
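The dtype sensitivity suggests the cast to float has to happen before the angle computation; a minimal sketch of a SAM metric roughly along the lines of the linked up42 quality_metrics.py (the exact cause of the integer-dtype result, e.g. overflow in the intermediate arithmetic, is an assumption here):

```python
import numpy as np


def sam_degrees(org_img: np.ndarray, pred_img: np.ndarray) -> float:
    """Mean Spectral Angle Mapper between two (H, W, C) images, in degrees."""
    # Cast to float first; integer inputs (e.g. uint8 images) can overflow or
    # truncate in the intermediate products, which would inflate the angle.
    org = org_img.astype(np.float64)
    pred = pred_img.astype(np.float64)
    numerator = np.sum(org * pred, axis=2)
    denominator = np.linalg.norm(org, axis=2) * np.linalg.norm(pred, axis=2)
    cos_angle = np.clip(numerator / np.maximum(denominator, 1e-12), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_angle)).mean())
```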

someshsingh22 commented 2 years ago

Ok we will change the test