frsbg / adversaria

adversarial deep learning for imperceptible next-generation video watermark

Implementation #1

Closed ferasbg closed 2 years ago

ferasbg commented 3 years ago

Use this paper, which describes an adversarial attack that can be applied to video classification models. The point of "robust adversarial examples" is that they are 1) imperceptible to the eye, 2) tightly bounded, and possibly 3) able to leverage a fixed and bounded notation.
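For intuition, here is a minimal sketch (not from the paper) of the "tightly bounded" constraint, assuming an additive perturbation projected onto an L_inf ball of radius `eps`; `clip_perturbation` and `perturb_frame` are hypothetical helpers:

```python
import numpy as np

def clip_perturbation(delta: np.ndarray, eps: float) -> np.ndarray:
    """Project an additive perturbation onto the L_inf ball of radius eps,
    which keeps the perturbed frame visually close to the original."""
    return np.clip(delta, -eps, eps)

def perturb_frame(frame: np.ndarray, delta: np.ndarray, eps: float = 4 / 255) -> np.ndarray:
    """Apply a bounded perturbation and clamp back to the valid pixel range [0, 1]."""
    adv = frame + clip_perturbation(delta, eps)
    return np.clip(adv, 0.0, 1.0)
```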

ferasbg commented 3 years ago

Implement an algorithm that combines the perturbation attack with a pseudorandom function to derive a unique perturbation key, then use that key to generate a custom copy for each user that can be traced back to them.
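Something like the following could work, assuming HMAC-SHA256 as the pseudorandom function and expanding the key into a deterministic, bounded perturbation pattern (a sketch with hypothetical names, not the paper's attack):

```python
import hashlib
import hmac
import numpy as np

def derive_perturbation_key(host_secret: bytes, user_id: str) -> bytes:
    """Pseudorandom function: HMAC-SHA256 of the user id under the host secret.
    Each user gets a distinct, reproducible key."""
    return hmac.new(host_secret, user_id.encode(), hashlib.sha256).digest()

def keyed_perturbation(key: bytes, frame_shape: tuple, eps: float = 4 / 255) -> np.ndarray:
    """Expand the key into a deterministic, L_inf-bounded perturbation pattern."""
    seed = int.from_bytes(key[:8], "big")
    rng = np.random.default_rng(seed)
    return rng.uniform(-eps, eps, size=frame_shape).astype(np.float32)
```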

ferasbg commented 3 years ago

How do we reverse-engineer the image given the perturbation key? That is, how do we de-transform it back to the original?
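If the perturbation is additive and fully regenerable from the key (as in the sketch above), de-transforming would just be regenerating the keyed perturbation and subtracting it, exact only up to clipping and quantization loss:

```python
import numpy as np

def detransform_frame(perturbed: np.ndarray, key: bytes, eps: float = 4 / 255) -> np.ndarray:
    """Regenerate the keyed perturbation and subtract it to approximate the
    original frame; recovery is limited by clipping/quantization."""
    seed = int.from_bytes(key[:8], "big")
    rng = np.random.default_rng(seed)
    delta = rng.uniform(-eps, eps, size=perturbed.shape).astype(np.float32)
    return np.clip(perturbed - delta, 0.0, 1.0)
```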

ferasbg commented 3 years ago

I wonder if the image set from the video can be iteratively perturbed under a specified norm type and norm value. Keeping the original copy, applying the same transformation, and checking whether the result matches the user's unique copy PROVES that the content was disseminated by that user.
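Rough sketch of that check, assuming the same keyed additive perturbation across all frames; `perturb_video` and `matches_user_copy` are hypothetical helpers and the tolerance is arbitrary:

```python
import numpy as np

def perturb_video(frames: np.ndarray, key: bytes, eps: float = 4 / 255) -> np.ndarray:
    """Perturb every frame of the video with the keyed, L_inf-bounded pattern."""
    seed = int.from_bytes(key[:8], "big")
    rng = np.random.default_rng(seed)
    delta = rng.uniform(-eps, eps, size=frames.shape).astype(np.float32)
    return np.clip(frames + delta, 0.0, 1.0)

def matches_user_copy(originals: np.ndarray, leaked: np.ndarray, key: bytes,
                      eps: float = 4 / 255, tol: float = 2 / 255) -> bool:
    """Re-apply the suspect user's keyed transformation to the stored originals
    and check whether the result agrees with the leaked copy."""
    regenerated = perturb_video(originals, key, eps)
    return float(np.max(np.abs(regenerated - leaked))) <= tol
```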

ferasbg commented 3 years ago

The host should store all original images along with a local graph of uniquely transformed images and their respective perturbation keys, so that a user can be traced using the original image, which is never exposed to users.
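A minimal sketch of that host-side bookkeeping, assuming a simple in-memory registry (a hypothetical `WatermarkRegistry`, not an existing API):

```python
from dataclasses import dataclass, field

@dataclass
class WatermarkRegistry:
    """Host-side bookkeeping: originals plus the per-user perturbation keys
    needed to regenerate each unique copy during tracing."""
    originals: dict = field(default_factory=dict)   # video_id -> original frames
    keys: dict = field(default_factory=dict)        # (video_id, user_id) -> key

    def register(self, video_id: str, frames, user_id: str, key: bytes) -> None:
        """Store the original once and record the key issued to this user."""
        self.originals.setdefault(video_id, frames)
        self.keys[(video_id, user_id)] = key

    def candidates(self, video_id: str):
        """All (user_id, key) pairs issued for a video, to test against a leaked copy."""
        return [(uid, k) for (vid, uid), k in self.keys.items() if vid == video_id]
```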

ferasbg commented 3 years ago

If users apply perturbations themselves on top of their copy, how can you trace the image then?