Shawn-Shan / fawkes

Fawkes, a privacy-preserving tool against facial recognition systems. More info at https://sandlab.cs.uchicago.edu/fawkes
BSD 3-Clause "New" or "Revised" License

No effect on AWS Rekognition? #138

Open pospielov opened 3 years ago

pospielov commented 3 years ago

I just downloaded two images, original and cloaked, from your website and uploaded them to AWS Rekognition. The result is 100% similarity. Did you upload the wrong images? (I checked all of them, including the Obama pair; the original and cloaked versions have different file sizes, so they are not just identical copies.)
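
For anyone who wants to reproduce the check, here is a minimal sketch using the boto3 Rekognition client; the region and file names are placeholders for wherever you saved the example images:

```python
import boto3

# Assumes AWS credentials are already configured; region is a placeholder.
client = boto3.client("rekognition", region_name="us-east-1")

# Placeholder file names for the original and cloaked example images.
with open("obama_original.jpg", "rb") as src, open("obama_cloaked.png", "rb") as tgt:
    response = client.compare_faces(
        SourceImage={"Bytes": src.read()},
        TargetImage={"Bytes": tgt.read()},
        SimilarityThreshold=0,  # report every match, even weak ones
    )

for match in response["FaceMatches"]:
    print(f"Similarity: {match['Similarity']:.1f}%")
```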

tbeckenhauer commented 3 years ago

I see the same effect with my own photos. In case I don't get back to this: so far I have only run the tool with --mode=low. I'm currently processing with --mode=high, but it's taking a while.

tbeckenhauer commented 3 years ago

OK, I processed the photos myself; this test was run with --mode=high. The perturbations are clearly visible to the naked eye, yet AWS Rekognition still thinks the two photos show the same person. Roughly how I ran the cloaking is sketched below the screenshot.

[Screenshot: Rekognition comparison, original vs. --mode=high cloaked]
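
For reference, this is roughly how the cloaking can be run for each mode. It's a sketch that assumes the pip-installed `fawkes` CLI with the `-d` and `--mode` flags from the README; check `fawkes --help` for the flags in your installed version:

```python
import shutil
import subprocess
from pathlib import Path

# Placeholder directory containing the uncloaked test photos.
source_dir = Path("./test_photos")

for mode in ("low", "mid", "high"):
    # Work on a per-mode copy so the modes don't interfere with each other.
    work_dir = Path(f"./cloaked_{mode}")
    if work_dir.exists():
        shutil.rmtree(work_dir)
    shutil.copytree(source_dir, work_dir)

    # Assumes the fawkes CLI writes cloaked copies into the same directory.
    subprocess.run(["fawkes", "-d", str(work_dir), "--mode", mode], check=True)
```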

tbeckenhauer commented 3 years ago

I realized that comparing a before-and-after of the same photo isn't a realistic test. What we really need to test is two different photos of the same person, both run through Fawkes. I did that, and the results are not good: Rekognition reports 99.9%, 99.5%, and 99.3% similarity for low, mid, and high respectively. My guess is that these facial recognition services saw all the publicity around Fawkes and trained their networks to recognize cloaked images. Maybe generative adversarial networks would be a good next step, but I'm not an expert on this. We would also need scripts to automate this testing; a rough version is sketched below the screenshots.

[Screenshot: Rekognition comparison, Obama, --mode=low]
[Screenshot: Rekognition comparison, Obama, --mode=mid]
[Screenshot: Rekognition comparison, Obama, --mode=high]
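
Here is a rough sketch of the kind of automation script I mean. It assumes the boto3 Rekognition client and the per-mode directories from the earlier snippet; the file names of the two cloaked photos are placeholders:

```python
import boto3

client = boto3.client("rekognition", region_name="us-east-1")


def similarity(path_a: str, path_b: str) -> float:
    """Return Rekognition's highest similarity score between two photos."""
    with open(path_a, "rb") as a, open(path_b, "rb") as b:
        response = client.compare_faces(
            SourceImage={"Bytes": a.read()},
            TargetImage={"Bytes": b.read()},
            SimilarityThreshold=0,
        )
    return max((m["Similarity"] for m in response["FaceMatches"]), default=0.0)


# Placeholder file names: two different cloaked photos of the same person
# per mode, produced by the per-mode directories in the previous snippet.
for mode in ("low", "mid", "high"):
    score = similarity(
        f"./cloaked_{mode}/obama_1_cloaked.png",
        f"./cloaked_{mode}/obama_2_cloaked.png",
    )
    print(f"{mode}: {score:.1f}% similarity")
```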

ghost commented 1 year ago

https://www.theregister.com/2022/03/15/research_finds_data_poisoning_cant/ This article suggests that the big players have already trained their models to resist this kind of data poisoning. Fawkes is about done for.