idena-network / idena-web

https://app.idena.io
MIT License

the adversarial noise prevents humans from correctly assessing and reporting flips #364

Open ivantor69 opened 1 year ago

ivantor69 commented 1 year ago

I find that the noise filter makes Idena unusable: it degrades the colors and adds far too much noise. If you don't want to prioritize a better algorithm for the filter, I will roll back the Windows app to a version that works without the noise filter, and I won't click update even if the warning appears. One more thing: why couldn't you distort the image (window) instead, so that a bot using image-recognition software can't understand it? Isn't that a good idea? Please understand my frustration: I had a flip with a man eating potatoes, and after the filter the colors shifted to yellow and people thought they were lemons!!! And for that reason they reported me!

ivantor69 commented 1 year ago

original kloop api

filter kloop f api

It recognizes the hat and the person well, but not the glasses (this depends on whether the key is present), so the filter is useless!

midenaio commented 1 year ago

@ivantor69 Thank you for the feedback! Sorry to hear that your flip was reported. Could you share the flip screenshot (App>Flips>Archive) and the blockchain explorer link? Please send it to info@idena.io so we could investigate.

There is a "Regenerate" button on the Protect images step. If the color shift does not work for the image, you can regenerate to get acceptable colors. Please always check if the images are still recognizable before submitting the flip.

Adversarial noise combined with color shifting degrades the results of image-recognition software to some extent. In our tests, people, cats, dogs and cars (the most popular image categories on the Internet) are still detected very well by Google Vision, and it is quite complicated to make Google think that a person is, for instance, an animal. Nevertheless, the noise prevents detection of details in the image that could be crucial for solving the flip, as in your example: the glasses are not detected (perhaps the flip is related to glasses). In addition, other objects that don't belong to popular image categories may be detected incorrectly. Please see some examples here.
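For readers curious what "noise combined with color shifting" might look like in practice, here is a minimal sketch in Python/NumPy. This is a hypothetical illustration only: Idena's actual protection algorithm is not published in this thread, and the `protect_image` function, its parameters, and the channel-roll "color shift" are all stand-ins invented for this example.

```python
# Hypothetical sketch of a color shift plus additive noise.
# NOT Idena's actual image-protection algorithm -- just an illustration
# of the general idea discussed in this thread.
import numpy as np

def protect_image(rgb: np.ndarray, hue_shift: int = 30,
                  noise_std: float = 12.0, seed=None) -> np.ndarray:
    """Apply a crude color shift and Gaussian noise to an HxWx3 uint8 image."""
    rng = np.random.default_rng(seed)
    shifted = rgb.astype(np.int16)
    # Naive "color shift": rotate the RGB channels and add a random
    # per-channel bias, standing in for the hue shift users observe.
    shifted = np.roll(shifted, shift=1, axis=2)
    shifted = shifted + rng.integers(-hue_shift, hue_shift + 1, size=(1, 1, 3))
    # Additive Gaussian noise stands in for the adversarial perturbation
    # meant to confuse image-recognition models.
    noise = rng.normal(0.0, noise_std, size=rgb.shape)
    return np.clip(shifted + noise, 0, 255).astype(np.uint8)

# A flat gray test image: after protection it is no longer flat gray.
img = np.full((4, 4, 3), 128, dtype=np.uint8)
out = protect_image(img, seed=0)
print(out.shape, out.dtype)  # (4, 4, 3) uint8
```

Note that a shift this aggressive is exactly what the reporter complains about: large random color biases can flip the perceived hue of an object (potatoes reading as lemons), which is why the "Regenerate" step matters.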

ivantor69 commented 1 year ago

I sent the email today!

ivantor69 commented 1 year ago

I'm sorry to contradict you, but it's not enough to just regenerate until the colors change: unfortunately the colors still shift a lot during regeneration. I'm sure something is wrong with your algorithm that shifts the color bitmaps. I've made other flips and not a single one came out with normal colors. I mean all of them!!! I ask for the filter to be removed, or let us use the previous version that doesn't require the filter! flip2 flip1

midenaio commented 1 year ago

@ivantor69 I see your point. When you look at just one noised image, sometimes you can't say for sure what's in it. But please do not forget that a flip is always a combination of images, and they must be logically connected, not just a bunch of random pictures. So the context of the flip should give you a hint about what is shown in this particular image; otherwise, some people may want to report it. When creating a new flip, an author should now choose the images more carefully so that they remain recognizable even after the noise and color shift are applied.

By the way, you posted a good example of how color shifting with noise works: Google detects the content of this image as an animal. So in this particular case our adversarial technique gives misleading results to the AI.