johnpallett opened this issue 6 years ago
And likewise the opposite: putting things in the world to make one thing appear as another, effectively concealing its presence.
Obscuring real-world objects is already in the explainer, but I agree that 'changing' real-world objects is a different (but related) vector and should be added as well.
SexyCyborg did something like this for facial recognition spoofing: https://www.youtube.com/watch?v=D-b23eyyUCo The problem is that IR is not as effective as it's hyped up to be.

Surprisingly, Juggalo paint is very effective at spoofing facial recognition CV: https://theoutline.com/post/5172/juggalo-juggalette-facepaint-makeup-hack-beat-facial-recognition-technology?utm_source=FB&zr=fee4nrhy&zd=2&zi=xb7ddafj

I think there's a double-edged sword here. On one hand, this could be a threat to the CV; on the other hand, you might not want CV to recognize you if you're privacy conscious. I think this requires further research and documentation. 3D-printed objects might be able to spoof CV, but I don't have the resources available to me at the moment to do that kind of testing.
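As a rough sketch of the kind of testing being described (not a definitive experiment), one could run a stock face detector over before/after photos and compare whether it still fires. The file names below are placeholders, and the Haar cascade is just the detector that ships with OpenCV; any serious evaluation would need to cover the specific recognition systems being spoofed:

```python
# Hedged sketch: does a stock face detector still find a face after
# makeup / occlusion? Image paths are hypothetical placeholders.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def count_faces(path):
    # Load the image, convert to grayscale, and count detected face boxes.
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    return len(detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5))

print("plain face:  ", count_faces("face_plain.jpg"))
print("painted face:", count_faces("face_juggalo.jpg"))
```

A drop in detections on the second image would suggest the pattern is interfering with this particular detector, though that says nothing about more robust commercial systems.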
It may be possible in the real world to design visual inputs that appear as certain objects to computer vision systems, but which don't look like those objects to users.
For example, if someone found a visual pattern that an AR system would identify as a particular object, they could place that pattern in the real world.
This creates potential threat vectors in situations where the visual pattern is not recognizable to users but still sends false signals to the AR system, which may in turn give users bad information about which objects are actually present.
For example, if an AR system warned a user to STOP when a stop sign was detected, then a maliciously placed false stop sign could trigger that warning; this would be particularly threatening if the user had their vision limited in some way and couldn't disambiguate for themselves whether the stop sign actually existed.
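To make the stop-sign example concrete, here is a minimal sketch of how such a visual pattern could be produced in principle: optimizing a printable patch so that a pretrained image classifier leans toward a sign-like class, even though the patch may look like noise to a person. The model choice, patch placement, and class index are illustrative assumptions, not a description of any particular AR system's pipeline:

```python
# Hedged sketch of an adversarial-patch style attack, assuming a pretrained
# ImageNet classifier as a stand-in for an AR system's object recognizer.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T

STREET_SIGN_CLASS = 919  # assumed ImageNet-1k index for "street sign"
model = models.resnet18(weights="IMAGENET1K_V1").eval()
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

scene = torch.rand(1, 3, 224, 224)                 # stand-in for a camera frame
patch = torch.rand(1, 3, 50, 50, requires_grad=True)

# Mask marking where the printed patch would sit inside the scene.
mask = torch.zeros(1, 3, 224, 224)
mask[:, :, 80:130, 80:130] = 1.0

optimizer = torch.optim.Adam([patch], lr=0.05)
for step in range(200):
    # Paste the patch at (80, 80) by padding it out to the scene size.
    padded = F.pad(patch.clamp(0, 1), (80, 94, 80, 94))
    composited = scene * (1 - mask) + padded * mask
    logits = model(normalize(composited))
    loss = -logits[0, STREET_SIGN_CLASS]           # push the target class score up
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The point of the sketch is only that the optimization target is the classifier's output, not human perception, which is exactly the mismatch the threat vector relies on.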