jlosito closed this issue 4 years ago
For instance, one could probably combine the AI-based Gaydar with the sentry gun and automatically commit genocide.
https://en.wikipedia.org/wiki/Genocide
Genocide is intentional action to destroy a people (usually defined as an ethnic, national, racial, or religious group) in whole or in part. The hybrid word "genocide" combines the Greek γένος ("race, people") with the Latin suffix -cide (from caedere, "to kill"). The United Nations Genocide Convention, established in 1948, defines genocide as "acts committed with intent to destroy, in whole or in part, a national, ethnic, racial or religious group".
It is an example. Are you saying that one could not create a machine learning model in order to try to classify ethnic or national groups?
It is an example.
It is an incorrect example. That is obvious to anyone who knows the meaning of the Latin and Greek stems.
Are you saying that one could not create a machine learning model in order to try to classify ethnic or national groups?
I have never said this ;)
Wait, are you saying that killing gay people at scale is not technically genocide?
That's well outside the scope of assessing whether the Samsung SGR-A1 should be on the list (which it absolutely should, btw).
I think you could look at AI-based weapon platforms in comparison to the only alternative: armed humans. Couldn't this be viewed as an ethical use if it can be proven that an AI-based sentry weapon misidentifies targets less often than a human likely would? (A rough sketch of how that comparison could be tested follows after this comment.)
Considering the specific example of South Korea guarding the border: a human soldier fears for their own life and may act to protect themselves before fully identifying a target. Also, what if armed AI could be used more actively, countering a target using lethal force with less-lethal options? An AI won't care if it takes lethal fire while retaliating with a weapon that can disable but is unlikely to kill.
It could be used for terrible purposes, but what if the research leads to security systems that result in fewer casualties?
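To make the "fewer mistakes" claim above concrete: one way to test it is a two-proportion z-test on observed misidentification rates. The sketch below is a minimal illustration using only the Python standard library; the two_proportion_z helper and every count in it are hypothetical, not measured data about the SGR-A1 or any real deployment.

```python
# Minimal sketch: two-proportion z-test comparing hypothetical
# misidentification rates of human guards vs. an automated sentry.
# All numbers are invented for illustration only.
import math

def two_proportion_z(err1: int, n1: int, err2: int, n2: int) -> tuple[float, float]:
    """Return (z, one-sided p-value) for H0: rate1 <= rate2."""
    p1, p2 = err1 / n1, err2 / n2
    pooled = (err1 + err2) / (n1 + n2)               # pooled error rate under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))  # 1 - standard normal CDF
    return z, p_value

# Hypothetical trial: 40 misidentifications in 1000 human engagements
# vs. 15 in 1000 automated engagements.
z, p = two_proportion_z(40, 1000, 15, 1000)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")  # small p => human rate plausibly higher
```

In practice, how "misidentification" is defined and how the engagement samples are collected would matter far more than the test itself.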
After reading through this super interesting discussion, we think it is worth including a new autonomous-weapons section in the awful-ai list in order to keep track of that scary development. Done in d566550c063b2af107fd4d2f0fef93c7f817c102
Would the Samsung SGR-A1 fit the list? It was developed to assist South Korea with guarding the DMZ. The gun can be configured so that it tracks humans and automatically fires upon them without human intervention. I would imagine one could alter the system to combine it with some of the other dangerous models. For instance, one could probably combine the AI-based Gaydar with the sentry gun and automatically commit genocide.