Open dsmukilan opened 2 months ago
Hi, thanks for the suggestion!
I really like this idea as it's very specific! I think it might fit under "reinforces existing bias". We are thinking about a more specific categorisation of the hazard labels where they would be related to each other in a knowledge graph.
For example, something like:

AI-sourced data
--causes-->
Reinforcement of existing biases
I'm on mat leave at the moment (hence the slow response) but I look forward to thinking about how this could fit in and getting some feedback from others!
Adding a recent reference: https://doi.org/10.1038/s41586-024-07566-y
I am suggesting a new category of Data Hazard called "AI-Sourced Data". Suggested symbol: Ouroboros.
These would be cases in which data is scraped from the internet or other sources and turns out to be AI-generated. This scraped data is then used to train further AI models, creating a negative feedback loop that produces progressively worse models.
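The feedback loop described above can be illustrated with a deliberately toy simulation (not taken from the linked paper, just a hand-rolled sketch): each "generation" fits a simple Gaussian model to its training data, then the next generation trains only on samples drawn from the previous model. The function names and parameters here are all hypothetical.

```python
import random
import statistics

def fit_gaussian(samples):
    """'Train' a toy model: estimate mean and spread from the data."""
    return statistics.mean(samples), statistics.pstdev(samples)

def generate(mean, std, n, rng):
    """'Generate' synthetic data from the trained model."""
    return [rng.gauss(mean, std) for _ in range(n)]

rng = random.Random(0)
# Generation 0: "real" data from the true distribution (mean 0, std 1).
data = generate(0.0, 1.0, 20, rng)

stds = []
for generation in range(50):
    mean, std = fit_gaussian(data)
    stds.append(std)
    # Next generation trains only on the previous model's output:
    # no fresh real data ever enters the loop.
    data = generate(mean, std, 20, rng)

# The estimated spread tends to drift away from the true value over
# generations; with small samples it typically shrinks, i.e. the model
# gradually "forgets" the tails of the original distribution.
print([round(s, 3) for s in stds[:5]], "...", round(stds[-1], 3))
```

Real model collapse involves far richer models, but the mechanism is the same: each generation can only reproduce what the previous one captured, so estimation error compounds instead of averaging out.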
This can be intentional in some cases, for example "Nightshade", an AI-poisoning tool used to protect copyrighted work. But in many cases it is an oversight during training, or direct malicious sabotage.
Also, models trained on AI-sourced data can further reinforce other Data Hazards, such as existing bias and privacy issues.