responsible-ai-collaborative / aiid

The AI Incident Database seeks to identify, define, and catalog artificial intelligence incidents.
https://incidentdatabase.ai

Footer Structure and Orphaned Page Breakup #1126

Closed smcgregor closed 2 years ago

smcgregor commented 2 years ago

I don't see where this page is linked in the footer: https://incidentdatabase.ai/research

We should start by adding the page to the footer, but then we should likely break the page up into its constituent sections, which can also be linked from the footer.

Download the Index
The complete state of the database can be downloaded in weekly [snapshots](https://incidentdatabase.ai/research/snapshots) in JSON, MongoDB, and CSV formats. We maintain these snapshots so you can create stable datasets for natural language processing research and academic analysis. Please [contact us](https://incidentdatabase.ai/contact) to let us know what you are using the database for, so we can list your work in the incident database and ensure your use case is not dropped from support.
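For reference, a minimal sketch of consuming a snapshot for the kind of analysis mentioned above (TypeScript/Node). The file name and the fields used here are assumptions for illustration, not a documented schema; check whatever the snapshots page actually provides:

```typescript
// Minimal sketch, assuming a JSON snapshot downloaded from
// https://incidentdatabase.ai/research/snapshots and extracted to
// "incidents.json" (file name and fields below are assumptions).
import { readFileSync } from "fs";

interface Incident {
  incident_id: number;
  title: string;
  reports: number[]; // report numbers associated with this incident (assumed field)
}

const incidents: Incident[] = JSON.parse(
  readFileSync("incidents.json", "utf-8")
);

// Example analysis: the ten incidents with the most associated reports.
const byReportCount = [...incidents].sort(
  (a, b) => b.reports.length - a.reports.length
);

for (const { incident_id, title, reports } of byReportCount.slice(0, 10)) {
  console.log(`${incident_id}\t${reports.length} reports\t${title}`);
}
```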

Citing the Database as a Whole
We invite you to cite:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

The [pre-print](https://arxiv.org/abs/2011.08512) is available on arXiv.

Citing a Specific Incident
Every incident has its own suggested citation that credits both the submitter(s) and the editor(s) of the incident. The submitters are the people who submitted reports associated with the incident, and their names are listed in the order in which their submissions were added to the AIID. Since reports can be added to an incident record over time, our suggested citation format includes the access date. You can find incident citations at https://incidentdatabase.ai/cite/INSERT_NUMBER_HERE.
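As a small illustration of the URL pattern and the access-date element described above (hypothetical helper; the exact citation text generated by the live cite pages may differ):

```typescript
// Hypothetical helper showing the cite-page URL pattern and access date;
// not the exact citation wording produced by the site.
const citeUrl = (incidentId: number): string =>
  `https://incidentdatabase.ai/cite/${incidentId}`;

const suggestedCitation = (
  incidentId: number,
  accessed: Date = new Date()
): string =>
  `AI Incident Database, Incident ${incidentId}. ${citeUrl(incidentId)}. ` +
  `Accessed ${accessed.toISOString().slice(0, 10)}.`;

console.log(suggestedCitation(1));
// e.g. "AI Incident Database, Incident 1. https://incidentdatabase.ai/cite/1. Accessed 2022-09-12."
```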

2022 (through September 12th)

- NIST. Risk Management Playbook. 2022.
- Shneiderman, Ben. Human-Centered AI. Oxford University Press, 2022.
- Schwartz, Reva, et al. "Towards a Standard for Identifying and Managing Bias in Artificial Intelligence." (2022).
- McGrath, Quintin, et al. "An Enterprise Risk Management Framework to Design Pro-Ethical AI Solutions." University of South Florida. (2022).
- Nor, Ahmad Kamal Mohd, et al. "Abnormality Detection and Failure Prediction Using Explainable Bayesian Deep Learning: Methodology and Case Study of Real-World Gas Turbine Anomalies." (2022).
- Xie, Xuan, Kristian Kersting, and Daniel Neider. "Neuro-Symbolic Verification of Deep Neural Networks." arXiv preprint arXiv:2203.00938 (2022).
- Hundt, Andrew, et al. "Robots Enact Malignant Stereotypes." 2022 ACM Conference on Fairness, Accountability, and Transparency. 2022.
- Tidjon, Lionel Nganyewou, and Foutse Khomh. "Threat Assessment in Machine Learning based Systems." arXiv preprint arXiv:2207.00091 (2022).
- Naja, Iman, et al. "Using Knowledge Graphs to Unlock Practical Collection, Integration, and Audit of AI Accountability Information." IEEE Access 10 (2022): 74383-74411.
- Cinà, Antonio Emanuele, et al. "Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning." arXiv preprint arXiv:2205.01992 (2022).
- Schröder, Tim, and Michael Schulz. "Monitoring machine learning models: A categorization of challenges and methods." Data Science and Management (2022).
- Corea, Francesco, et al. "A principle-based approach to AI: the case for European Union and Italy." AI & SOCIETY (2022): 1-15.
- Carmichael, Zachariah, and Walter J. Scheirer. "Unfooling Perturbation-Based Post Hoc Explainers." arXiv preprint arXiv:2205.14772 (2022).
- Wei, Mengyi, and Zhixuan Zhou. "AI Ethics Issues in Real World: Evidence from AI Incident Database." arXiv preprint arXiv:2206.07635 (2022).
- Petersen, Eike, et al. "Responsible and Regulatory Conform Machine Learning for Medicine: A Survey of Challenges and Solutions." IEEE Access (2022).
- Karunagaran, Surya, Ana Lucic, and Christine Custis. "XAI Toolsheet: Towards A Documentation Framework for XAI Tools."
- Paudel, Shreyasha, and Aatiz Ghimire. "AI Ethics Survey in Nepal."
- Ferguson, Ryan. "Transform Your Risk Processes Using Neural Networks."
- Fujitsu Corporation. "AI Ethics Impact Assessment Casebook." 2022.
- Shneiderman, Ben, and Du, Mengnan. "Human-Centered AI: Tools." 2022.
- Salih, Salih. "Understanding Machine Learning Interpretability." Medium. 2022.
- Garner, Carrie. "Creating Transformative and Trustworthy AI Systems Requires a Community Effort." Software Engineering Institute. 2022.
- Weissinger, Laurin. "AI, Complexity, and Regulation" (February 14, 2022). The Oxford Handbook of AI Governance.

2021

- Arnold, Z., Toner, H., CSET Policy. "AI Accidents: An Emerging Threat." (2021).
- Aliman, Nadisha-Marie, Leon Kester, and Roman Yampolskiy. "Transdisciplinary AI Observatory—Retrospective Analyses and Future-Oriented Contradistinctions." Philosophies 6.1 (2021): 6.
- Falco, Gregory, and Leilani H. Gilpin. "A stress testing framework for autonomous system verification and validation (v&v)." 2021 IEEE International Conference on Autonomous Systems (ICAS). IEEE, 2021.
- Petersen, Eike, et al. "Responsible and Regulatory Conform Machine Learning for Medicine: A Survey of Technical Challenges and Solutions." arXiv preprint arXiv:2107.09546 (2021).
- John-Mathews, Jean-Marie. AI ethics in practice, challenges and limitations. Diss. Université Paris-Saclay, 2021.
- Macrae, Carl. "Learning from the Failure of Autonomous and Intelligent Systems: Accidents, Safety and Sociotechnical Sources of Risk." Safety and Sociotechnical Sources of Risk (June 4, 2021) (2021).
- Hong, Matthew K., et al. "Planning for Natural Language Failures with the AI Playbook." Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 2021.
- Ruohonen, Jukka. "A Review of Product Safety Regulations in the European Union." arXiv preprint arXiv:2102.03679 (2021).
- Kalin, Josh, David Noever, and Matthew Ciolino. "A Modified Drake Equation for Assessing Adversarial Risk to Machine Learning Models." arXiv preprint arXiv:2103.02718 (2021).
- Aliman, Nadisha-Marie, and Leon Kester. "Epistemic defenses against scientific and empirical adversarial AI attacks." CEUR Workshop Proceedings. Vol. 2916. CEUR WS, 2021.
- John-Mathews, Jean-Marie. L'Éthique de l'Intelligence Artificielle en Pratique. Enjeux et Limites. Diss. Université Paris-Saclay, 2021.
- Smith, Catherine. "Automating intellectual freedom: Artificial intelligence, bias, and the information landscape." IFLA Journal (2021): 03400352211057145.

smcgregor commented 2 years ago

One more citation just came through: Braga, Juliao, et al. "PROJECT FOR THE DEVELOPMENT OF A PAPER ON ALGORITHM AND DATA GOVERNANCE." (2022).

It was originally in Portuguese: Braga, Juliao, et al. "PROJETO PARA O DESENVOLVIMENTO DE UM ARTIGO SOBRE GOVERNANÇA DE ALGORITMOS E DADOS." (2022).

lmcnulty commented 2 years ago

Do we want to keep the numbering in the URL slugs of the research pages (/1-criteria, /2-roadmap, /3-history)? If so I guess we'd continue with e.g. /4-related-works? I think it looks kind of weird though – it might be better to remove the number prefixes and redirect the old URLs to the non-prefixed versions.
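A minimal sketch of the redirect approach described above, assuming the site could use Gatsby's createRedirect action in gatsby-node; the non-prefixed target slugs and the route prefix are illustrative, not decided names:

```typescript
// gatsby-node.ts -- sketch only: redirect numbered research slugs to
// non-prefixed versions using Gatsby's createRedirect action.
// Paths are illustrative; the real slugs/prefix may differ.
import type { GatsbyNode } from "gatsby";

const slugRedirects = [
  { fromPath: "/1-criteria", toPath: "/criteria" },
  { fromPath: "/2-roadmap", toPath: "/roadmap" },
  { fromPath: "/3-history", toPath: "/history" },
];

export const createPages: GatsbyNode["createPages"] = async ({ actions }) => {
  for (const { fromPath, toPath } of slugRedirects) {
    actions.createRedirect({ fromPath, toPath, isPermanent: true });
  }
};
```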

lmcnulty commented 2 years ago

Nevermind, that seems to be required for the pages to build.