AI/ML Security WG

This is the GitHub repository of the OpenSSF Artificial Intelligence / Machine Learning (AI/ML) Security Working Group (WG). The OpenSSF Technical Advisory Council (TAC) approved its creation on 2023-09-05.

The AI/ML Security Working Group is officially a sandbox-level working group within the OpenSSF.

Objective

This WG explores the security risks associated with Large Language Models (LLMs), Generative AI (GenAI), and other forms of artificial intelligence (AI) and machine learning (ML), and their impact on open source projects, maintainers, their security, communities, and adopters.

This group engages in collaborative research and peer-organization engagement to explore topics related to AI and security. This includes security for AI development (e.g., supply chain security) as well as using AI for security. We cover risks posed to individuals and organizations by improperly trained models, data poisoning, privacy and secret leakage, prompt injection, licensing issues, adversarial attacks, and other similar risks.

This group leverages prior art in the AI/ML space, draws upon both security and AI/ML experts, and pursues collaboration with other communities (such as the CNCF's AI WG, LF AI & Data, the AI Alliance, MLCommons, and many others) who are also researching the risks AI/ML presents to OSS. Together we aim to provide guidance, tooling, techniques, and capabilities that help open source projects and their adopters securely integrate, use, detect, and defend against LLMs.

Vision

We envision a world where AI developers and practitioners can easily identify and apply good practices for developing products that use AI securely. In this world, AI can produce code that is secure, and AI usage in an application does not downgrade its security guarantees.

These guarantees extend over the entire lifecycle of the model, from data collection to using the model in production applications.

The AI/ML Security Working Group aims to serve as a central place to collate recommendations for using AI securely ("security for AI") and for using AI to improve the security of other products ("AI for security").

Scope

Some areas of consideration this group explores:

Anyone is welcome to join our open discussions.

WG Leadership

Co-Chairs:

How to Participate

Current Work

We welcome contributions, suggestions, and updates to our projects. To contribute on GitHub, please file an issue or create a pull request.

Projects:

The AI/ML WG has voted to approve the following projects:

Name            Purpose                           Creation issue
Model signing   Cryptographic signing for models  #10
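To illustrate the idea behind model signing, here is a minimal sketch (not the project's actual implementation, which builds on established signing infrastructure): compute a digest of the serialized model artifact, sign that digest, and verify the signature before loading. The function names are illustrative, and Python's stdlib HMAC stands in for a real asymmetric signature scheme.

```python
import hashlib
import hmac

def digest_model(model_bytes: bytes) -> str:
    """Compute a SHA-256 digest of the serialized model artifact."""
    return hashlib.sha256(model_bytes).hexdigest()

def sign_digest(digest: str, key: bytes) -> str:
    """Sign the digest. HMAC is a stand-in here for a real
    asymmetric signature over the digest."""
    return hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()

def verify(model_bytes: bytes, signature: str, key: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    expected = sign_digest(digest_model(model_bytes), key)
    return hmac.compare_digest(expected, signature)

# Hypothetical model bytes and key, for demonstration only.
model = b"fake model weights"
key = b"demo-key"
sig = sign_digest(digest_model(model), key)
print(verify(model, sig, key))               # True
print(verify(b"tampered weights", sig, key)) # False: tampering detected
```

In a real deployment the digest would be signed with a private key (or a keyless signing flow) so that consumers can verify provenance with only public material; the symmetric HMAC above is purely to keep the sketch self-contained.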

More details about the projects:

Upcoming work

This WG is currently exploring establishment of an AI Vulnerability Disclosure SIG. Please refer to the group's meeting notes for more information.

See also the MVSR document, which also lists the other AI/ML working groups we are collaborating with.

Licenses

Unless otherwise specifically noted, software released by this working group is released under the Apache 2.0 license, and documentation is released under the CC-BY-4.0 license. Formal specifications would be licensed under the Community Specification License.

Charter

Like all OpenSSF Working Groups, this group reports to the OpenSSF Technical Advisory Council (TAC). For more information see this Working Group Charter.

Antitrust Policy Notice

Linux Foundation meetings involve participation by industry competitors, and it is the intention of the Linux Foundation to conduct all of its activities in accordance with applicable antitrust and competition laws. It is therefore extremely important that attendees adhere to meeting agendas, and be aware of, and not participate in, any activities that are prohibited under applicable US state, federal or foreign antitrust and competition laws.

Examples of types of actions that are prohibited at Linux Foundation meetings and in connection with Linux Foundation activities are described in the Linux Foundation Antitrust Policy available at http://www.linuxfoundation.org/antitrust-policy. If you have questions about these matters, please contact your company counsel, or if you are a member of the Linux Foundation, feel free to contact Andrew Updegrove of the firm of Gesmer Updegrove LLP, which provides legal counsel to the Linux Foundation.