Open zanetworker opened 1 month ago
I’m interested in this topic and happy to help.
Great one, I would like to contribute.
@zanetworker I work for Cisco. I have just started working on a small demo project about AI-powered threat modeling and vulnerability management, so I can perhaps help in that regard. All said and done, I won't call myself an AI expert, but as a cloud and software security professional I can use my experience to assess and extrapolate. Please let me know if you are setting up a meeting to discuss the project and logistics. Thank you.
@fkautz assigned you to the issue. @SophiaUgo @dehatideep great to see you both interested in this.
For starters, I created this brainstorming template/Skeleton: https://docs.google.com/document/d/1z1150HQ3kxUuixAWV75ZRf_LyHclo9-PKamhDPBuNEk/edit
I suggest we discuss this topic both in our AI bi-weekly sync and in one of the tag-security meetings, wdyt @fkautz?
Very well, I will go through the doc, and if I have any questions I will direct them to you. Thank you.
I'll take a look at the template and will post my comments. Thanks.
@zanetworker @fkautz I got occupied and couldn't give time to this earlier, but I can devote some time to it now. I am just wondering: can I still start with the above template, or has progress been made that I should review first? I'll try attending the 8am PT meeting tomorrow, but I can't make it before 8:20am PT.
Hi @dehatideep, no progress has been made on the paper so far. I guess we are all busy and it's waiting for someone to take the initiative. I am planning to add more content, but not for a week or two (finalizing other things). Feel free to contribute; we can also bring it up in the next bi-weekly call and potentially spin up a separate meeting for paper contributors.
@zanetworker I did attend my first AI WG call yesterday and I am starting on this one. I'll start with the template you have and add things there. We can discuss it in our next meeting. Thanks.
Thank you! Looking forward to getting this going 🚀
Hi, I would like to contribute to the security paper, I've added a few bullet points under Solution Space & Tools, in case that helps get the conversation started.
added reference section & comment to template - https://docs.google.com/document/d/1z1150HQ3kxUuixAWV75ZRf_LyHclo9-PKamhDPBuNEk/edit?pli=1
Folks, while researching the scope and coverage of this white paper, I realized that a lot of AI/ML security work is being done in bits and pieces. To streamline things, I spoke to the OpenSSF AI/ML folks as well as tag-security, and it turns out that cloud native AI security is not being addressed as a topic in its own right, even though some of the individual AI/ML security issues are being discussed or worked on piecemeal. So I am convinced we can take this one forward. While getting up to speed on what has been done, what we might need to do, and how, I collected a few useful resources. Take a look at some or all of them:
Useful Resources:
AI/ML Security Groups: https://docs.google.com/spreadsheets/d/1XOzf0LwksHnVeAcgQ7qMAmQAhlHV2iEf4ICvUwOaOfo/edit?gid=0#gid=0
OWASP AI Security and Privacy Guide: https://owasp.org/www-project-ai-security-and-privacy-guide/
CNCF Cloud Native AI Whitepaper: https://tag-runtime.cncf.io/wgs/cnaiwg/whitepapers/cloudnativeai/
Cloud Native Security Whitepaper: https://www.cncf.io/reports/cloud-native-security-whitepaper/
Presentation about security and ML: https://dwheeler.com/secure-class/presentations/AI-ML-Security.ppt
OWASP Resources: https://github.com/OWASP/www-project-top-10-for-large-language-model-applications/wiki/Educational-Resources
This is a good start. Thanks Deep, let's bring it up in our meeting next week. We could also start a bi-weekly cadence to iterate on the paper. Let's make sure we capture enough interest to sustain that cadence (through the bi-weekly call or Slack).
Thank you again for looking into this :)
Sure thing! I am as excited to proceed on this one as it gets!
Agreed, that’s why I chose to engage here. OpenSSF primarily covers the AI models themselves. I posted a snippet of their scope below.
We have an opportunity to provide meaningful guidance on securing AI in the context of cloud native infrastructure.
From https://github.com/ossf/ai-ml-security
“This WG explores the security risks associated with Large Language Models (LLMs) and other deep learning models and their impact on open source projects, maintainers, their security, communities, and adopters.
This group participates in collaborative research and peer organization engagement to explore the risks posed to individuals and organizations by LLMs and AI; such as data poisoning, privacy and secret leakage, prompt injection, licensing, adversarial attacks, and others alongside risks introduced through AI prompt guided development.”
Agreed, let’s bring up setting a time to meet in the next AI WG meeting.
-- Frederick F. Kautz IV
The increasing adoption of AI in cloud-native environments presents a compelling case for prioritizing AI security. As AI systems become integral to decision-making and automation, the potential impact of security breaches becomes a critical concern. Compromised AI models can lead to incorrect predictions, manipulated outcomes, and even the theft of sensitive intellectual property. Moreover, regulatory compliance and customer trust are at stake when AI systems are not adequately secured. This paper should aim to address these concerns by providing a guide to securing AI in cloud-native environments, offering practical solutions and strategies to mitigate risks and ensure the integrity of AI-powered applications. Along these lines, here are some rough goals/ideas:
Brainstorming Template: https://docs.google.com/document/d/1z1150HQ3kxUuixAWV75ZRf_LyHclo9-PKamhDPBuNEk/edit
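To make the "compromised AI models" risk above a bit more concrete, here is a minimal illustrative sketch (not from the template or any existing tooling; all names are hypothetical) of one basic supply-chain control the paper could cover: verifying a model artifact against a pinned digest before loading it, so a tampered model file is rejected.

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, streaming in chunks
    so large model weights don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_model_artifact(path: Path, expected_digest: str) -> bool:
    """Return True only if the artifact matches its pinned digest.

    The digest would typically be pinned in config or supplied by a
    signed manifest; a mismatch means the model should not be loaded.
    """
    return sha256_of(path) == expected_digest.lower()
```

In practice this would be one layer among several (signing with something like Sigstore, provenance attestations, admission control), but a pinned checksum is the smallest useful example of model integrity checking.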