zanetworker opened this issue 3 months ago
I’m interested in this topic and happy to help.
Great one, I would like to contribute.
@zanetworker I work for Cisco. I have just started working on a small demo project about AI-powered threat modeling and vulnerability management, so I can perhaps help in this regard. That said, I won't call myself an AI expert, but as a cloud and software security professional I can use my experience to assess and extrapolate. Please let me know if you are setting up a meeting to discuss the project and logistics. Thank you.
@fkautz assigned you to the issue. @SophiaUgo @dehatideep great to see you both interested in this.
For starters, I created this brainstorming template/Skeleton: https://docs.google.com/document/d/1z1150HQ3kxUuixAWV75ZRf_LyHclo9-PKamhDPBuNEk/edit
I suggest we discuss this topic both in our AI bi-weekly sync and in one of the tag-security meetings, wdyt @fkautz?
Very well, I will go through the doc and if I have any question will direct them to you. Thank you.
I'll take a look at the template and post my comments. Thanks.
@zanetworker @fkautz I got occupied and couldn't give this one time, but I can devote some time to it now. Can I still start with the above template, or has progress been made that I should review first? I'll try to attend the 8am PT meeting tomorrow, but I can't make it before 8:20am PT.
Hi @dehatideep, no progress has been made on the paper so far. I guess we are all busy and it's waiting for someone to take the initiative. I am planning to add more content, but not for a week or two (I'm finalizing other things). Feel free to contribute; we can also bring it up in the next bi-weekly call and potentially spin up a separate meeting for paper contributors.
@zanetworker I did attend my first AI WG call yesterday and I am starting on this one. I'll start with the template you have and add things there. We can discuss it in our next meeting. Thanks.
Thank you! Looking forward to getting this going 🚀
Hi, I would like to contribute to the security paper. I've added a few bullet points under Solution Space & Tools, in case that helps get the conversation started.
Added a reference section and comments to the template: https://docs.google.com/document/d/1z1150HQ3kxUuixAWV75ZRf_LyHclo9-PKamhDPBuNEk/edit?pli=1
Folks, while researching the scope and coverage of this white paper, I realized that a lot of AI/ML security work is being done in bits and pieces. To confirm and streamline this, I spoke to the OpenSSF AI/ML folks as well as tag-security, and it turns out that cloud native AI security is not being addressed as such, even though some AI/ML security issues are discussed or worked on piecemeal. So I am convinced we can take this one forward. While getting up to speed on what has been done, what we still need to do, and how, I collected a few useful resources. Take a look at some or all of them.
Useful resources:
- AI/ML Security Groups: https://docs.google.com/spreadsheets/d/1XOzf0LwksHnVeAcgQ7qMAmQAhlHV2iEf4ICvUwOaOfo/edit?gid=0#gid=0
- OWASP AI Security and Privacy Guide: https://owasp.org/www-project-ai-security-and-privacy-guide/
- CNCF Cloud Native AI Whitepaper: https://tag-runtime.cncf.io/wgs/cnaiwg/whitepapers/cloudnativeai/
- Cloud Native Security Whitepaper: https://www.cncf.io/reports/cloud-native-security-whitepaper/
- Presentation about security and ML: https://dwheeler.com/secure-class/presentations/AI-ML-Security.ppt
- OWASP LLM Top 10 educational resources: https://github.com/OWASP/www-project-top-10-for-large-language-model-applications/wiki/Educational-Resources
This is a good start, thanks Deep! Let's bring it up in our meeting next week. Also, we could start iterating on the paper on a bi-weekly cadence. Let's make sure we capture enough interest to sustain that cadence (through the biweekly or Slack).
Thank you again for looking into this :)
Sure thing! I am as excited to proceed on this one as it gets!
Agreed, that's why I chose to engage here. OpenSSF primarily covers the AI models themselves. I posted a snippet of their scope below.
We have an opportunity to provide meaningful guidance on securing AI in the context of cloud native infrastructure.
From https://github.com/ossf/ai-ml-security
“This WG explores the security risks associated with Large Language Models (LLMs) and other deep learning models and their impact on open source projects, maintainers, their security, communities, and adopters.
This group participates in collaborative research and peer organization engagement to explore the risks posed to individuals and organizations by LLMs and AI, such as data poisoning, privacy and secret leakage, prompt injection, licensing, adversarial attacks, and others, alongside risks introduced through AI prompt guided development.”
Agreed, let’s bring up setting a time to meet in the next AI WG meeting.
@zanetworker @dehatideep I work as an ML Scientist at Protect AI, focusing on the security of ML models. I would like to contribute to the AI security white paper and was wondering if I could be added to the AI WG meeting. Thank you!
Can I get added too, please? I'm one of the authors of Google's AI supply chain paper and leading the work on https://github.com/sigstore/model-transparency
@mihaimaruseac @mehrinkiani Thank you. We have not created a workgroup yet, but we now have enough folks that we should create one; I'll work on it. @zanetworker I brought this issue up in one of the OpenSSF AI/ML meetings today, and that's how folks came to know about it. I'm going to propose some timings for the workgroup at the coming CNCF tag-runtime meeting, and we can proceed from there. Thanks.
Can confirm many people are interested in at least a couple of topics, such as a survey of techniques and a survey of tools (since much of this is new), plus thoughts on what is coming, both recently and into the future. Comparing known techniques, like outlier detection in the traditional sense or time series / statistical analysis, against newer approaches, like the use of LLMs, would help show non-experts that people are thinking about these things and how the approaches compare. Just some thoughts.
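To make the "traditional baseline" side of that comparison concrete, here is a minimal sketch (purely illustrative, not from the paper) of classic z-score outlier detection over a metric time series; the latency values and threshold are made up for the example:

```python
# Minimal sketch of classic z-score outlier detection over a time series.
# Purely illustrative: the threshold and data are arbitrary, and a real
# pipeline would likely use more robust statistics (e.g., median/MAD).
from statistics import mean, stdev

def zscore_outliers(series, threshold=2.5):
    """Return (index, value) pairs whose z-score exceeds the threshold."""
    mu, sigma = mean(series), stdev(series)
    if sigma == 0:
        return []
    return [(i, x) for i, x in enumerate(series) if abs(x - mu) / sigma > threshold]

# Example: inference latencies in ms, with one anomalous spike.
latencies = [12, 14, 13, 15, 12, 13, 250, 14, 13, 12]
print(zscore_outliers(latencies))  # -> [(6, 250)]
```

An LLM-based approach would then be judged against the same examples, which is exactly the kind of side-by-side survey material mentioned above.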
Folks, I have raised an issue with tag-security to coordinate on this paper, but that might take time to resolve because they are looking for an established process. In the meantime, I believe we can and should start making progress with the set of folks who have expressed interest here. To make that happen, I sent a poll through Slack to find a meeting cadence so we can move faster. Please respond to that poll. Thank you.
Folks, the Cloud Native AI Security Whitepaper group has its first meeting tomorrow (Fri, 8AM PDT / 5PM CEST). Please attend. This meeting will happen on the Fridays in between our regular AI WG Friday meetings (the 1st and 3rd of the month). ** This is not in the calendar yet, so please take note. We'll use the same Zoom link we use for AI WG meetings: https://zoom.us/j/9890721462?pwd=N2xyRkZaN2JWZkNmS3EzbE1HVnhEQT09 Thank you.
@dehatideep I would like to contribute to the AI security paper, so I'm commenting here to get notified of any communication regarding it. I will also put my suggestions in the shared doc around the personas and the different cycles within AI.
I attended the meeting today and I'm interested in contributing. Thanks :)
I also missed the meeting, as it was announced too close to the event and I was already at a different one :(
We'll post the recording soon.
Folks, our next meeting is scheduled for Fri, Nov 01 at 8AM PDT / 5PM CEST. Please attend. This meeting takes place on the Fridays in between our regular AI WG Friday meetings (the 1st and 3rd of the month). ** This is not in the calendar yet, so please take note. We'll use the same Zoom link we use for AI WG meetings. Zoom Link
I have separated the brainstorming template and the meeting notes; both will remain living documents. The brainstorming template will contain deliverables, and the meeting notes will capture meeting discussions. Of course, the meeting notes will provide the basis for much of what we capture in the brainstorming doc. You can put items you want to discuss in the agenda.
Meeting notes. Thank you.
Hi folks,
I just noticed this interesting project. Since I missed the first two meetings, can I attend the following meetings and contribute to the white paper? I am a CISSP and Senior Cloud Security Engineer with a CKS at Wyze Labs, working on several AI projects in our company, such as https://www.wyze.com/pages/ai-video-search?srsltid=AfmBOoqrOoDi8gZadDZh6BPMoJf_haQp-Dg6IclDe2JphehiYYqBkADk
Of course you can, and you are welcome. The meeting is open, and the next one will happen on 11/15. We announce it on https://cloud-native.slack.com/archives/C05TYJE81SR. Please join the channel and simply turn up at the meeting. Thanks.
The increasing adoption of AI in cloud-native environments presents a compelling case for prioritizing AI security. As AI systems become integral to decision-making and automation, the potential impact of security breaches becomes a critical concern. Compromised AI models can lead to incorrect predictions, manipulated outcomes, and even the theft of sensitive intellectual property. Moreover, regulatory compliance and customer trust are at stake when AI systems are not adequately secured. This paper should aim to address some of these concerns by providing a guide to securing AI in cloud-native environments, offering practical solutions and strategies to mitigate risks and ensure the integrity of AI-powered applications. Along these lines, here are some rough goals/ideas:
Brainstorming Template: https://docs.google.com/document/d/1z1150HQ3kxUuixAWV75ZRf_LyHclo9-PKamhDPBuNEk/edit
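As one small example of the kind of practical guidance the paper could offer, consider model artifact integrity: verifying a model's digest against a pinned value before loading it, so a tampered or swapped artifact fails closed. This is a minimal sketch only; the file name and expected digest are placeholders, and projects like sigstore/model-transparency (mentioned above) go further by actually signing models:

```python
# Minimal sketch: verify a model artifact against a pinned SHA-256 digest
# before loading it, so a tampered or swapped model fails closed.
# The expected digest and file name below are placeholders, not real values.
import hashlib
from pathlib import Path

EXPECTED_SHA256 = "0" * 64  # placeholder: pin the digest published with the model

def verify_model(path: Path, expected: str = EXPECTED_SHA256) -> None:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
            digest.update(chunk)
    if digest.hexdigest() != expected:
        raise RuntimeError(f"Model digest mismatch for {path}; refusing to load.")

# verify_model(Path("model.safetensors"))  # then hand the file to the model loader
```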