OWASP / www-project-top-10-for-large-language-model-applications


Remove LLM05: Supply Chain Vulnerabilities in Favor of LLM03: Training Data Poisoning and LLM07: Insecure Plugin Design #119

Closed. GangGreenTemperTatum closed this issue 3 weeks ago

GangGreenTemperTatum commented 1 year ago

Background:

Proposed Changes

guerilla7 commented 1 year ago

When we threat model an LLM Application, we have to ensure that certain prerequisites are taken into consideration, which makes them out of scope so we can focus specifically on the risks and vulnerabilities present in the area we are assessing. These may be explicitly stated or inherently understood, and are not listed as a top risk during the exercise.

Fully support this statement and proposed changes.

In my experience, foundational security controls, which include Supply Chain Security, are addressed first with a standard threat modeling approach, followed by ML/LLM-focused threat modeling by a security lead familiar with ML and its emerging risks. This is where the OWASP LLM Top 10 can really shine, since the security lead will have access to known security risks outside of their personal experience.

GangGreenTemperTatum commented 1 year ago

When we threat model an LLM Application, we have to ensure that certain prerequisites are taken into consideration, which makes them out of scope so we can focus specifically on the risks and vulnerabilities present in the area we are assessing. These may be explicitly stated or inherently understood, and are not listed as a top risk during the exercise.

Fully support this statement and proposed changes.

In my experience, foundational security controls, which include Supply Chain Security, are addressed first with a standard threat modeling approach, followed by ML/LLM-focused threat modeling by a security lead familiar with ML and its emerging risks. This is where the OWASP LLM Top 10 can really shine, since the security lead will have access to known security risks outside of their personal experience.

Thanks @guerilla7 for the support, glad you agree on this one! 😀

Bobsimonoff commented 1 year ago

Disagree with some of the above

Training data poisoning is not necessarily a subset of supply chain vulnerability. Training data poisoning can occur if you are training your own model and not using third-party data. Training data poisoning and supply chain vulnerability overlap only when your training data and your model come from a third party. Consider using user thumbs up/thumbs down to fine-tune your model. That's my data that I curate, I validate, and I use for my fine-tuning; I'm not sure I see the supply chain there.

I will say the same about plug-ins. If a company develops their own plug-in, then they may have introduced an insecure plug-in design. If the plug-in comes from a third party, however, they have a supply chain vulnerability.

I wonder if we should keep supply chain vulnerability and be specific about when something falls into it. Unfortunately, the examples in supply chain vulnerability, training data poisoning, and, I think, insecure plug-in design confuse things.

Note: There are LLM application supply chain vulnerabilities that have nothing to do with plug-ins or training data or models: agents, for example. The software supply chain for large language models is so new and being built so fast that I could argue that the componentry is a significant security risk as we have already seen in multiple articles.

GangGreenTemperTatum commented 1 year ago

Training data poisoning is not necessarily a subset of supply chain vulnerability.

This is still lineage, integrity, and internal supply chain (it's not always an external factor).
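To make "lineage and integrity" concrete: even fully in-house training data can be protected mechanically, for example by pinning a hash of every training file in a manifest and verifying it before each fine-tuning run. A minimal sketch (mine, not from this thread; `manifest.json` and its layout are hypothetical):

```python
# Verify training files against a pinned manifest of SHA-256 digests,
# so tampering anywhere in the internal data pipeline is detectable.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_dataset(data_dir: Path, manifest_path: Path) -> list[str]:
    """Return the files whose current hash no longer matches the manifest."""
    manifest = json.loads(manifest_path.read_text())  # {"train.jsonl": "<sha256>", ...}
    return [
        name for name, expected in manifest.items()
        if sha256_of(data_dir / name) != expected
    ]

if __name__ == "__main__":
    tampered = verify_dataset(Path("data"), Path("manifest.json"))
    if tampered:
        raise SystemExit(f"Integrity check failed for: {tampered}")
```

The point being that the same provenance discipline applies whether or not the data ever crossed an organizational boundary.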

I wonder if we should keep supply chain vulnerability and be specific about when something falls into it.

I don't think it's a sufficiently distinct vulnerability to have an individual entry, and this list will continue to grow as technology advances, but supply chain security is ultimately a standard security practice which is the responsibility of the company/model provider, each of which has its own unique use case. I feel like having each vulnerability which touches supply chain include a mention (as is done in Training Data Poisoning) is a lot more scalable.

Again, the context of why we did this from my prior comment:

If we also take the foundation of the OWASP Top 10 lists - the [OWASP Top 10](https://owasp.org/www-project-top-ten/) - as an example, then it is clear (or at least to a security professional) that even though Supply Chain is not listed as a vulnerability, an organization referencing and basing their Web Application Security stack on the OWASP Top 10 certainly still needs to treat the whole concept of Supply Chain (via an SBOM, for example) as extremely important to that software stack.

This project is a specific document aimed at LLM Applications; IMO we shouldn't be putting standard security practices into this list... Take User Access Levels as an example: this is a threat too and spans many different types of tools, systems, and data, but we don't ever feel the need to list it out explicitly.

Bobsimonoff commented 1 year ago

Ok @GangGreenTemperTatum, then I am more aligned... that would mean, however, that we need a new vulnerability to keep a top 10, correct?

GangGreenTemperTatum commented 1 year ago

Correct, yes exactly. This likely won't make the v1.1 minor release, but I think it should be introduced in v1.2 or, at the latest, v2.0, in my opinion.

jsotiro commented 1 year ago

Supply chain vulnerabilities are a key aspect in all OWASP Top 10 lists, under different names to highlight the area of concern: A06 (which moved up in 2021) in the flagship list covers vulnerable and outdated components, and no one has suggested that, because vulnerabilities are vulnerabilities regardless of source, we should remove the entry.

Similarly, in its context, the OWASP Top 10 for APIs has API10:2023 Unsafe Consumption of APIs, which covers the risks of consuming third-party items. I am not sure anyone has argued that A06:2021 or API10:2023 takes away from the power of their top 10s. On the contrary, API10 was introduced in 2023 to reflect the concerns of our increasing supply-chain dependencies. The first defense it suggests is "When evaluating service providers, assess their API security posture". This is exactly what LLM05 recommends. It would be a step backward to say "Don't worry, a suit has checked it out for you in their diligence".

AI and ML supply chains are an even bigger issue because of external data sets and, more importantly, transfer learning, which allows poisoned models to be taken as they are. The references in LLM05 to the model poisoning on Hugging Face and the breach of Meta's Hugging Face account illustrate the point. I do not understand the logic that says a vulnerable component deserves a place in OWASP's flagship Top 10 but vulnerable models from Hugging Face do not. Vulnerable models could involve not just data poisoning but also malware in pickle files.
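To make the pickle point concrete, here is a minimal sketch (mine, not from LLM05) of the underlying mechanism: unpickling executes code, so a model checkpoint serialized with pickle can carry a payload that runs the moment the file is loaded.

```python
# Why "malware in pickle files" works: unpickling can invoke arbitrary
# callables via __reduce__. Never load pickles from untrusted sources.
import os
import pickle

class Payload:
    def __reduce__(self):
        # On unpickling, this calls os.system("echo pwned") -- a benign
        # stand-in for what real malware in a model file could do.
        return (os.system, ("echo pwned",))

malicious_bytes = pickle.dumps(Payload())
pickle.loads(malicious_bytes)  # prints "pwned": code ran just by loading
```

This is also why model scanners inspect the serialized opcodes rather than trusting a file because of where it came from.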

There are different defenses for preventing these issues from happening in your own environment, and the focus shifts when you consume artifacts as they are: you cannot verify what diligence was applied during their creation, which requires more extensive validation and less trust. That's why, like component vulnerability scanners, we are starting to see model vulnerability scanners such as ModelScan from Protect AI for malicious code and Giskard for poisoning.

The logic applies to plugins, too. This is why LLM05 is very clearly delineated from LLM07, and the two reference each other. LLM05 is about the defenses we should apply when we use somebody else's components, and includes testing for the vulnerabilities of LLM07, but LLM07 is about the diligence and controls you need to apply when you create plugins, to avoid introducing those vulnerabilities.

I would expect a similar delineation with model poisoning, and I think this is where the issue is. I believe LLM03 is about data poisoning happening in your environment, or an environment where you have some visibility, whereas LLM05 is about the additional diligence you need to show for externally sourced data. I accept that this is less obvious than models and components, which is why it may cause more of a blurred line. However, sourcing external data from a shady supplier has very different risk profiles and provenance requirements from fine-tuning using your own data.

Finally, given the size and complexity of LLMs, many will rely on model platforms. As the UK's National Cyber Security Centre (NCSC) recommends, diligence on the platform's T&Cs and the implications they have for data privacy is essential to prevent data and privacy leaks, especially regarding whether data is being reused for public model training. LLM05 takes a paragraph to highlight this and inform a security professional to double-check it. Sure, someone else must have checked the T&Cs, but hope is not a strategy and the warning is worth preserving. This logic is aligned with that of Overreliance and also API10:2023 and A04:2021; i.e., we don't assume, and we don't focus only on coding-based concerns.

Overall, the entry had a good level of support during the 0.5 -> 0.9 phase and had a decent score. I would avoid removing it based on expert opinion rather than data.

I would still ask the question: why do third-party vulnerable components deserve a Top 10 item in the main OWASP Top 10, but vulnerable models on Hugging Face, including pickles with malware, do not?

But ultimately we need to also reflect the concerns of the industry. My recommendation would be to monitor this as part of data-driven decisions and revisit it for v2 or later. As always, I try to be open-minded and am happy to discuss further, and I would be interested in what this item is preventing from being on the list.

Bobsimonoff commented 1 year ago

Correct, yes exactly. This likely won't make the v1.1 minor release, but I think it should be introduced in v1.2 or, at the latest, v2.0, in my opinion.

This has interesting implications for Prompt Injection, which I could argue is a repetition of the OWASP Top 10 Injection vulnerability, with prompt injection just another CWE that would point to it. Though, tbh, I am not sure we'd get away with dropping prompt injection and just referring to the general Injection topic.

That all said, I am fully onboard with eliminating Supply Chain.

jsotiro commented 1 year ago

@Bobsimonoff please read my previous response on why I think it's a bad idea to eliminate this entry.

Regardless of my opinion, given the support the entry had at the beginning, and the scale of the change, this should be part of our data-driven decision and not expert opinion.

If the consensus from users of our list and practitioners emerges that supply-chain issues can be captured under individual vulnerabilities, then we should absorb the entry into them. I would still disagree, but ultimately the list is for its users, not for the correctness of my views.

We should really drive all this from what helps practitioners build safe LLM apps, not what can be semantically consistent. Otherwise, we get to the conundrums you highlight, such as whether we should have the injection vulnerability, which of course we should, as it is the #1 challenge practitioners and organizations discuss around LLMs.

rharang commented 1 year ago

+1 for keeping this as a vulnerability -- there are serialization vulnerabilities, there are typosquatted HuggingFace accounts that could host foundation models backdoored via fine-tuning, and there are models trained on data that might not have been adequately controlled (accidentally backdoored, biased, or poisoned). In all of these cases, the core issue is that the model itself is potentially not trustworthy.
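One concrete mitigation for the serialization and typosquatting risks (my sketch, not from this comment) is to pin the exact revision you pull and prefer a non-executable weights format such as safetensors; the repo id and revision below are placeholders:

```python
# Reduce model supply-chain exposure by (1) pinning an exact commit hash so a
# compromised or typosquatted repo can't silently swap weights, and
# (2) loading safetensors, which stores raw tensors and cannot run code.
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

weights_path = hf_hub_download(
    repo_id="some-org/some-model",  # hypothetical repo id: verify the publisher!
    filename="model.safetensors",
    revision="7f3c2a1d9e8b4f6a5c0d1e2f3a4b5c6d7e8f9a0b",  # pin a known commit
)
state_dict = load_file(weights_path)  # dict of tensors; no code execution on load
```

None of this proves the weights aren't backdoored, but it narrows the attack surface to what you actually reviewed.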

See the following presentations just from AI Village this year:

On top of that, there are risks associated with malicious plugins that are insecure by design (see e.g. https://embracethered.com/blog/posts/2023/chatgpt-plugin-vulns-chat-with-code/) -- if you don't evaluate the source for a plugin, you're looking at supply chain risk again. This isn't a case of prompt injection.
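As a rough illustration of "evaluate the source" (my sketch; the manifest path and fields follow the OpenAI plugin spec, and the domain is hypothetical), a first-pass check might fetch the plugin's manifest and flag obvious red flags:

```python
# Crude pre-adoption check for a third-party ChatGPT-style plugin: pull its
# manifest and surface anything worth a closer look before trusting it.
import json
from urllib.parse import urlparse
from urllib.request import urlopen

def inspect_plugin(domain: str) -> list[str]:
    manifest = json.load(urlopen(f"https://{domain}/.well-known/ai-plugin.json"))
    warnings = []
    if manifest.get("auth", {}).get("type") == "none":
        warnings.append("plugin API requires no authentication")
    api_host = urlparse(manifest.get("api", {}).get("url", "")).hostname or ""
    if not api_host.endswith(domain):
        warnings.append(f"API is hosted on a different domain: {api_host!r}")
    return warnings

print(inspect_plugin("example-plugin.com"))  # hypothetical plugin domain
```

A real review would go much further (the OpenAPI spec, the vendor's data handling, etc.), but even this catches the laziest impersonations.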

I think there's value in distinguishing between this and training data poisoning in particular, but this seems important to keep around as a distinct vulnerability.

GangGreenTemperTatum commented 1 year ago

To confirm, my intention/request was not for this entry to be removed in v1.1; it requires additional planning and would be fit for v2.0 🙂 Appreciate everyone's feedback here!

Here @jsotiro

Supply chain vulnerabilities are a key aspect in all OWASP Top 10 lists, under different names to highlight the area of concern: A06 (which moved up in 2021) in the flagship list covers vulnerable and outdated components, and no one has suggested that, because vulnerabilities are vulnerabilities regardless of source, we should remove the entry.

Similarly, in its context, the OWASP Top 10 for APIs has API10:2023 Unsafe Consumption of APIs, which covers the risks of consuming third-party items. I am not sure anyone has argued that A06:2021 or API10:2023 takes away from the power of their top 10s.

I agree with this statement on both ends. I guess this could be approached as: how do we perceive our Top 10 fitting into the community? I was one of many appsec enthusiasts who agreed with API10:2023 Unsafe Consumption of APIs but didn't feel I needed the OWASP Top 10 for API Security to tell me to maintain a rigorous supply chain. I hate the SBOM terminology, which blew up the market with a buzzword, which is personally why I think this 🤣

Is our list composed for security individuals and personnel who are aware of this dependency already? If so, we should not waste efforts. If not, and it's for the greater audience, then I am happy. Again, if not, how do we manage each individual vulnerability which needs to cover supply chain as a mitigation (e.g., for Training Data Poisoning, one supply chain avenue is the training data)? Do we cross-reference? (LLM05, LLM07, and LLM10, for example.) This is my confusion about the overlap between them, which led me to raise this. If we cover each vulnerability which needs to maintain a secure supply chain, can we not manage these within each vulnerability rather than in a unique entry?

Because there are so many avenues to supply chain in MLOps which can lead to vulnerabilities, do we feel this is the reason it needs to be called out specifically as its own vulnerability? In v1.1 I plan on enhancing the "Training Data Poisoning" entry to clarify the different types of poisoning, contamination, etc., which can relate to different supply chain avenues.

I feel it's up to leadership and everyone involved in this PR to vote on the decision going forward 🙂

Here @rharang

On top of that, there are risks associated with malicious plugins that are insecure by design (see e.g. https://embracethered.com/blog/posts/2023/chatgpt-plugin-vulns-chat-with-code/) -- if you don't evaluate the source for a plugin, you're looking at supply chain risk again. This isn't a case of prompt injection.

Again, as before, I completely agree with you 100%. My stance when raising the GitHub issue was that this is implicitly assumed by LLM application developers and would be defined within the specific LLM entry (in this example, "Insecure Plugin Design" (LLM07)).

Bobsimonoff commented 1 year ago

A few more thoughts... This seems to be boiling down to a few things:

If all of that is true, then I have two questions:

  1. Shouldn't we change this vulnerability to focus on these? Right now it is very wide-reaching.
  2. What possible vulnerability is specific to acquired models and acquired fine-tuning, yet never occurs with a model you built or fine-tuned yourself?

The second item above then makes me wonder whether not only the text needs to change, but maybe the title as well?

GangGreenTemperTatum commented 1 year ago

Had a fantastic discussion with @jsotiro! Thanks!!

OK, so we have a lot of things in motion here, and we always agreed that v1.1 was not going to remove/replace entries. As per the v1.1 instructions, the improvements for the vulnerabilities are due to go live October 1, 2023, which will have a significant impact on our thoughts and ideas.

How about we park this until then and re-visit afterwards? (Suggesting we keep the GitHub issue "open", just so we don't lose visibility - I hate to rely on a reminder is all.) Please 👍🏼 / 👎🏼 or reply to this comment.

Thank you everyone for your contributions and feedback! 🙏🏼 🙇🏼

GangGreenTemperTatum commented 1 year ago

Howdy all, please see this comment in #156 for the Training Data Poisoning entry:

Keeping this PR open over the weekend for comments and suggestions via a PR review or comment; I will merge Sunday PM if there are no responses or there are thumbs up overall! If we deem improvements necessary after Sunday, I can submit another PR, no problem, as we have a week to finalize.

If you want to suggest any amendments this weekend, please submit them via a PR review or comment; if not, they can wait until next week, thanks!

Bobsimonoff commented 1 year ago

Ok, I've been thinking about this topic. While I still think training data poisoning and supply chain have some overlap that we could address, I'm swinging toward thinking supply chain must be kept. Likely 99% of applications built using LLMs will get their LLM platform, model, agents, and/or training data from elsewhere. I need to reread supply chain and training data poisoning to have a concrete recommendation, but I'm coming around.

GangGreenTemperTatum commented 1 year ago

I'm swinging toward thinking supply chain must be kept.

Sounds fair, and this was a suggestion; I'm not 10000% for removing it in favor of some alternative X (I don't know what that even is right now).

However, I do think they are definitely unique and we need to keep Training Data Poisoning regardless too.

Bobsimonoff commented 1 year ago

For documentation purposes, this article helped push me over the edge... https://www.rezilion.com/blog/report-the-risk-of-generative-ai-and-large-language-models/

My takeaway related to this topic is that the use of generative AI is arguably gaining adoption faster than any previous software technology. In less than a year we went from a handful of people being aware of LLMs to almost every midsized and larger company having LLM adoption on their roadmap. But at the same time, the security maturity of these packages is likely on par with their age.

Even if we argue about the specifics in the report, the general thesis rings true to me: we are taking a software ecosystem that has been, at best, lightly security-tested into worldwide usage at an unprecedented rate. That IS scary.

rossja commented 1 year ago

I think I'd caution against merging these, pending more maturity in the marketplace around how this occurs -- I think this is going to be a real problem "soon", and one that is distinct from supply chain, as more companies roll out iterative/continuous learning on smaller models that they train in-house. If it's your own model, and you are the one training it on your own hosting platforms, I'm not sure that still counts as "supply chain".

I also think there will be a "time to discovery" factor involved. For example, it may very well be that some of the image-gen models that learn from prompts are already "poisoned" (for example, with CSAM material that may have been generated and folded into training material), but it hasn't surfaced yet.

Bobsimonoff commented 1 year ago

I've come around a bit in my thinking and now agree with @rossja ...

Training Data Poisoning deals with the training data regardless of whether the training data is developed completely in-house.

Insecure Plugin Design, as currently defined, deals with natural language inputs from the LLM and the problems/uncertainties there, regardless of whether the plugin is third-party or developed in-house.

There are plenty of other things, like LangChain, additional plugin issues beyond those defined in Insecure Plugin Design, the LLM platform itself, etc., that are still huge risks IMHO.

guerilla7 commented 1 year ago

Apologies, I lost track of this; did we reach consensus? Headed towards v2?

GangGreenTemperTatum commented 1 year ago

Apologies, I lost track of this; did we reach consensus? Headed towards v2?

Yessir, v2 label applied @guerilla7

jsotiro commented 3 weeks ago

We have agreed to keep and evolve the entries, with clear cross-referencing.