OWASP / ASVS

Application Security Verification Standard
Creative Commons Attribution Share Alike 4.0 International
2.6k stars 637 forks source link

Large Language Models (LLM) in the ASVS #1730

Open ImanSharaf opened 9 months ago

ImanSharaf commented 9 months ago

(Ed note, original issue title was: Prevention of Prompt Injection in Applications Using Large Language Models (LLM))

The popularity of Large Language Models (LLM) like GPT variants from OpenAI has been on a rise, giving applications the power to generate human-like text based on prompts. However, this has opened up potential vulnerabilities. Malicious users can manipulate these LLMs through craftily structured inputs, causing unintended actions by the LLM. These manipulations can be direct, where the system prompts are overwritten, or indirect, where inputs are manipulated from external sources.

I believe that applications that are using LLMs must validate and sanitize all input prompts to prevent any malicious or unintended manipulations.

elarlang commented 9 months ago

Well, the topic is on a rise, I agree.

But how do you actually verify that there is no prompt injection? Is the requirement actionable or is it more on the "make it all secure" side?

jmanico commented 9 months ago

If an attacker is able to manipulate an AI system into unauthorized data access or other malicious activities by directly interacting with the raw AI prompt, this is known as prompt injection. Mitigating this risk involves multiple layers of security controls:

Strong Input Validation: Implement rigorous validation mechanisms, potentially leveraging another AI model trained specifically for this purpose, to scrutinize incoming prompts.

Model Hardening: Enhance the resilience of the AI model to withstand malicious or malformed inputs, possibly through techniques like adversarial training.

Attribute-Based Access Control (ABAC): Employ fine-grained access controls based on attributes like user role, data sensitivity, and context to restrict who can interact with the AI system and how.

Parameterized AI Calls: Similar to using prepared statements in SQL to prevent SQL injection, use parameterized calls to the AI model to segregate user input from the command logic, thereby reducing the risk of injection attacks.

tghosth commented 9 months ago

My initial thought is do we want a specific LLM section in the V5 Validation, Sanitization and Encoding chapter?

Or do LLMs need their own *SVS?

I bet @danielcuthbert has opinions about this :)

ImanSharaf commented 9 months ago

Given our objectives, if we're primarily focusing on applications that integrate LLMs (especially when the LLM engine interacts with sensitive data like backend databases), it would be fitting to embed LLM-related guidelines within the ASVS framework. This approach provides a holistic view of application security, encompassing all components including LLMs.

However, if our emphasis shifts to the security and intricacies of the LLM itself, independent of its application context, it's advisable to create a dedicated *SVS. This would allow us to delve deeper into the unique challenges and security nuances of LLMs as standalone entities.

Thus, our decision should align with our primary objective and the depth of scrutiny we wish to apply to LLMs.

EnigmaRosa commented 8 months ago

I definitely see there being benefit in keeping in mind applications that integrate LLMs as opposed to the LLM itself. We've been getting a fair number of question lately on the security of applications using LLMs, and I see that becoming even more common.

GangGreenTemperTatum commented 8 months ago

Or do LLMs need their own *SVS?

I concur and was hoping this could be achieved (Slack context here), but I'd explicitly label this as "LLM Applications *SVS" which i'd love the ability to collaborate on this.

My reasoning is that security standards on building, monitoring and securing LLM's may overlap with LLM Applications but is ultimately at a high-level completely different. Whilst their vulnerabilities related to both scenario's, their mitigations and remediations are mainly unique. An LLM Application can encompass all of the factors with a general web application that sits as an overlay wrapper. I.E, also assuming the LLM is not well versed, your essentially adding an entity layer which can make decisions, deploy functions or code and take other actions within your environment.

One example why I think these should be split into two different frameworks: (which I mentioned in BugCrowd LLM AppSec VRT thread, I requested from them for NLP or LLM-based programs)

There are already vulnerability DB's, frameworks and|or security best practices for MLSecOps, not related to LLMAppSec, I.E:

https://avidml.org/ https://huntr.mlsecops.com/

I'd say that MITRE ATLAS is the closest to what is required from the community, it does not "show an ML developer how to build a secure LLM application" as compared to a web application developer would use the classic ASVS. Within the OWASP Top 10 for Large Language Model Applications, one thing I am working on for v2 is mapping CWE's and CVE's along frameworks within an LLM Application environment (WIP GitHub issue here).

jmanico commented 7 months ago

I think a new AI section is fundamental for ASVS 5.

Suggestions:

Verify Secure Data Handling in AI Models

  1. Verify that the AI model securely handles sensitive data, ensuring that personal data is processed and stored with appropriate privacy and security controls.
  2. Verify that data used for training AI models is anonymized or pseudonymized where feasible, to minimize privacy risks.

Verify Robustness of AI Models Against Adversarial Attacks

  1. Verify that AI models are robust against adversarial inputs designed to manipulate or mislead the model, including the use of techniques like adversarial training.
  2. Verify that appropriate measures are in place to detect and mitigate adversarial attacks in real-time during the model's deployment.

Verify Transparency and Explainability of AI Decisions

  1. Verify that the AI system provides transparent and understandable explanations for its decisions, especially for decisions that have significant impact on individuals.
  2. Verify that there are mechanisms to review and contest AI decisions, ensuring accountability and fairness.

Verify Compliance with Ethical Guidelines and Regulations

  1. Verify that the development and deployment of AI systems comply with ethical guidelines and relevant legal regulations, including data protection laws.
  2. Verify that the AI system incorporates mechanisms to prevent bias and ensure fairness and non-discrimination in its decisions.

Verify the Integrity of AI Model Training and Updates

  1. Verify that the integrity of data used for training and updating AI models is maintained, preventing unauthorized or malicious modifications.
  2. Verify that secure and verifiable processes are in place for updating AI models, ensuring that updates do not introduce vulnerabilities.
EnigmaRosa commented 5 months ago

I'll be honest, I don't know how relevant the suggestions proposed by @jmanico are to the ASVS - they seem like they would be best appropriate for an AI-specific SVS. I think our goal should be addressing LLM integration, not a model itself. I absolutely agree that there should be some AI *SVS, but adding it as a section to the ASVS does not feel like the correct answer.

That being said, some feedback on the suggested requirements: primarily, we should be sure that controls actually exist for the requirements put forth. It would be disingenuous to have a security requirement that can't actually be met. I say this because I know it can be incredibly difficult to protect a model from adversarial attacks (from my understanding, this requires a defense in depth approach), but I'm also not an expert here. I think it would also be appropriate to add a requirement to protect against extraction attacks to copy the AI model in use, but that may be covered by the "detect and mitigate adversarial attacks" requirement.

tghosth commented 5 months ago

Yeah I think that especially based on what @GangGreenTemperTatum said, I am not convinced that a full section on LLMs is warranted for ASVS. On the other hand, maybe we could include some guidance in an appendix which I know we sort of did in the past for IoT?

@jmanico what are your thoughts on an appendix? Do you have more items you would add? I would suggest that we change the items in your original comment above to be more like considerations rather than verification requirements as I agree with @EnigmaRosa that we need to be careful about whether practical solutions exist to some of these considerations :)

jmanico commented 5 months ago

An appendix works for me 🤙

ImanSharaf commented 5 months ago

An appendix works for me too.

tghosth commented 4 months ago

Okay so we now have a place to put appendix content with an intro that I based on what @ImanSharaf wrote so who wants to add content :)

https://github.com/OWASP/ASVS/blob/master/5.0/en/0x98-Appendix-W_LLM_Security.md

ImanSharaf commented 4 months ago

Thank you, I will do that.

jmanico commented 1 month ago

I would also add a requirement around natural language validation is fairly essential. I see folks using a smaller AI engine (with no user data) to do natural language validation.

tghosth commented 3 weeks ago

I would also add a requirement around natural language validation is fairly essential. I see folks using a smaller AI engine (with no user data) to do natural language validation.

@jmanico can you open a PR in the new appendix for this?

jmanico commented 3 weeks ago

I added a few other AI Security requirements in the PR attached to this issue.

tghosth commented 3 weeks ago

I have made this non-blocking. Jim has PR'd in some content and @ImanSharaf it would be great to get some extra content from you as well.

ImanSharaf commented 3 weeks ago

@tghosth Should we talk about this package hallucination attack in the appendix too?

Also, what do you think about this check Ensure that any personal data processed by the LLM is anonymized or pseudonymized to protect user privacy.

tghosth commented 2 weeks ago

@tghosth Should we talk about this package hallucination attack in the appendix too?

Maybe in output filtering?

Also, what do you think about this check Ensure that any personal data processed by the LLM is anonymized or pseudonymized to protect user privacy.

I think we can suggest that too

jmanico commented 2 weeks ago

@ImanSharaf, if you help me craft requirements for these two issues, I'm happy to add them to the work done here. https://github.com/OWASP/ASVS/blob/master/5.0/en/0x98-Appendix-W_LLM_Security.md

ImanSharaf commented 2 weeks ago

@ImanSharaf, if you help me craft requirements for these two issues, I'm happy to add them to the work done here. https://github.com/OWASP/ASVS/blob/master/5.0/en/0x98-Appendix-W_LLM_Security.md

For the hallucination attack, the target is typically the developer. It's crucial for developers to approach results from Large Language Models (LLMs) with skepticism, as these models can inadvertently generate vulnerable code or recommend non-existent packages. Attackers might exploit these suggestions by creating malicious packages mirroring the hallucinated results. Developers should always verify and validate LLM outputs before implementation to mitigate these risks. I don't know where we should put this check. @jmanico @tghosth

jmanico commented 2 weeks ago

If the check is AI specific then I suggest we just add it to a new section on secure code generation in the AI appendix

ImanSharaf commented 2 weeks ago

Does it work? The organization must ensure that all outputs from Large Language Models (LLMs) used during the software development process, including but not limited to code suggestions, package recommendations, and configuration snippets, are subject to verification and validation by developers before implementation.

jmanico commented 2 weeks ago

How about:

Verify that all outputs from Large Language Models (LLMs) used during the software development process, including code suggestions, package recommendations, and configuration snippets, are subject to verification and validation by developers before implementation.

Hey @tghosth do you like this? If so I'll go PR.

tghosth commented 2 weeks ago

@jmanico sure go for it, which section in the appendix?

jmanico commented 2 weeks ago

W.2 Output Filtering for now. I may move this to a AI Code Generation section later! PR Submitted!

ImanSharaf commented 2 weeks ago

How about:

Verify that all outputs from Large Language Models (LLMs) used during the software development process, including code suggestions, package recommendations, and configuration snippets, are subject to verification and validation by developers before implementation.

Hey @tghosth do you like this? If so I'll go PR.

Looks good!

tghosth commented 1 week ago

W.2 Output Filtering for now. I may move this to a AI Code Generation section later! PR Submitted!

Merged!