open-quantum-safe / liboqs

C library for prototyping and experimenting with quantum-resistant cryptography
https://openquantumsafe.org/

Define threat model for liboqs #1840

Open dstebila opened 1 month ago

dstebila commented 1 month ago

We should have a document describing the intended threat model that liboqs aims to be secure against. This would cover issues such as constant-time behaviour, as well as what is in or out of scope, such as side-channel attacks and fault attacks.

SWilson4 commented 1 month ago

@praveksharma and I had a preliminary discussion about this with @romen, an OpenSSL committer and OTC member. I'll attach the notes from our call.

SWilson4 commented 1 month ago

July 12, 2024 - Nicola/Pravek/Spencer

Background

Nicola Tuveri (@romen on GitHub) is a doctoral researcher at Tampere University, with a focus on side channel attacks. He is also an OpenSSL Committer and a member of the OpenSSL Technical Committee (OTC). Earlier this year, he made an academic visit to Waterloo. Douglas, Pravek, and I had the pleasure of getting to know him and having conversations about his research, OpenSSL, and OQS.

In recent discussions about a threat model for liboqs, the OpenSSL threat model has been mentioned frequently, particularly in contrast with WolfSSL's threat model. OpenSSL considers fault attacks (and generally physical attacks) to be out of scope, whereas WolfSSL does not.

I asked Nicola if he could provide insight into the rationale behind OpenSSL's decision to exclude fault attacks from their threat model. I also solicited general advice and guidance on developing OQS threat model(s). He graciously set up a call with Pravek and me to discuss these matters. This document is based on the notes I took during that call.

Disclaimer

Nicola made it clear that he was speaking in a personal capacity, not on behalf of OpenSSL. Everything here is based on his opinions and speculation and should not be considered representative of OpenSSL in any way. Additionally, although the OTC is in charge of the security policy, it inherited an existing policy when the committee was created. Nicola was not involved in creating the security policy and indeed would have had a conflict of interest, as his academic research involves attacks that fall outside the threat model.

OpenSSL Security Policy

OpenSSL's rationale for excluding fault attacks and other attacks that require access to the same physical system goes back to a larger discussion about user protection vs user responsibility. The position of OpenSSL developers has shifted over time. Originally (going back to 1998), their stance was that OpenSSL was a toolkit with which users could do whatever they liked, and the developers should not get in the way of experimentation. As OpenSSL became a critical component in many production systems, its developers pivoted toward protecting users more, while bearing the burden of maintaining legacy compatibility.

Attacks that require local access were excluded because they typically require gaining elevated privileges, sometimes to the point of compromising the application running OpenSSL. In these cases, there is often an easier and more direct attack vector than targeting OpenSSL. OpenSSL's reasoning here is similar to their reasoning for considering attacks resulting from parse errors in trusted (e.g., private key) files to be out of scope: if a user cannot trust their own private key file, then something has gone horribly wrong long before hitting an OpenSSL parsing bug.

When OpenSSL receives a disclosure of an out-of-scope attack, they will typically not issue a CVE themselves. They sometimes direct the reporter to another vendor (e.g., Intel), which may issue a CVE. For out-of-scope attacks, OpenSSL is not required to include hardening measures, but they may do so anyway after considering the impact of the changes on performance, usability, etc. Generally, mitigations are included, sometimes as non-default options, unless there is a significant impact on performance, backward compatibility, etc. This applies when the reporter includes a suggested mitigation; sometimes neither a mitigation nor a proof-of-concept exploit is provided. OpenSSL typically requests proof-of-concept exploits and countermeasures if they are not included in the initial report.

Sometimes, a local physical access attack does not require elevated privileges. For example, in a cloud data centre an attacker might obtain a VM on the same machine that is running the target application. When a reporter argues that such an attack is possible, OpenSSL typically asks for a proof of concept to demonstrate the feasibility of a remote exploit.

Comparison With WolfSSL

Nicola speculates that WolfSSL took a different approach to same-physical-system attacks because their targets included embedded and resource-constrained systems. Certification and validation on these systems often require resistance to such attacks. Additionally, WolfSSL is often used in firmware, without an OS, as opposed to OpenSSL, which is intended exclusively for software running on a full OS. Same-physical-system attacks are often viewed as targeting the OS first and OpenSSL second.

Notes for OQS

Nicola suggested that OQS would need separate threat models for each project, which might refer to each other. For example, oqs-provider might have its own threat model for the "glue" between liboqs and OpenSSL, but its overall threat model would be limited by the liboqs threat model. Many security reports for oqs-provider would likely be forwarded to liboqs. Similarly, some reports to liboqs would be forwarded to non-OQS projects such as PQ-Crystals.

Nicola also suggested that it might be worth collecting documentation for the threat models of various upstream sources and presenting this information in liboqs instead of making general claims about all of the liboqs source code. He also noted that, regardless of our intentions, liboqs source code may end up in embedded systems simply because it is the most well-known and easy to find source for post-quantum cryptographic code.

Finally, Nicola expressed a willingness to sit in on future OQS threat model-focused meetings if it would be helpful for us, conditional on his availability.

tomato42 commented 1 month ago

My professional opinion is that for a software implementation we need something between the OpenSSL and WolfSSL security policies.

Virtualisation is a fact of life, and secret data leakage that can be deduced from a different VM running on the same system is a valid attack. OTOH, fault attacks are really attacks on hardware, not on the software running on it, which means that any mitigations are very hardware dependent (a good example is Rowhammer, where only certain combinations of RAM sticks, motherboards, and CPUs are vulnerable; a RAM stick vulnerable on one platform may be fine on another).

At the same time, I haven't yet seen a microarchitectural or power side-channel attack on an implementation that was shown to not have a timing side-channel first (to be clear: I'm not claiming power side-channels don't exist, but the research in this area isn't exactly stellar: showing the presence of a power side channel when the implementation has a timing side-channel isn't exactly hard...).

So, I'm of the opinion that just eliminating timing side-channels should give us an implementation that is good for the vast majority of use cases.

anvega commented 1 month ago

I'd like to offer some perspective on how we approach threat modeling and related security assessments in the CNCF community, developed with @JustinCappos from NYU, which might be helpful as you consider your approach.

Our approach involves:

  1. A self-assessment by project maintainers, covering:
    • Project design goals and scope (similar to the discussion above on what's in and out of scope)
    • System actors and actions
    • Potential risks in design and configuration implementations
  2. Review by third-party security volunteers to provide additional perspectives and validation (We maintain a roster of security experts for this purpose. This review is similar to a code audit engagement but focuses more on overall architecture and design principles, rather than being tied to a specific release. It provides the broader architectural assessment that one would expect from a threat model.)
  3. The joint group of project maintainers and security volunteers then creates lightweight threat models and attack matrices based on this collaborative process

This approach balances thoroughness with practicality. We've documented it in a guidebook (https://github.com/cncf/tag-security/blob/main/community/assessments/Open_and_Secure.pdf) and guidelines (https://github.com/cncf/tag-security/blob/main/community/assessments/guide/self-assessment.md). To date, 35 projects have gone through this process or collaborated with us on it.

Examples of outcomes:

This approach, or elements of it, could potentially be adapted for liboqs and oqsprovider. As OQS is part of the PQCA, this might present an opportunity for collaboration between maintainers of other Linux Foundation projects, leveraging security practitioners' expertise.

Happy to discuss further in an upcoming meeting.

baentsch commented 1 month ago

Thanks for sharing, @anvega! Very interesting, and I'd second this:

This approach balances thoroughness with practicality

When I'm back from my travels sometime toward the end of next week, I'll get going with the self-assessment part for oqsprovider and then touch base with you, if only to get a chance at

an opportunity for collaboration between maintainers of other Linux Foundation projects

and also to check whether @planetf1's statement "open source is 'scratch your own itch'" is really all there is to FOSS within LF.

ydoroz commented 1 month ago

In the case of wolfSSL, I got some information from Anthony Hu. He mentioned that they don't have a document listing which attacks are or are not included in their threat models. Instead, they pointed me to a PDF they received from Trail of Bits. It is a threat-modeling exercise for the curl project, which lies under the umbrella of wolfSSL's products. It is publicly available: https://curl.se/docs/audit/threatmodel-2022.pdf.

Although it is not an exact fit for what we are looking for, we can go over the document and adopt some of the approaches listed there, such as defining threat actors, possible attack vectors, and severity levels.