coreinfrastructure / best-practices-badge

🏆Open Source Security Foundation (OpenSSF) Best Practices Badge (formerly Core Infrastructure Initiative (CII) Best Practices Badge)
https://www.bestpractices.dev
MIT License

Be careful of dynamic assertions #1416

Open david-a-wheeler opened 4 years ago

david-a-wheeler commented 4 years ago

Jeffrey Walton (Baltimore, MD, US) posted the following message with subject "Be careful of dynamic assertions". I thought he raised some important points, so I'm posting this as a GitHub issue to make it more visible, and then I intend to reply - David A. Wheeler

======================

Hi Everyone,

Thanks for the service. I really like the idea of a mini Security Architecture document so projects can discuss their governance, policies and procedures. I've written a lot of them over the years and I know they are helpful.

I recently added Crypto++ to CII. Also see https://bestpractices.coreinfrastructure.org/en/projects/3806.

Under "Dynamic code analysis", one of the recommendations is:

It is SUGGESTED that the software produced by the project include many run-time assertions...

As a C/C++/ObjC developer, when I see "run-time assertions" I think of POSIX assert (https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/assert.h.html). Runtime assertions in production software could be bad for several reasons.

Let me preface it with: runtime assertions are not dangerous at development time. At development time they aid the programmer by snapping into the debugger. Asserts create self-debugging code in this instance. I embrace asserts at development time.

Runtime assertions are not dangerous to some software in production, like music players and video players. An assert may annoy a user, but the app is not handling sensitive information so it is just another UI bug.

However, high integrity software must be careful of assertions in production. Here are the reasons given for Crypto++ in the CII report.

The library never asserts in production for five reasons. First, it is the application author's decision to crash their app. The library does not make policy decisions for the application author.

Second, some platforms, like Apple iOS, do not allow asserts to crash an application because it degrades the UI experience. The library will not cause an author's app to be rejected from an App Store.

Third, the library handles sensitive information like private keys, shared secrets and passwords. When an assert fires the core file will include the sensitive information. That means the sensitive information has been egressed outside the application's security boundary. Folks with access to the mobile device or a computer paired/sync'd with a mobile device will be able to recover the secrets from the filesystem.

Fourth, the core file may be shipped to an Error Reporting Service. Now Apple, Google, Fedora, Red Hat, Ubuntu or Microsoft has the user's private keys, shared secrets and passwords. Then the information is passed on to the developer, who also has the user's private keys, shared secrets and passwords.

Fifth, when an assert crashes a server, it (1) may preserve data Integrity at the expense of (2) Confidentiality of the data and (3) Availability of the server. If an author wishes to preserve Integrity, they merely need to call exit(1) without the loss of Confidentiality.

In fact, when I was working as a Security Architect in US Financial we would reject vendor apps that used asserts in production. There was too much financial and reputational risk with using asserts. The firm did not want to pay a regulatory fine or suffer the reputational harm because customer SSNs were not handled properly.

Also see "GMP and assert()" on LWN at https://lwn.net/Articles/780817/. It caused a shitstorm when posted to the oss-security mailing list.

david-a-wheeler commented 4 years ago

I think you raise excellent points. Please note the text:

It is SUGGESTED that the software produced by the project include many run-time assertions...

... does not require that run-time assertions be enabled during production. As you've noted, there are both pros and cons to enabling assertions in production, and many of the cons depend on the language, intended use, and so on. The intent was that these assertions could be enabled during dynamic analysis (e.g., while running tests and fuzzers).

Perhaps we should add a clarifying detail that we do not necessarily suggest that assertions be enabled during production; that is entirely up to the project and its users to decide. The purpose of this suggestion is for there to be assertions available for use during dynamic analysis (e.g., testing and fuzzing) before the software is put into production.

What do people think?

noloader commented 4 years ago

Hi David.

My two cents on it...

Classify the software, program or library. Painting with a grossly broad brush, there are two classifications: (1) high integrity software and (2) non-high integrity software. High integrity software handles sensitive information, while non-high integrity software does not.

The second classification, (2) non-high integrity software, is easy. It includes software like calc programs, music players and video players. Do whatever you want because it does not matter. The software does not handle sensitive information so there is nothing to leak.

The first classification, (1) high integrity software, is tricky. It includes software like Botan, Crypto++, GnuTLS, GnuPG and OpenSSL. This software is expected to handle sensitive information. Let the debate begin...

And to throw a wrench into the works, a library like the XML parser Expat looks pretty boring. Except that it is used in high integrity software like Git, GnuTLS, NSD and Unbound. NSD is a replacement for BIND, so it holds some high value keys.

(Maybe the Expat/NSD use case begs for a Key Wrap algorithm for in-memory sensitive information).


In US Financial and US DoD, things are actually a little different: the app is evaluated based on data sensitivity levels and the firm's security policies or STIG policies. The evaluation was called a Security Architecture (SecArch) review, and its goal was to reduce the risk of financial or reputational harm to the firm in the case of a negative event.

The data is classified, like Low, Medium and High sensitivity. Low value data is data like a customer's username and password. Medium value data is data like a customer's social security number or a firm's proprietary trading algorithms or makeup of a financial instrument. High value data includes executive compensation, pending litigation, mergers and acquisitions, etc.

Given a data sensitivity level we had to place security controls for an application (or ensure a vendor's app had the appropriate security controls in place). For example, a sales person might have an app on a tablet to show off a financial instrument's historical performance. The data is low value (historical performance without ETF makeup), so only a username and password are required to authenticate the sales person. Sessions could last 24 or 72 hours before re-authentication.

As another example, the "boardpad apps" were the worst. The boardpad apps were the apps the executives wanted to use for paperless board meetings. Executives wanted to put the firm's high value data on a tablet protected only with Apple's PIN code...

The firm required two factor authentication for high value data. The firm's security policy also specified a user's session would expire after 15 minutes of inactivity when handling high value data. It was challenging to get a boardpad app in shape (and the executives always complained about usability).

One year (back in 2012 or so) one of the bank's executives bought iPads for all the board members. The executives promptly put the firm's high value data on their iPads with only an Apple PIN code. And then two of the executives lost their iPads on the subway train. They came to us asking how we could remote wipe unmanaged iPads...

noloader commented 4 years ago

And one more point since you brought this up:

It is SUGGESTED that the software produced by the project include many run-time assertions...

... does not require that run-time assertions be enabled during production.

Another thing that is not said is that run-time assertions should not be enabled during production. By default, distributions like Debian and Red Hat do not define NDEBUG, so asserts are in effect in distro-provided binaries. Users get them whether they want them or not.