ossf / wg-security-tooling

OpenSSF Security Tooling Working Group
https://openssf.org
Apache License 2.0

Tooling for cryptography: a simple but effective approach? #41

Open py0xc3 opened 2 years ago

py0xc3 commented 2 years ago

I think security awareness is a big issue that has to be considered: I am not worried about developers who are not security experts, but about developers who are not security experts and who are neither aware of nor interested in that fact. Such developers do not seek out any security tooling at all, and the best tool cannot do its job if no one looks for it and, thus, uses it.

What type of tooling do developers seek if they have no security awareness at all? Only the tooling they need to develop the features they want to develop. So security tooling has to be integrated into those tools already and, ideally, activated by default.

Concerning cryptography, a simple "informative warning" tool could be integrated into compilers or IDEs to foster security (and awareness): it remains a real problem that base64 is assumed to be cryptography, and that md5/sha1 are still used for cryptographic purposes (just examples). A compiler or an IDE could output a warning such as "hey, there is something with base64 in your code, I hope that is not a cryptographic use case?" or "be aware that md5 and sha1 should be avoided if this use case needs cryptographic security", or at least suggest HMAC for sha1. If a known implementation of a cryptographically secure algorithm such as SHA256 is imported and used without any iteration, that might raise a related warning, too. And so on. This alone could help avoid these issues in some places, and maybe raise some security awareness.
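As a concrete (and deliberately naive) illustration of the kind of "informative warning" meant here, a short sketch follows; the patterns and messages are hypothetical examples for this discussion, not a proposed rule set:

```python
# A deliberately naive sketch of an "informative warning" pass over source files.
# The patterns and messages are illustrative examples, not a real rule set.
import re
import sys

ADVISORIES = [
    (re.compile(r"\bbase64\b"),
     "base64 found: fine for encoding, but it is not encryption or hashing."),
    (re.compile(r"\b(md5|sha1)\b", re.IGNORECASE),
     "md5/sha1 found: avoid for cryptographic purposes (for sha1, consider HMAC)."),
]

def advise(path: str) -> None:
    with open(path, encoding="utf-8", errors="replace") as f:
        for lineno, line in enumerate(f, start=1):
            for pattern, message in ADVISORIES:
                if pattern.search(line):
                    print(f"{path}:{lineno}: note: {message}")

if __name__ == "__main__":
    for source_file in sys.argv[1:]:
        advise(source_file)
```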

Your tools have to be put in place where such developers look for tools (they look for compilers, interpreters, IDEs, ...), while the reasoning of these developers (or of those who set up environments such as the one SolarWinds was using) tends not to lead them to dedicated security tools (I was reading David's very interesting comment about SolarWinds :)

So this is partly less a computer science problem than a social science one, including the question of why such "unaware reasoning" has developed and remains competitive in many environments (and how to make security tools comparably competitive within them)... Of course these are not specifically open source problems but more generic ones.

lirantal commented 2 years ago

I genuinely value this ask and totally understand where you're coming from, but if the compiler always warns on base64 regardless of its actual use, then we end up with false positives, crying wolf, and other situations that make users ignore the warnings altogether.

Would we want to approach it differently than just compiler/run-time messages?

py0xc3 commented 2 years ago

Generally, these were just examples to illustrate the point. I absolutely understand your argument. A realistic compromise would have to be somewhere in between.

However, to stick for a moment with the base64 example: on one hand, a developer who uses base64 for crypto use cases does so not by intention but because of a lack of knowledge and awareness. If we can get this developer to read the warning message just once, the job may already be done: deciding to ignore the message in the future, or to turn it off, already implies that the information in the message was processed in the developer's mind. As funny as it sounds, in many cases this is already a step forward. It is the reasoning that has to be evaluated before solutions can be found: what is the reasoning of someone who sets up build environments that use unencrypted TFTP for critical data on the Internet, or who uses base64 where crypto is necessary?
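To make the base64 point concrete, here is a minimal sketch (plain Python standard library, not tied to any particular tool) showing that base64 is a reversible encoding with no key, so it provides no confidentiality at all:

```python
import base64

# base64 is an encoding, not encryption: anyone can reverse it without a key.
secret = "db_password=hunter2"
encoded = base64.b64encode(secret.encode()).decode()

print(encoded)                             # looks scrambled...
print(base64.b64decode(encoded).decode())  # ...but decodes trivially
```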

On the other hand, further conditions can be added before a warning is triggered. I agree that base64 by itself should not be the sole condition for a warning. Nevertheless, I am not sure myself whether the outcome of this specific approach would justify the effort. As I said, it is just an example to point out that merely developing good security tools for unaware developers does not automatically lead to those developers deploying them. Having good tools is not sufficient; the critical part is to bring these tools to those who do not know that they need them.

Alternatives are integrating security into what such developers already look for, or making their tools more secure by default. Of course this is not just about development or cryptography. libvirt-based virtual networks are widespread and can be involved directly or indirectly in many use cases, but by default they are usually vulnerable to spoofing because filters have to be put in place manually (which means someone has to be aware of the issue). This could easily be mitigated by enabling the existing pre-defined filters by default (a small sketch of checking for this follows below). Obviously, this is just one of many examples of good (and, imho, even easy-to-deploy) tools that already exist but often remain unused. So there is also a lot that can be done at the surface, without developing anything dedicated, whereas the critical problem is always the same: (how to) bring it to those who need it, possibly without them knowing their own need. My point in general is not a technical one :)
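As an illustration of the libvirt point, a minimal sketch follows; it assumes the libvirt-python bindings and the predefined clean-traffic nwfilter, and the checking logic itself is hypothetical rather than an existing tool. It lists domain interfaces that have no filter reference and are therefore open to spoofing:

```python
# Rough sketch: flag libvirt domain interfaces that carry no <filterref>
# (e.g. the predefined "clean-traffic" filter) and are thus open to spoofing.
# Assumes libvirt-python; the checking logic is hypothetical, not an existing tool.
import xml.etree.ElementTree as ET
import libvirt

conn = libvirt.open("qemu:///system")
for dom in conn.listAllDomains():
    root = ET.fromstring(dom.XMLDesc(0))
    for iface in root.findall("./devices/interface"):
        if iface.find("filterref") is None:
            mac = iface.find("mac")
            addr = mac.get("address") if mac is not None else "?"
            print(f"{dom.name()}: interface {addr} has no nwfilter "
                  f"(consider <filterref filter='clean-traffic'/>)")
conn.close()
```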

> Would we want to approach it differently than just compiler/run-time messages?

I am open to any suggestion. Do you have anything in particular in mind?

joshbressers commented 2 years ago

I think this problem is very very hard, but it seems easy.

There are two pieces to this, the technology and the policy.

The technology is the easy part. We need a static analyzer to flag certain algorithms as bad. This is not unlike detecting and preventing strcpy use in modern compilers and static analyzers. It might make more sense to start by documenting how various tools can detect the functions/methods/libraries in question, to figure out the gaps. Just knowing how to detect unwanted algorithms is extremely helpful even if you have no policy in place; by itself such an effort would have value.
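To sketch what such documentation might look like, a detection catalog could simply map ecosystems to the identifiers a scanner should look for; the entries below are illustrative examples, not an agreed-upon list:

```python
# Illustrative only: a tiny catalog mapping ecosystems to identifiers that a
# scanner (grep, a semgrep rule, a compiler plugin, ...) could flag for review.
DETECTION_CATALOG = {
    "python": ["hashlib.md5", "hashlib.sha1"],
    "java":   ['MessageDigest.getInstance("MD5")', 'MessageDigest.getInstance("SHA-1")'],
    "node":   ["crypto.createHash('md5')", "crypto.createHash('sha1')"],
    "c":      ["MD5_Init", "SHA1_Init"],
}

def scan_line(ecosystem: str, line: str) -> list[str]:
    """Return the catalog entries that appear in a single source line."""
    return [needle for needle in DETECTION_CATALOG.get(ecosystem, []) if needle in line]

print(scan_line("python", "digest = hashlib.md5(data).hexdigest()"))
```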

The much harder part is policy. Assuming we can properly detect usage of the cryptographic features, we now have to find a way to decide what we can or cannot use. My policy may differ from your policy in incompatible ways. For example, I've seen orgs that have banned md5 outright, and some that just say "no md5 for security functions". md5 would be easy to blanket-ban as an industry because it's so terrible, but there are many algorithms, key sizes, and libraries that aren't clearly good or bad.
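To make the "my policy may differ from yours" point concrete, a policy could be expressed as data that the detection layer consumes; the structure below is a hypothetical sketch, not a proposed OpenSSF format:

```python
# Hypothetical sketch: two organizations expressing different md5/sha1 policies
# as data, consumed by the same detection tooling. Not a proposed OpenSSF format.
ORG_A_POLICY = {
    "md5":  {"verdict": "ban"},                # banned outright
    "sha1": {"verdict": "ban"},
}

ORG_B_POLICY = {
    "md5":  {"verdict": "ban-security-use"},   # acceptable for non-security checksums
    "sha1": {"verdict": "ban-security-use"},
}

def evaluate(policy: dict, algorithm: str, security_relevant: bool) -> str:
    rule = policy.get(algorithm)
    if rule is None:
        return "allow"
    if rule["verdict"] == "ban":
        return "violation"
    if rule["verdict"] == "ban-security-use" and security_relevant:
        return "violation"
    return "allow"

print(evaluate(ORG_A_POLICY, "md5", security_relevant=False))  # violation
print(evaluate(ORG_B_POLICY, "md5", security_relevant=False))  # allow
```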

I'm not convinced the OpenSSF should be creating policy suggestions like this, though I could be swayed by a good argument. If we ever did do this, the Best Practices Working Group would be the right home.

And finally, bringing the two together, I think the policy isn't effective without the technology. It's very easy to create a policy that's unenforceable. We could create a list of unsafe hashing algorithms (several such lists already exist), but if nobody can figure out whether they are following the policy, is it really a policy?