Closed jmanico closed 1 year ago
I agree. I looked into this earlier:
It primarily reduces impact in the case of a buffer over-read (e.g., Heartbleed), but it is often impossible to reliably clear memory, and in my opinion it is not worth the hassle.
@set-reminder 3 weeks @tghosth to look at this
⏰ Reminder Wednesday, December 28, 2022 12:00 AM (GMT+01:00)
@tghosth to look at this
Perhaps a more general requirement is useful: remove sensitive data that is no longer needed. Similar to 8.3.8:
Verify that sensitive personal information is subject to data retention classification, such that old or out of date data is deleted automatically, on a schedule, or as the situation requires.
But more general for any sensitive data. That would also capture the idea behind 8.3.6, without assuming that an application has explicit control over its memory contents.
So I think that 8.3.8 is quite general and doesn't exactly fit the intention here.
I agree that 8.3.6 should not be a sweeping requirement but maybe we could make it more specific and make it level 3.
# | Description | L1 | L2 | L3 | CWE |
---|---|---|---|---|---|
8.3.6 | Where a memory-safe language is not being used, or where the language specifically supports this ability, verify that sensitive information contained in memory is overwritten with zeroes or random data as soon as it is no longer required, to mitigate memory-dumping attacks. | | | ✓ | 226 |
@Sjord @jmanico what do you think about that?
I don't think it makes sense to have different security requirements for different programming languages. Whether a language is memory-safe does not determine whether memory can be reliably overwritten; as far as I know, it's hard to reliably overwrite memory in any language.
Do you have a concrete example of which function you should call in which programming language to clear memory?
@Sjord I accept your point. I thought I saw something in Rust, but I take the point that there are lots of reasons why this wouldn't work. I am going to try to solicit some wider feedback... :)
So this discussion has two components, and they need to be addressed separately:
Is this a real risk? The short answer is yes. If you are able to access a process's memory in any way, you can get at this data. Possible modes of access include, off the top of my head:
How can it be mitigated? This is entirely dependent on the language you are using and how it manages memory. For systems languages like C/C++ and Rust, you have guaranteed direct access to this memory, so you have the ability to overwrite it in place with nonsense before deallocation. For other languages, one would have to dig deeply into the fundamentals of the language in question. But generally speaking, the same technique should work for any language unless it relies on copy-on-write mechanics.
For systems languages like C/C++ and Rust, you have guaranteed direct access to this memory so you have the ability to overwrite it in-place with nonsense before deallocation.
This is not strictly true for C, or at least not straightforward.
I disagree that this represents a real risk. Under a reasonable threat model of any system, there is always a line we can draw at which an attacker's capability is so great that no realistic protection can prevent them from compromising the system. In the case of full memory access, we are clearly across that line. Evicting memory doesn't make any sense; the attacker can be assumed to read the memory before it's evicted (because why wouldn't they?).
In the case of a buffer over-read vulnerability, I think someone would really need to come up with a particularly contrived example where evicting a specific secret mitigates the vulnerability. With Heartbleed, even if you evicted the TLS secrets, you would still have request/response data in memory. I still don't think it makes real, logical sense, because at this level of assurance you should instead just recommend not using a memory-unsafe language at all.
At a wider level, the presence of such requirements discourages me from using or recommending this methodology because it will create (at best) confusion and (at worst) new vulnerabilities, with an average expectation of wasting the time of both security team and engineers. More specifically, less-experienced practitioners rely on these methodologies to provide expert guidance, so they will follow them without deep introspection/experience on what changes are valuable or not. See https://tprynn.github.io/2022/12/06/cert-pinning-bad.html for more.
I am going to leave this open for some further discussion (which I caused by posting this).
Let's see where it gets to.
@set-reminder 2 weeks conclude on this issue
⏰ Reminder Monday, January 2, 2023 12:00 AM (GMT+01:00)
conclude on this issue
For systems languages like C/C++ and Rust, you have guaranteed direct access to this memory so you have the ability to overwrite it in-place with nonsense before deallocation.
This is not strictly true for C, or at least not straightforward.
Github needs a 😮 reaction. I never thought that a compiler would just remove the code that zeros out memory. Thank you for that!
I was actually thinking of a non-C-specific option. If we take the example of a secret string, we could loop through the individual characters in the string, overwriting them with a single character, or better, with randomly generated characters. It's obviously not as efficient as a memset, but the average secret is relatively small, so the performance hit would be inconsequential. Maybe the average compiler would not optimize that out...
But even then my idea wouldn't be enough anyway, so I guess I'll shut up. https://www.daemonology.net/blog/2014-09-06-zeroing-buffers-is-insufficient.html
I still don't think it makes real, logical sense because at this level of assurance you should instead just recommend not using any memory unsafe language at all.
Is that an option? :) But I cede to your point. It introduces significant complexity while not necessarily benefiting the overall security posture.
Thank you for indulging me!
If someone is reading the RAM of your server, it's already game over, and it's not possible to zero out memory in many languages with VMs.
I strongly advise that we drop this requirement or not set it; it's not achievable and it doesn't address a real risk.
🔔 @tghosth
@tghosth to look at this
🔔 @tghosth
conclude on this issue
Consensus seems to be to remove this requirement as it is not practically possible in most cases. Opened #1497
@tghosth we can offer using SecureString in .Net or GuardedString in Java to achieve this goal. Also, if someone dumps the memory, they can use something like Volatility to fetch all secrets.
SecureString Class (System.Security) | Microsoft Learn:
We recommend that you don't use the SecureString class for new development on .NET (Core) or when you migrate existing code to .NET (Core). For more information, see SecureString shouldn't be used.
Yeah, I am leaving this closed; if you read through the issue, there are a lot of problems with doing this.
Several of my students believe that this is not a reasonable requirement that actually helps security or is a real vulnerability.
I suggest we remove it.