Yes, see my presentation to TC39, Security Implications of Error.prototype.stack.
You're right that it is possible to work around this failure by converting the algorithm to be iterative. Similarly, I've demonstrated that, with enough care, someone can avoid exposing their source text through Function.prototype.toString. But the existence of these introspection APIs and their security consequences are not widely known. This proposal adds a language feature for opting out of those confidentiality- and encapsulation-breaking features, making it much more likely that programmers create secure software.
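As a minimal, contrived sketch of my own (not taken from the slides) of the Function.prototype.toString side of this, any literal embedded in a function's body is readable by anyone who holds a reference to that function:

```js
// Contrived sketch: a "secret" embedded in source text is exposed by toString().
function checkPin(guess) {
  return guess === "4271"; // the secret lives in the function's source text
}

console.log(checkPin.toString());
// -> 'function checkPin(guess) {\n  return guess === "4271"; ... }'
```

As I understand the proposal, a `"sensitive"` directive in the function or module prologue would instead make `toString()` return a censored form, so the author wouldn't have to restructure the code to keep that literal out of the exposed source text.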
Thanks for the additional context! Are the examples in those slides theoretical, or are there concrete examples of code in use today that is vulnerable to this and for which an iterative rewrite is not desirable?
@misterdjules The code in those slides was contrived. I haven't searched for occurrences of information leaks in the wild, since it's not possible to determine when a programmer intended a value to remain confidential. And while every recursive algorithm can be converted to an iterative one, the programmer would first have to be aware that the particular kind of recursion they were doing was risky. The "sensitive" directive is intended to be a single opt-in, so that programmers do not have to be aware of these gotchas and are more likely to avoid unintentional information leaks.
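To make the recursion case concrete, here is a contrived sketch of my own (again, not the code from the slides) of the kind of leak being described: the depth of recursion over confidential data becomes observable to untrusted code through the stack trace of an exception it throws.

```js
const securePin = 4271; // stands in for a value the caller should not learn

function eachDigit(n, visit) {
  if (n === 0) return;
  eachDigit(Math.floor(n / 10), visit); // recurse first: one frame per digit
  visit(n % 10);
}

try {
  // The untrusted callback throws immediately; at that point the stack already
  // holds one eachDigit frame per digit of the secret.
  eachDigit(securePin, () => { throw new Error("probe"); });
} catch (e) {
  // Engine stack formats vary, but the frame count tracks the digit count.
  const frames = (e.stack.match(/eachDigit/g) || []).length;
  console.log(`the PIN has roughly ${frames} digits`);
}
```

The iterative rewrite below avoids that depth signal, but it only helps an author who already knows this class of leak exists:

```js
function eachDigitIterative(n, visit) {
  const digits = [];
  for (; n !== 0; n = Math.floor(n / 10)) digits.unshift(n % 10);
  for (const d of digits) visit(d); // constant stack depth regardless of the secret
}
```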
@michaelficarra OK, I was trying to understand where on the spectrum between "only a handful of libraries will use it" and "a significant number of them will use it" the "sensitive" directive might land.
Reading the FAQ entry at https://github.com/tc39/proposal-function-implementation-hiding#will-everyone-just-end-up-using-this-everywhere and your comment just above, I don't think I can get an intuition for that.
Since this is a feature that potentially removes useful information from stack traces, it's difficult for me to determine how much it would impact observability use cases that aren't served by privileged debugging tools (e.g. sending Error stack traces from an application running in a browser to a server).
Concrete, non-contrived examples of use cases where this directive would be recommended would help both in evaluating its impact on observability and in communicating the potential trade-offs to JS users.
This proposal mentions the following:
Do you have specific examples of algorithms or libraries that are vulnerable to this problem? This is probably a naive question, but if recursive calls are the main issue here, would implementing the same algorithm iteratively be enough to work around it?