ghermeto opened this issue 5 years ago
In the same way that you do not want to write your users' passwords to your HTTP logs, you also would not want to write stack traces from functions specifically marked as "sensitive" to your error aggregator. The reason they're marked as "sensitive" is that they expose information about the values they're working with through their calling behaviour.
Changing the Function.prototype.toString result of a function (from including [native code] to not including it) could also change the behaviour of your application given the popularity of inspecting its output today. A switch for that would very likely be unpopular among vendors.
Tools like Sentry can still be used in the presence of these directives. In fact, they will most often be improved by allowing the user to better separate their code from library/shim code.
What happens when the error originates within a function that, along with most of its internals, is marked as "sensitive"? How will it show up in the stack trace?
Consider someone incorrectly implementing a polyfill (with this trivial example):
```js
function internalFunction() {
  "sensitive";
  throw new Error('whoops');
}

function somePolyfill() {
  "sensitive";
  if ('some' in window) return;
  window.some = function some() {
    return internalFunction(...Array.from(arguments));
  };
}
```
Without being able to disable it, it would be very hard, if not impossible in some scenarios, to find the root cause of the problem.
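To make that concrete, here is a runnable sketch of the example above (with window swapped for globalThis so it runs in Node): today, the stack trace exposes the internal frame and points straight at the root cause, which is exactly the information the "sensitive" directives would hide.

```javascript
function internalFunction() {
  throw new Error('whoops');
}

function somePolyfill() {
  if ('some' in globalThis) return;
  globalThis.some = function some(...args) {
    return internalFunction(...args);
  };
}

somePolyfill();

let revealsRootCause = false;
try {
  globalThis.some();
} catch (e) {
  // Without implementation hiding, the internal frame is visible today.
  revealsRootCause = /internalFunction/.test(e.stack);
}
console.log(revealsRootCause); // true
```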
Also, for tools like Sentry that can rely on a specific order of frames (we don't know for sure, since we can use function names, but we used to do this in old versions, so I'm sure there's code in the wild that still does it), this could produce incorrect results if there's no way to determine whether some frames are hidden.
Function based: https://github.com/getsentry/sentry-javascript/blob/22a2a4e42f59fa83052d71ad53c1786f5596d526/packages/browser/src/parsers.ts#L80-L112
Frames order based: https://github.com/getsentry/sentry-javascript/blob/22a2a4e42f59fa83052d71ad53c1786f5596d526/packages/raven-js/src/raven.js#L1668-L1688
The option to remove the directives exists for authors who serve the code — even if they are present in third-party libraries, they can be stripped if desired. But it's rather important that if your code doesn't control the sensitive code in question, you can't choose to disable its security features.
There will be bugs in the kinds of very rigorous polyfills that would be interested in this behavior, just as there are bugs in browser internals that those polyfills are emulating. One doesn’t get a JS-exposed trace for C layer errors either, though, and whether e.g. a platform-defined built-in module has had its functionality implemented in JS or C or Rust should not be observable.
One doesn’t get a JS-exposed trace for C layer errors either, though, and whether e.g. a platform-defined built-in module has had its functionality implemented in JS or C or Rust should not be observable.
True; however, you, as a developer, can control one but not the other.
You can’t, in that example (platform-defined built-in modules). They’re part of the browser, they just might be implemented in JS even if not falling back on a ‘polyfill’ (the term gets blurry here, since the polyfill can be an entirely legit implementation itself).
There are other scenarios where this can occur too, like when your code runs in a sandbox within another application. In any case where you can control it, though, you can also strip it.
Possibly notable that minifiers today would strip it, since it would appear to be an unused expression statement — and perhaps this will continue to make sense as the default handling in such tools?
@bathos
Possibly notable that minifiers today would strip it, since it would appear to be an unused expression statement
That shouldn't be the case. I just tested with UglifyJS and it properly preserves the whole directive prologue. It has never been safe to strip directive-prologue entries (post-ES5).
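For context on why stripping is unsafe, here is a minimal sketch: a directive-prologue entry like "use strict" changes runtime semantics rather than being a tooling hint, so a minifier that dropped unknown-looking prologue strings would alter behaviour.

```javascript
// "use strict" in the prologue changes how `this` binds in a plain call,
// so a minifier that stripped the prologue would change this function's behaviour.
function strictThis() {
  "use strict";
  return this; // undefined in a plain call
}

function strippedThis() {
  return this; // globalThis in a plain sloppy-mode call (caller-mode dependent)
}

console.log(strictThis() === undefined); // true
```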
Yeah, it shouldn’t be. I’m surprised Uglify keeps unknown ones, though, since Uglify does a number of unsafe transformations by default.
Terser does strip them:
@michaelficarra
In the same way that you do not want to write your users' passwords to your HTTP logs, you also would not want to write stack traces from functions specifically marked as "sensitive" to your error aggregator.
I think it depends on the type of aggregator (e.g. storing on permanent storage vs storing in transient memory storage for near real-time inspection). In general it seems to me that the consumer of those stack traces is in a better position, at least in some cases (e.g. Node.js applications that have some level of trust on code they run), to determine what information from those sensitive functions they want to have access to.
Changing the Function.prototype.toString result of a function (from including [native code] to not including it) could also change the behaviour of your application given the popularity of inspecting its output today. A switch for that would very likely be unpopular among vendors.
This seems like a consequence of having those directives impact two different use cases. I opened an issue to discuss this topic specifically at https://github.com/tc39/proposal-function-implementation-hiding/issues/38. Currently those two use cases seem to me to be unnecessarily coupled together.
Tools like Sentry can still be used in the presence of these directives. In fact, they will most often be improved by allowing the user to better separate their code from library/shim code.
Filtering out parts of stack traces already seems to be supported by Sentry. I'm not sure they support differentiating between shim/library code and application code, but loggers or loggers' consumers seem to be a more natural place to implement this type of filtering. It would be difficult for library/shim authors to determine in advance what insights into stack traces operators of applications/services will need for their specific use cases.
@misterdjules It is true that there is a design trade-off here between the ability to write secure code and the ability to introspect arbitrary aspects of the running program from unprivileged code. Of course I see value in the latter, but it is important that we not lose the ability to write secure code on a platform like the web. We made this choice when designing and including private fields and closures. Unprivileged runtime introspection tools similarly have no access to that private state.
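The private-fields precedent can be seen directly in a small sketch: private state is simply invisible to unprivileged reflection from other JavaScript code, while debuggers and engine-level tooling can still see it.

```javascript
// Private fields are encapsulated: no unprivileged JS reflection reveals them.
class Counter {
  #count = 0;
  increment() {
    return ++this.#count;
  }
}

const c = new Counter();
console.log(c.increment());      // 1
console.log(Object.keys(c));     // []
console.log(Reflect.ownKeys(c)); // []
console.log(JSON.stringify(c));  // {}
```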
@michaelficarra what about providing a private API that can be accessed only by engines? That way Node.js could expose it as a parameter allowing the hiding to be disabled, but you, as a user, wouldn't be able to disable it in a browser.
Would that be acceptable?
Just to be clear, we are not trying to disable the function implementation hiding from Function.prototype.toString, only the hiding of the stack.
Engines don't need a private API; they can already access whatever they like — private fields, source text, and stack frames all included. For example, try new Promise(r => setTimeout(r, 100)) in the browser REPL, which will show you the internal state of the promise, something that's not synchronously available to the running program.
Spoke to @ghermeto in person about this. Just needed to re-hash "How does this affect devtools or other methods of inspecting a function's implementation details?" with him. To be extremely clear, this proposal does not restrict programs, processes, or APIs that remain external to the running program in any way. This means that engine monitoring hooks, the console, the web inspector API, etc. will all be able to observe hidden stack frames. The only use cases affected by this proposal are observations made from unprivileged JavaScript code running alongside the code marked as "sensitive".
I'm not sure they support differentiating between shims/library code and application code
We do, but right now it's only based on the stacktrace frame's path, nothing else
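As a rough illustration of what path-based frame filtering looks like (the frame shape, paths, and predicate here are assumptions for the sketch, not Sentry's actual internals):

```javascript
// Hypothetical sketch: separate library/shim frames from application frames
// by looking only at each stack frame's file path.
function isAppFrame(frame) {
  return !/node_modules|polyfill/.test(frame.filename);
}

const frames = [
  { filename: '/app/src/index.js', function: 'main' },
  { filename: '/app/node_modules/shim/index.js', function: 'somePolyfill' },
  { filename: '/app/src/util.js', function: 'helper' },
];

const appFrames = frames.filter(isAppFrame);
console.log(appFrames.map(f => f.function)); // ['main', 'helper']
```

If hidden frames are simply absent from the trace, a heuristic like this still works; it only breaks when the consumer also depends on the relative ordering or count of frames.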
@michaelficarra thanks! 👍
I got feedback that it would be important to have very clear definitions and examples of what constitutes unprivileged JavaScript code and what would be considered privileged code.
cc/ @mmarchini
My first impression is that I very much do not support this proposal; I see it mainly as a new vector for malware to hide itself and a debugging nightmare. There might be environments where this is beneficial, but I hope that on the web, browsers and other engines make it very easy to disable this outright, should it become generally available. The open observability of code on the web is a hallmark feature of the platform, IMO. The problem statement described in the readme does not seem like something that is the language's to fix.
@nektro from dev tools, or as a user, you would always be able to view any code you wanted - this is just about the ability to view it from javascript.
Can we give application owners the ability to disable the directives from any library that uses them?
If I own the application, I can always lint for libraries that include these directives and make sure they are never pulled in, but it would be nice if I could use those libraries while making the conscious decision to disable the directives.
This would allow people to keep using tools like Sentry and other error tracking tools.
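One way application owners could get this today is at build time, since the directives are just prologue strings in code they serve. A hypothetical sketch follows; a real tool should use a proper parser (e.g. a Babel plugin), and this regex only handles simply formatted prologues:

```javascript
// Hypothetical build-step sketch: strip "sensitive" directive-prologue
// entries from bundled source before serving it. Illustrative only.
function stripSensitiveDirectives(source) {
  return source.replace(/^\s*(['"])sensitive\1;?\s*$/gm, '');
}

const input = `function f() {
  "sensitive";
  return 1;
}`;

const output = stripSensitiveDirectives(input);
console.log(output.includes('sensitive')); // false
```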