Closed totherik closed 11 years ago
Impressive! Let me talk to the group; seems like a good plan. Were you able to test this on Rhino? We use Rhino heavily.
Ahh, good catch. I have not tested on Rhino.
And on V8 too, but we can test it out on V8 if you cannot.
It looks like `server.js` may be unique to Node.js environments. I'm not terribly familiar with Rhino, but it looks like the check here will use the CommonJS API, if available (Node.js and Rhino 1.7R3+), and the next check for `process` will isolate to Node.js before requiring `server.js`. Additionally, the `server.js` file itself depends on the `path` and `vm` modules, which, AFAIK, are specific to Node.js.

It could be that Rhino and vanilla V8 just use the default `loadSource`, which gets the advantage of the blazing fast `eval` implementation. I stuck with reusing a known context in the interest of safety, even though it's slower than `eval`.
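The detection order described above can be sketched as a small function over the ambient globals; the function name and the returned labels are made up for illustration and are not dust's actual API.

```javascript
// Hedged sketch of the environment checks described above.
function detectEnvironment(globals) {
  if (typeof globals.exports === 'undefined') {
    // No CommonJS API: plain V8 / browser path, default loadSource applies.
    return 'browser';
  }
  if (typeof globals.process === 'undefined') {
    // CommonJS without `process`: e.g. Rhino 1.7R3+.
    return 'commonjs';
  }
  // CommonJS plus `process`: Node.js, the only case that requires server.js.
  return 'node';
}
```

Written this way, Rhino falls through to the CommonJS branch and never reaches the Node-specific loader.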
Ah, makes sense: `server.js` is used only in the Node.js case.
PR merged
While developing a Node.js app that relies heavily on `dust.loadSource`, I noticed the process memory usage was growing unbounded and thrashing like crazy. I traced it back to `loadSource` in `server.js` creating a new VM context on every invocation. Unsurprisingly, this is causing issues.
Test Case
I put together a test case of 3 variations: the original code, caching and reusing a VM context, and vanilla `eval`. The test was simple: it invoked `dust.loadSource` 100,000 times with a compiled template string, taking memory and timing snapshots every 1000 iterations.
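A harness shaped like the test described might look like the following; the names are illustrative, not the actual test code.

```javascript
// Call a loadSource implementation repeatedly and record a memory/time
// snapshot every 1000 iterations.
function benchmark(loadSource, source, iterations) {
  var snapshots = [];
  var start = Date.now();
  for (var i = 1; i <= iterations; i += 1) {
    loadSource(source);
    if (i % 1000 === 0) {
      snapshots.push({
        iteration: i,
        elapsedMs: Date.now() - start,
        heapUsedBytes: process.memoryUsage().heapUsed
      });
    }
  }
  return snapshots;
}
```

Running each of the three variations through the same harness makes the timing and heap curves directly comparable.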
Hardware
The test was run on a 2.5 GHz Intel Core i7 MBP with 8GB of RAM.
Results (100,000 iterations)

Performance

(performance chart; `eval` was the fastest variant)

Memory Usage

(memory usage chart)
Recommendation
While `eval` is blazingly fast, it seems creating and caching a context might be safer while still gaining an 86% performance increase. PR to follow.