TomFrost / Jexl

Javascript Expression Language: Powerful context-based expression parser and evaluator
MIT License

Allow evaluating from precompiled AST object #72

Closed · austinkelleher closed this issue 4 years ago

austinkelleher commented 5 years ago

Problem

Currently, Jexl supports precompilation by saving the returned Expression object in memory. Example:

const jexl = require('jexl')

const compiled = jexl.compile('name')
compiled.evalSync({ name: 'John' })

I am using AWS Lambda, so the processes are short lived. I precompile my Jexl code and store the private expression._ast object for later evaluation. My current hack is basically:

const jexl = require('jexl')

const ast = ...
const expression = '...'

// Expression is Jexl's internal class, reached here through private APIs
const jexlExpression = new Expression(
  jexl._getLang(),
  expression
)

jexlExpression._ast = ast
jexlExpression.evalSync({ ... })

Proposal

We should consider exposing a public API for evaluating from a provided AST object.

Here is a possible API:

Fetching AST

We could change Expression.prototype._getAst to Expression.prototype.getAst:

const ast = jexl.compile(...).getAst();

Evaluating AST

jexl.evalSyncFromAst(expression, ast, context)
jexl.evalFromAst(expression, ast, context)

Thoughts @TomFrost? I would be happy to work on this change if we can come up with an API.

austinkelleher commented 5 years ago

Any thoughts @TomFrost? I'm still interested in working on this improvement!

TomFrost commented 4 years ago

Hi Austin, sorry for the delayed response on this! Jumping back into Jexl after a job change.

You've given me a lot to think about for sure. The topic of caching the AST has come up before, so I did some sanity testing to see what the benefit of caching actually is. Timing the compilation of a reasonably complex Jexl expression against pulling its AST from popular data stores like Redis or DynamoDB, or even from a local file, local compilation was hilariously faster. All things considered, Jexl's compilation process is surprisingly light. Even precompiling your expressions only begins to show performance gains when your code evaluates the same expression repeatedly against different contexts.
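The sanity test described above could be sketched roughly like this — a hedged micro-benchmark, where `compileFn` is a stand-in parameter (pass `jexl.compile` in real use; a stub is used here so the sketch is self-contained):

```javascript
// Time N compilations of the same expression string and return total
// elapsed milliseconds. Compare this number against the latency of a
// round trip to Redis/DynamoDB to reproduce the comparison above.
function timeCompile(compileFn, expr, iterations = 1000) {
  const start = process.hrtime.bigint()
  for (let i = 0; i < iterations; i++) {
    compileFn(expr)
  }
  const elapsedNs = process.hrtime.bigint() - start
  return Number(elapsedNs) / 1e6 // milliseconds for all iterations
}
```

Even a single-digit-millisecond total for a thousand compilations would support the claim that a network fetch of a cached AST can't win.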

That said, with the Lambda use case the AST could be hardcoded so there's never a compilation step, which means you're no longer competing with an IO call. I totally get that. So the question becomes: is it worth the API complexity for a minor optimization?

...and the answer to that might very well be a yes. But given that compilation happens so quickly, let me toss another option your way:

AWS reuses instances of your Lambda when it doesn't sit idle for too long, and when that happens it reuses the execution context. This means you can cache data between runs just by assigning it to the process object! Consider this:

exports.handler = async event => {
  if (!process._myExpr) {
    // Compiled once per execution context; reused on warm starts
    process._myExpr = jexl.compile(...)
  }
  return process._myExpr.eval(...)
}

Now you get to avoid the ugly step of hardcoding the AST, the API stays simple, and you're still not recompiling the expression every run unless your lambda has a cold start. And if that's the case, the compilation of the Jexl expression is negligible compared to the cold start time.

What do you think?

austinkelleher commented 4 years ago

@TomFrost Thanks for the response! Hope you're enjoying your new job.

My use case is a bit different. Our users are defining Jexl templates, and the templates are later evaluated on the server (a Lambda function) given some arbitrary input. We aren't able to cache every user's template in the Lambda runtime. I personally haven't benchmarked Jexl, so I wasn't aware that on-the-fly compilation is faster in most cases!
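A possible middle ground for the many-templates case, sketched here under assumptions (the `compile` parameter stands in for `jexl.compile`; `MAX_ENTRIES` is an arbitrary illustrative bound, not a Jexl feature): bound the per-instance cache so hot templates stay compiled while memory stays flat.

```javascript
// Hedged sketch: a bounded in-memory cache keyed by the template string.
// When full, evict the oldest entry before inserting a new one.
const cache = new Map()
const MAX_ENTRIES = 500 // tune to your memory budget

function getCompiled(template, compile) {
  const hit = cache.get(template)
  if (hit !== undefined) return hit
  if (cache.size >= MAX_ENTRIES) {
    // Map iterates in insertion order, so the first key is the oldest entry
    cache.delete(cache.keys().next().value)
  }
  const compiled = compile(template)
  cache.set(template, compiled)
  return compiled
}
```

Warm Lambda invocations that see a repeated template then skip compilation entirely, without trying to hold every user's template at once.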

I should try doing some benchmarks myself. Thanks for the feedback.

TomFrost commented 4 years ago

@austinkelleher I am, thanks!

I see, I see -- thanks for the details! I definitely understand why it might make sense to store the compiled AST.

I'm looking at Expression.js now, and it would be fairly easy to write this in... but I'm hesitant, because it would mean either building version upgrading for the AST format, or exposing apps to unexpected errors from ASTs compiled by an earlier version of Jexl as the language changes. If you find the burden of compilation to be really high, I can look at either optimizing the lexing process (which I'm working on now ;-)) or lifting the AST to top-level support.

As-is, though, I wouldn't recommend calling any underscore-prefixed functions directly, as those are subject to change in a non-major version bump.

I'm going to close this for now, but yell if your testing turns up a need to revisit this!

TomFrost commented 4 years ago

One more quick note! If you are saving _ast somewhere, the next release WILL break your saved ASTs if any of them use transforms: I've rewritten Transform nodes as more generic FunctionCall nodes in order to implement #43.
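One defensive pattern against this kind of breakage — a sketch, not part of Jexl's API — is to tag every saved AST with the version that produced it and recompile from source on mismatch. The `compile` parameter and the version strings below are hypothetical stand-ins:

```javascript
// Hedged sketch: persist the original source alongside the AST and the
// compiler version; on version mismatch, recompile rather than trust an
// AST whose node shapes may have changed (e.g. Transform → FunctionCall).
function saveCompiled(source, ast, version) {
  return JSON.stringify({ version, source, ast })
}

function loadCompiled(saved, currentVersion, compile) {
  const { version, source, ast } = JSON.parse(saved)
  if (version !== currentVersion) {
    return { ast: compile(source), recompiled: true }
  }
  return { ast, recompiled: false }
}
```

The cost is storing the source string twice-over, but it turns a hard runtime failure on upgrade into a one-time recompilation.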