samskivert / jmustache

A Java implementation of the Mustache templating language.

CPU usage of DefaultCollector.getMethodOn during template execution #104

Closed by tinustate 9 months ago

tinustate commented 5 years ago

I've been giving VisualVM a try and it seems that the getMethodOn method in the DefaultCollector class uses quite a lot of CPU (relative to my other classes).

I extended BasicCollector, copied the source code from DefaultCollector, and added a few log lines to see what was going on. After a bit of fiddling and rethinking/rewriting a few troublesome mustache templates, the CPU usage went down.

In the createFetcher method I commented out the last two checks (getField and getIfaceMethod) to get even better performance.

The getMethodOn method seems to do magic with reflection by using getDeclaredMethod to get a method from a class. This is fast when called once or twice, but it slows down when called many times, especially when your object extends objects that extend objects that implement an interface: the execution process then tries to find fields on the object, the extended objects, and the interface. Using 5 variables in the template leads to 20 calls to getMethodOn. I hacked/solved this by adding all variables to the first object, but this is quite hacky :-)
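
To illustrate the pattern (just a sketch of the general idea, not jmustache's actual code): getDeclaredMethod only sees methods declared directly on a class, so a resolver built on it has to try every class and interface in the hierarchy for every variable.

```java
import java.lang.reflect.Method;

public class DeclaredMethodWalk {
    // A hierarchy like the one described above: a concrete class extending
    // a base class that implements an interface.
    interface HasName { String name (); }
    static class Base implements HasName {
        public String name () { return "base"; }
    }
    static class Model extends Base {
        public String title () { return "hello"; }
    }

    // getDeclaredMethod only sees methods declared directly on the class,
    // so a resolver built on it has to try each class in turn.
    static Method find (Class<?> clazz, String name) {
        for (Class<?> c = clazz; c != null; c = c.getSuperclass()) {
            try {
                return c.getDeclaredMethod(name); // one reflective call per level
            } catch (NoSuchMethodException e) {
                // keep walking up the hierarchy
            }
        }
        return null; // interfaces would still need to be checked separately
    }

    public static void main (String[] args) {
        // Resolving "name" on Model misses on Model itself and only hits on Base;
        // with 5 variables and a hierarchy 4 types deep that is ~20 lookups.
        System.out.println(find(Model.class, "name"));  // found on Base
        System.out.println(find(Model.class, "title")); // found on Model
    }
}
```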

Is there a way to "optimize" this method? Maybe with caching? (hacky solutions are welcome)
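
Something along these lines is the kind of caching I have in mind (a hypothetical sketch, not jmustache code): remember each (class, variable name) lookup, including misses, so the reflective search only happens once per class.

```java
import java.lang.reflect.Method;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical cache keyed on class + variable name; hits and misses are both
// remembered, so the reflective search runs once per class rather than on
// every template execution.
public class MethodLookupCache {
    private final ConcurrentHashMap<String, Optional<Method>> cache = new ConcurrentHashMap<>();

    public Method lookup (Class<?> clazz, String name) {
        String key = clazz.getName() + "#" + name;
        return cache.computeIfAbsent(key, k -> {
            try {
                // getMethod also finds public methods inherited from superclasses
                // and interfaces, so no manual hierarchy walk is needed here
                return Optional.of(clazz.getMethod(name));
            } catch (NoSuchMethodException e) {
                return Optional.empty(); // cache the miss as well
            }
        }).orElse(null);
    }
}
```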

And thanks for the great library, especially the VariableFetcher cache which really speeds things up!

samskivert commented 5 years ago

Are you unnecessarily recreating Template objects? Or is your app somehow designed such that many templates have to be created? The process of creating variable fetchers is necessarily expensive because it uses reflection to determine how to obtain the desired property from an object, but it then caches a VariableFetcher so that subsequent lookups should be fast.

The expected use case is that you have a fixed number of templates and they are compiled and then used many times. The first time a template is used, it will be slow because the variable fetchers are resolved, but subsequent uses should reuse the fetchers and be orders of magnitude faster. Are you somehow recreating templates every time?
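
The intended pattern, as a minimal sketch using the public compiler API, looks roughly like this:

```java
import com.samskivert.mustache.Mustache;
import com.samskivert.mustache.Template;

public class CompileOnce {
    // Compile the template a single time and keep it around. The first few
    // executions pay the reflective cost of resolving variable fetchers;
    // after that the cached fetchers are reused.
    static final Template GREETING = Mustache.compiler().compile("Hello {{name}}!");

    public static class Context {
        public final String name; // resolved by the default collector as a public field
        public Context (String name) { this.name = name; }
    }

    public static void main (String[] args) {
        // Reuse the same Template instance for every execution; don't recompile per request.
        for (String who : new String[] { "world", "moon", "stars" }) {
            System.out.println(GREETING.execute(new Context(who)));
        }
    }
}
```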

There are ways that getMethod could be made faster, but they make variable resolution less flexible, so I would be against enabling them by default. We could make the collector configurable so that if you knew that your objects behaved in a certain way, then you could take advantage of these optimizations. The main optimizations would be:

- resolving methods with getMethod (which already sees public methods inherited from superclasses and interfaces) instead of walking the class hierarchy with getDeclaredMethod
- skipping the fallback lookups on fields and interface methods when your models only expose properties through public methods
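
If you want to experiment today, you can already plug your own collector into the compiler. A rough sketch, assuming DefaultCollector is subclassable and createFetcher(Object, String) is the override point (otherwise extend BasicCollector directly, as you did):

```java
import java.util.HashMap;
import java.util.Map;

import com.samskivert.mustache.DefaultCollector;
import com.samskivert.mustache.Mustache;

public class CustomCollectorExample {
    public static void main (String[] args) {
        // Plug a customized collector into the compiler. Here it just delegates to
        // DefaultCollector; a real experiment might restrict or reorder the lookups
        // (methods, fields, interface methods) that createFetcher performs.
        Mustache.Compiler compiler = Mustache.compiler().withCollector(
            new DefaultCollector() {
                @Override
                public Mustache.VariableFetcher createFetcher (Object ctx, String name) {
                    // delegate to the default reflection-based resolution; limit it
                    // here if your model classes follow stricter conventions
                    return super.createFetcher(ctx, name);
                }
            });

        Map<String, Object> ctx = new HashMap<>();
        ctx.put("greeting", "hi");
        System.out.println(compiler.compile("{{greeting}}").execute(ctx));
    }
}
```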

Those would likely bring some performance improvement, but again, this is all optimizing something that should not be a bottleneck at all if you're using things "properly". Unless your use case somehow unavoidably demands that you create new templates at high frequency and cannot benefit from the variable fetcher caching that already takes place, the variable fetcher resolution cost should be balanced out by the fact that it's only done once per template, after which the template can be executed thousands or millions of times using the cached fetchers.

tinustate commented 5 years ago

Cheers for the helpful reply, @samskivert! I had to do some more testing to understand your tips. Using getMethod instead of getDeclaredMethod solves the problem of jmustache looking for methods on the extended classes and interfaces.
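
For anyone who runs into the same thing, the difference in plain JDK reflection looks like this (just a demo, not jmustache code):

```java
import java.lang.reflect.Method;

public class GetMethodVsDeclared {
    interface Greeter { String greeting (); }
    static class Base implements Greeter {
        public String greeting () { return "hello"; }
    }
    static class Child extends Base { }

    public static void main (String[] args) throws Exception {
        // getMethod sees public methods inherited from superclasses and interfaces,
        // so a single call on the concrete class is enough.
        Method m = Child.class.getMethod("greeting");
        System.out.println(m.invoke(new Child())); // prints "hello"

        // getDeclaredMethod only sees methods declared directly on Child,
        // so this throws NoSuchMethodException.
        try {
            Child.class.getDeclaredMethod("greeting");
        } catch (NoSuchMethodException e) {
            System.out.println("not declared directly on Child");
        }
    }
}
```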

The variable fetcher cache does indeed work as expected. I got confused because I looped through a list with 20 or so different classes that all extended the same base class and implemented the same interface. Many of these classes don't contain the methods/properties that jmustache tries to find using getMethodOn, and some values are manually added to the model by the classes. This resulted in many, many calls to getMethodOn.

I appreciate your help; everything is a lot quicker and nicer to the CPU now.

agentgt commented 9 months ago

Given:

> I appreciate your help; everything is a lot quicker and nicer to the CPU now.

@samskivert I think we can close this issue now?