The following version of defun is almost 3x faster on non-variadic functions on my machine. The performance improvement is achieved by avoiding wrapping the function arguments into a vector on each call.
It works like this (considering the example from the README). The defun declaration
```clojure
(defun accum-defun
  ([0 ret] ret)
  ([n ret] (recur (dec n) (+ n ret)))
  ([n] (recur n 0)))
```
produces the following:
```clojure
(def accum-defun
  (fn fn__17870
    ([G__17872] (fn__17870 G__17872 defun.core/placeholder))
    ([G__17872 G__17873]
     (let [placeholder17871 defun.core/placeholder]
       (match [G__17872 G__17873]
         [n placeholder17871] (do (recur n 0))
         [0 ret] (do ret)
         [n ret] (do (recur (dec n) (+ n ret))))))))
```
The idea is that all matching happens in the body with the greatest arity. All the arguments are matched together using the vector syntax of match. The macro uses a placeholder object to distinguish between signatures of different arities.
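To make the trick concrete, here is a hand-written sketch of the same idea (hypothetical names, not the macro's actual output): a unique sentinel object marks a "missing" argument, so a lower-arity body can delegate to the highest-arity body without allocating an argument vector on each call.

```clojure
;; A private sentinel; no real argument can be identical? to it.
(defonce ^:private placeholder (Object.))

(defn accum-by-hand
  ;; The 1-arity body pads with the sentinel and delegates.
  ([n] (accum-by-hand n placeholder))
  ;; The 2-arity body dispatches on whether the sentinel is present.
  ([n ret]
   (cond
     (identical? ret placeholder) (recur n 0)      ; came from the 1-arity call
     (zero? n)                    ret              ; [0 ret] clause
     :else                        (recur (dec n) (+ n ret))))) ; [n ret] clause
```

For example, (accum-by-hand 100) returns 5050, the same as the defun version.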
For variadic functions nothing has changed: the macro generates the same code as before.
I would appreciate it if you could check the benchmarks on your machine/Java version.
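For reference, one way to run such a measurement, assuming the criterium library is on the classpath:

```clojure
(require '[criterium.core :refer [quick-bench]])

;; Benchmark the non-variadic path; compare against the old defun version.
(quick-bench (accum-defun 10000))
```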