[Closed] eacousineau closed this pull request 5 years ago
Results from running:
Matlab R2013a on GLNXA64
Matlab 8.1.0.604 (R2013a) / Java 1.6.0_17 on GLNXA64 Linux 3.11.***
Machine: Core i7-3740QM CPU @ 2.70GHz, 7803 MB RAM (#35~precise1-Ubuntu SMP Fri May 2 21:32:55 UTC 2014)
nIters = 100000
Operation Time (µsec)
nop() function: 0.18
nop() subfunction: 0.18
@()[] anonymous function: 1.09
nop(obj) method: 6.71
nop() private fcn on @class: 0.18
classdef nop(obj): 9.59
classdef obj.nop(): 14.70
classdef private_nop(obj): 9.40
classdef class.static_nop(): 17.93
classdef constant: 7.11
classdef property: 2.36
classdef property with getter: 29.79
+pkg.nop() function: 13.20
+pkg.nop() from inside +pkg: 13.25
feval('nop'): 4.09
feval(@nop): 0.26
eval('nop'): 50.98
MEX mexnop(): 1.26
builtin j(): 0.02
isempty(persistent): 0.00
struct s.foo field access: 0.19
struct s.foo.bar field access: 0.22
struct() init: 10.62
struct.field init: 12.75
arg multi in / out x 4: 0.24
arg multi in / out x 8: 0.30
arg vararg x 4: 7.33
arg vararg x 8: 9.43
arg vararg cell x 4: 11.04
arg struct x 1: 8.16
arg struct x 4: 8.17
arg struct mod: 8.78
arg struct mod ref: 12.12
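Per-call overheads like those above are typically measured by timing a tight loop of many calls and dividing the elapsed time by the iteration count. A minimal sketch of such a harness (a hypothetical helper, not necessarily the exact code used in this repo):

```matlab
function t_usec = time_call(fcn, nIters)
% TIME_CALL  Estimate per-call overhead of FCN in microseconds.
%   fcn    - function handle taking no arguments, e.g. @nop
%   nIters - number of calls to average over, e.g. 100000
tic;
for i = 1:nIters
    fcn();
end
t_usec = (toc / nIters) * 1e6;  % seconds -> microseconds per call
end
```

For example, `time_call(@nop, 100000)` would approximate the "nop() function" row. Note that averaging over a loop includes loop overhead itself, which is why the "isempty(persistent)" baseline can read near zero.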
Hi Eric,
Thanks for the input, but I think I'm going to decline this pull request. There are a few different things going on here.
The Java thing was breaking because it was compiled with 1.8. I recompiled it with 1.6 (https://github.com/apjanke/matlab-bench/commit/a87f8738546c92c727e120aef61612c4d337a482) and it should be compatible with older Matlab versions now. If we need to go back further, I can pull down an older JDK and build with that.
The timings are in microseconds, not milliseconds, so mu (µ) is the standard metric prefix, and "msec" would be incorrect. Is the mu causing a display issue?
I'd also prefer to keep the focus of the main benchmark on just the function call itself, and not the cost of passing arguments. That opens up a pretty big space for exploration. It's worth looking at, but probably makes more sense in a separate benchmark function. Especially because it's a multidimensional space: you've got arg counts, types, and sizes all interacting, and probably with both fixed and marginal cost components. Seems amenable to a tabular presentation or similar where you could do a lot more sample points (e.g. all arg counts from 1 to 100) and present a denser output than the simplistic "name: time" used in the current benchmark.
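A separate argument-passing benchmark along those lines might sweep the argument count and print a dense table of times. A rough sketch, assuming a hypothetical varargs no-op sink (names and structure are illustrative, not part of the existing benchmark):

```matlab
function bench_arg_counts(nIters)
% BENCH_ARG_COUNTS  Sketch: per-call time vs. number of input arguments.
%   nIters - number of calls to average over, e.g. 100000
argCounts = [1 2 4 8 16 32 64 100];
fprintf('%8s  %12s\n', 'nargs', 'usec/call');
for n = argCounts
    args = num2cell(zeros(1, n));    % n dummy scalar inputs
    tic;
    for i = 1:nIters
        nop_varargin(args{:});
    end
    t = (toc / nIters) * 1e6;        % seconds -> microseconds per call
    fprintf('%8d  %12.3f\n', n, t);
end
end

function nop_varargin(varargin)
% No-op sink accepting a variable number of arguments.
end
```

Plotting or fitting these times against the argument count would separate the fixed call overhead from the marginal per-argument cost.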
This updates the benchmark to (a) add additional studies on arguments, and (b) add a `useJava` option (as a quick fix for Issue #1).