kno10 opened this issue 9 months ago
I did some naive JMH benchmarks (input values in the range 0 to 1), with `-Djafama.fastlog=true -Djafama.fastsqrt=true` (a minimal sketch of the harness shape follows the table):
| Benchmark | (size) | Mode | Cnt | Score | Error | Units |
|---|---|---|---|---|---|---|
| JaFaMa.atan_commons | 1000000 | avgt | 20 | 50,885 | ± 0,786 | ns/op |
| JaFaMa.atan_jafama | 1000000 | avgt | 20 | 9,572 | ± 0,133 | ns/op |
| JaFaMa.atan_java | 1000000 | avgt | 20 | 46,685 | ± 0,344 | ns/op |
| JaFaMa.atan_strict | 1000000 | avgt | 20 | 46,682 | ± 0,224 | ns/op |
| JaFaMa.ceil_commons | 1000000 | avgt | 20 | 6,641 | ± 0,077 | ns/op |
| JaFaMa.ceil_jafama | 1000000 | avgt | 20 | 5,523 | ± 0,022 | ns/op |
| JaFaMa.ceil_java | 1000000 | avgt | 20 | 4,384 | ± 0,004 | ns/op |
| JaFaMa.ceil_strict | 1000000 | avgt | 20 | 4,478 | ± 0,054 | ns/op |
| JaFaMa.cos_commons | 1000000 | avgt | 20 | 31,059 | ± 0,188 | ns/op |
| JaFaMa.cos_jafama | 1000000 | avgt | 20 | 11,047 | ± 0,217 | ns/op |
| JaFaMa.cos_java | 1000000 | avgt | 20 | 29,382 | ± 0,079 | ns/op |
| JaFaMa.cos_strict | 1000000 | avgt | 20 | 42,947 | ± 0,573 | ns/op |
| JaFaMa.exp_commons | 1000000 | avgt | 20 | 23,770 | ± 0,158 | ns/op |
| JaFaMa.exp_jafama | 1000000 | avgt | 20 | 12,596 | ± 0,042 | ns/op |
| JaFaMa.exp_java | 1000000 | avgt | 20 | 30,689 | ± 0,242 | ns/op |
| JaFaMa.exp_strict | 1000000 | avgt | 20 | 24,657 | ± 0,168 | ns/op |
| JaFaMa.log1p_commons | 1000000 | avgt | 20 | 142,297 | ± 2,009 | ns/op |
| JaFaMa.log1p_jafama | 1000000 | avgt | 20 | 14,763 | ± 0,097 | ns/op |
| JaFaMa.log1p_java | 1000000 | avgt | 20 | 45,126 | ± 0,086 | ns/op |
| JaFaMa.log1p_strict | 1000000 | avgt | 20 | 45,091 | ± 0,179 | ns/op |
| JaFaMa.log_commons | 1000000 | avgt | 20 | 43,293 | ± 0,086 | ns/op |
| JaFaMa.log_jafama | 1000000 | avgt | 20 | 30,487 | ± 0,343 | ns/op |
| JaFaMa.log_java | 1000000 | avgt | 20 | 31,480 | ± 0,215 | ns/op |
| JaFaMa.log_strict | 1000000 | avgt | 20 | 39,382 | ± 0,406 | ns/op |
| JaFaMa.round_commons | 1000000 | avgt | 20 | 5,042 | ± 0,026 | ns/op |
| JaFaMa.round_jafama | 1000000 | avgt | 20 | 5,711 | ± 0,064 | ns/op |
| JaFaMa.round_java | 1000000 | avgt | 20 | 5,334 | ± 0,009 | ns/op |
| JaFaMa.round_strict | 1000000 | avgt | 20 | 5,347 | ± 0,020 | ns/op |
| JaFaMa.sin_commons | 1000000 | avgt | 20 | 23,829 | ± 0,147 | ns/op |
| JaFaMa.sin_jafama | 1000000 | avgt | 20 | 10,737 | ± 0,077 | ns/op |
| JaFaMa.sin_java | 1000000 | avgt | 20 | 30,734 | ± 0,091 | ns/op |
| JaFaMa.sin_strict | 1000000 | avgt | 20 | 36,735 | ± 0,082 | ns/op |
| JaFaMa.sqrt_jafama | 1000000 | avgt | 20 | 3,304 | ± 0,064 | ns/op |
| JaFaMa.sqrt_java | 1000000 | avgt | 20 | 3,282 | ± 0,012 | ns/op |
| JaFaMa.sqrt_strict | 1000000 | avgt | 20 | 3,306 | ± 0,060 | ns/op |
| JaFaMa.tan_commons | 1000000 | avgt | 20 | 54,395 | ± 0,188 | ns/op |
| JaFaMa.tan_jafama | 1000000 | avgt | 20 | 10,073 | ± 0,054 | ns/op |
| JaFaMa.tan_java | 1000000 | avgt | 20 | 40,770 | ± 0,069 | ns/op |
| JaFaMa.tan_strict | 1000000 | avgt | 20 | 48,009 | ± 0,120 | ns/op |
It seems JaFaMa is still beneficial, except for log, ceil, and sqrt.
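For reference, the harness was shaped roughly like the sketch below. This is not the exact code that produced the numbers above; the class name, input generation, and index handling are assumptions, and only the atan variants are shown.

```java
import java.util.Random;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class JaFaMaAtanBench {

    @Param({"1000000"})
    int size;

    double[] values;
    int i;

    @Setup
    public void setup() {
        // naive input data in [0, 1), as in the benchmark above
        Random rnd = new Random(42);
        values = new double[size];
        for (int j = 0; j < size; j++) {
            values[j] = rnd.nextDouble();
        }
    }

    private double next() {
        // cycle through the precomputed inputs (adds a small constant overhead per call)
        i = (i + 1) % values.length;
        return values[i];
    }

    @Benchmark
    public double atan_java() {
        return Math.atan(next());
    }

    @Benchmark
    public double atan_strict() {
        return StrictMath.atan(next());
    }

    @Benchmark
    public double atan_commons() {
        return org.apache.commons.math3.util.FastMath.atan(next());
    }

    @Benchmark
    public double atan_jafama() {
        return net.jafama.FastMath.atan(next());
    }
}
```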
Hello. Sorry for the late reply; I don't check this place often.
Good to see that it's still beneficial for most methods ;) log and sqrt often delegate to hardware so it's hard to beat that, and ceil/floor are simple enough to be naturally fast in the JDK.
One thing I observed getting slower across JDK versions 5 to 7 is the time it takes to initialize the lookup tables (as mentioned in the readme), but I didn't check whether that changed with newer JDKs.
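A crude way to see that cost on a given JDK is to time the first FastMath call in a fresh JVM, since class initialization (table computation) happens on first use. This is just a sketch, assuming nothing else has loaded the class beforehand, not a proper measurement:

```java
public class InitTimingDemo {
    public static void main(String[] args) {
        long t0 = System.nanoTime();
        // first use triggers net.jafama.FastMath class initialization, i.e. table computation
        double ignored = net.jafama.FastMath.cos(0.5);
        long t1 = System.nanoTime();
        System.out.printf("first FastMath call (incl. table init): %.1f ms%n", (t1 - t0) / 1e6);
    }
}
```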
I don't plan on keeping up-to-date benchmarks unless I make a new version, in which case I might do some related updates.
I have a few non-committed upgrades for some "quick" methods (better constants), and I could add some methods that were added to JDK Math, but I don't feel that would be worth a new version (not releasing too often makes it easier for projects to harmonize their dependencies).
That said, I've been playing around with BigDecimal-like code for exploring fractals: mutable (to allow pooling, which can help a lot), based on binary rather than decimal representation (which makes multiplications and divisions by powers of two much faster), and with optional parallelism. I might add a "big" package to jafama with these "fast" "big" classes, plus a related FastBigMath class with various kinds of functions: that would be worth a new version.
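To illustrate the binary-vs-decimal point, here is a hypothetical sketch (not the actual planned classes): with a base-2 scaled representation, value = significand * 2^exp, multiplying or dividing by a power of two only touches the exponent, with no digit-by-digit work as a decimal scale would require.

```java
// Hypothetical illustration only, not jafama's future "big" classes.
final class MutableBinaryScaled {
    long significand; // a real "big" class would use an arbitrary-length limb array instead
    int exp;          // base-2 exponent

    MutableBinaryScaled(long significand, int exp) {
        this.significand = significand;
        this.exp = exp;
    }

    /** Multiplies this value by 2^k in place; negative k divides. O(1). */
    void multiplyByPowerOfTwo(int k) {
        this.exp += k;
    }

    double toDouble() {
        // only valid while the result fits a double; enough for the illustration
        return significand * Math.pow(2.0, exp);
    }
}
```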
It would be great to reevaluate on modern CPUs and JDKs which of these functions are still beneficial to use.