ufo5260987423 / various-program-languages-benchmark


Some notes about sumfp #1

Closed: rigille closed this issue 1 month ago

rigille commented 2 months ago

The results of the sumfp benchmark are pretty unintuitive. Chez does tail-call optimization while Node and Bun don't, so why does Chez take longer to run? It's all startup time.
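One way to see the startup cost from inside the process (a sketch, not code from the repo; it assumes Node's `perf_hooks` behavior, where `performance.timeOrigin` is set when the process begins and `performance.now()` counts milliseconds from that origin):

```javascript
// performance.now() measures time since the process started, so reading
// it as the very first statement of a script approximates how long the
// runtime spent initializing before any user code ran.
const startupSeconds = performance.now() / 1000;
console.log(`Startup overhead so far: ${startupSeconds.toFixed(5)} seconds`);
```

Comparing this number across `node` and `bun` gives a rough picture of how much of a short benchmark's wall-clock time is pure runtime initialization.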

src/sumfp/sumfp.scm

;;; SUMFP -- Compute sum of integers from 0 to n using floating point

(import (chezscheme))

(define (run n)
  (let loop ((i n) (sum 0.))
    (if (< i 0.)
        sum
        (loop (- i 1.) (+ i sum)))))
(define start-time (current-time))
(run 8000000)
(define end-time (current-time))
(define elapsed-time (time-difference end-time start-time))
(printf
  "Elapsed time: ~a seconds\n" 
  (+ (time-second elapsed-time)
     (/ (time-nanosecond elapsed-time) 1e9)))

Output:

Elapsed time: 1.50236e-4 seconds

src/sumfp/sumfp.js

var run = function (n) {
    var loop = function (i, sum) {
        if (i < 0.0)
            return sum;
        return loop(i - 1, i + sum);
    };
    return loop(n, 0.0);
};

const startTime = performance.now();
run(8000);
const endTime = performance.now();

const elapsedTime = endTime - startTime;
console.log(`Elapsed time: ${(elapsedTime/1000).toFixed(5)} seconds`);

Output:

Elapsed time: 0.00125 seconds
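Note that the JS version above recurses once per decrement and is only run with n = 8000, while the Scheme version iterates 8000000 times; in an engine without tail-call elimination, the recursive form would overflow the stack long before n reaches that size. An iterative rewrite (a sketch, not the repo's code) computes the same sum at any n:

```javascript
// Iterative equivalent of loop(i, sum): adds i from n down to 0.
// No tail calls are involved, so it works at large n even in engines
// that do not perform tail-call optimization.
function runIterative(n) {
    let sum = 0.0;
    for (let i = n; i >= 0; i -= 1) {
        sum += i;
    }
    return sum;
}

console.log(runIterative(8000000)); // n * (n + 1) / 2 = 32000004000000
```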

Startup time doesn't matter much if the program takes a few seconds to run, but if it takes milliseconds then it can make a big difference.
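A common way to make a fixed startup cost negligible (a sketch in the spirit of the thread, not the repo's benchmark harness) is to repeat the measured work many times inside one process and report the mean, so the per-call cost dominates the constant overhead:

```javascript
// Same recursive sum as the benchmark, wrapped in a repetition loop so
// the fixed startup cost is amortized across many calls.
function run(n) {
    function loop(i, sum) {
        if (i < 0.0) return sum;
        return loop(i - 1, i + sum);
    }
    return loop(n, 0.0);
}

const iterations = 1000;
const start = performance.now();
let checksum = 0.0; // accumulate results so the work cannot be optimized away
for (let k = 0; k < iterations; k += 1) {
    checksum += run(8000);
}
const meanSeconds = (performance.now() - start) / iterations / 1000;
console.log(`Mean per call: ${meanSeconds.toFixed(8)} seconds`);
```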

ufo5260987423 commented 2 months ago

Thank you for solving my puzzle.

I'll add your work as a new benchmark to this project, so more people will know the truth.

ufo5260987423 commented 2 months ago

Now my question is: how does Bun run so fast?

Command being timed: "scheme --optimize-level 3 --script ./src/sumfp/sumfp-ignore-setuptime.scm"
User time (seconds): 1.29

Command being timed: "node ./src/sumfp/sumfp-ignore-setuptime.js"
User time (seconds): 1.42

Command being timed: "bun run ./src/sumfp/sumfp-ignore-setuptime.js"
User time (seconds): 0.35

rigille commented 1 month ago

Looks like Bun does have tail-call optimization: https://www.onsclom.net/posts/javascript-tco. Another thing that can certainly hurt floating-point performance is when values aren't kept locally unboxed, or when operations carry runtime type checks. But we can only know for sure by looking at the machine code that is generated.
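A quick empirical probe for tail-call elimination (a hypothetical test, not from the thread): make a tail-recursive call far deeper than typical stack limits and see whether it completes or overflows. Note that the ES2015 spec only requires proper tail calls in strict mode, so the probe opts in explicitly:

```javascript
"use strict"; // proper tail calls are only specified for strict-mode code

function countdown(i) {
    if (i === 0) return "done";
    return countdown(i - 1); // in tail position: eligible for elimination
}

let result;
try {
    // 1e6 frames is far beyond typical stack limits, so this only
    // returns normally if the engine eliminates the tail calls.
    result = countdown(1e6);
} catch (e) {
    result = "stack overflow (no tail-call elimination)";
}
console.log(result);
```

Running this under `node` and under `bun` should show the difference directly: one prints `done`, the other reports a stack overflow.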