gpujs / gpu.js

GPU Accelerated JavaScript
https://gpu.rocks
MIT License
15.08k stars 650 forks

Precision of mathematical calculations #690

Open zlelik opened 3 years ago

zlelik commented 3 years ago

What is wrong?

A long loop with mathematical calculations gives wrong results. A loop from 1 to 1024 works fine, but longer loops give bigger errors: a loop up to 102400 gives an error of about 0.06, and up to 1024000 gives around a 30% error.

Where does it happen?

In the code below.

How do we replicate the issue?

Run the code below with 1024 and 1024000 in the for loops.

const gpu = new GPU();
const settings = {
    output: [1],
    tactic: 'precision'
};

const testFuncGPU = gpu.createKernel(function(a) {
  let sum = 0.0;
  let tmpRes = 0.0;
  for (let i = 1.0; i <= 1024; i++) {
    tmpRes = Math.sin(a * i) * Math.pow(a, 1.0/i);
    sum += tmpRes;
  }
  return sum;
}, settings);

function testFuncCPU(a) {
  let sum = 0.0;
  let tmpRes = 0.0;
  for (let i = 1.0; i <= 1024; i++) {
    tmpRes = Math.sin(a * i) * Math.pow(a, 1.0/i);
    sum += tmpRes;
  }
  return sum;
}

var startTime = (new Date()).getTime();
const gpuRes = testFuncGPU(2.4);
var gpuTime = (new Date()).getTime();
var results = "gpuRes: " + gpuRes + " in " + (gpuTime - startTime) + " ms";
console.log(results);

startTime = (new Date()).getTime();
const cpuRes = testFuncCPU(2.4);
var cpuTime = (new Date()).getTime();
results = "cpuRes: " + cpuRes + " in " + (cpuTime - startTime) + " ms";
console.log(results);

With 1024: GPU result is 1.051210641860962, CPU result is 1.051450324921995; the difference is around 0.0002.

With 102400: GPU result is 0.31567540764808655, CPU result is 0.37849366446319904; the difference is around 0.06.

With 1024000: GPU result is 0.9102823138237, CPU result is 1.1813192628963551; the results are completely different.

How important is this (1-5)?

5. Because of this problem I cannot use GPU.js at all for my tasks.

Expected behavior (i.e. solution)

The expectation is that CPU and GPU calculations will give exactly the same results.

Other Comments

I tried the settings tactic: 'precision', 'speed', etc., but they make no difference. Also, if I switch to CPU mode with const gpu = new GPU({ mode: 'cpu' }); then there is no error.
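For what it's worth, the gap is consistent with the GPU doing the arithmetic in 32-bit floats. Below is a rough CPU-side sketch using Math.fround to round every intermediate result to single precision. The function names are illustrative, and this only models the rounding of inputs and intermediates; the GPU's built-in sin is its own approximation, so the numbers will not match the GPU output exactly.

```javascript
// Emulate 32-bit floats on the CPU: Math.fround rounds a double to
// the nearest single-precision value, which is roughly what happens
// to every intermediate result inside a WebGL kernel.
function testFuncFloat32(a) {
  const a32 = Math.fround(a);
  let sum = 0;
  for (let i = 1; i <= 1024; i++) {
    const angle = Math.fround(a32 * i);
    const term = Math.fround(Math.sin(angle) * Math.fround(Math.pow(a32, 1 / i)));
    sum = Math.fround(sum + term);
  }
  return sum;
}

// The same loop in ordinary 64-bit arithmetic.
function testFuncFloat64(a) {
  let sum = 0;
  for (let i = 1; i <= 1024; i++) {
    sum += Math.sin(a * i) * Math.pow(a, 1 / i);
  }
  return sum;
}

const f32 = testFuncFloat32(2.4);
const f64 = testFuncFloat64(2.4);
console.log(f32, f64, Math.abs(f32 - f64)); // the two already disagree
```

Even at 1024 iterations the single-precision emulation drifts away from the double-precision sum, and the drift grows with the loop length.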

harshkhandeparkar commented 3 years ago

This error is probably caused by the massive angles. Try angle % (2 * Math.PI)

zlelik commented 3 years ago

This error is probably caused by the massive angles. Try angle % (2 * Math.PI)

Looks like you are right. I removed Math.sin and the results became much better. Is there any way to fix it, or is the only option to implement my own sin function with a precalculated table?
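A table-based sine with linear interpolation might look like the plain-JS sketch below (all names such as tableSin and TABLE_SIZE are illustrative, not gpu.js API). Note that the table approach still needs the angle reduced to one period first, so by itself it does not remove the large-angle precision problem.

```javascript
// Lookup-table sine with linear interpolation between table entries.
const TABLE_SIZE = 4096;
const TWO_PI = 2 * Math.PI;
const sinTable = new Float64Array(TABLE_SIZE + 1); // extra entry for k + 1
for (let k = 0; k <= TABLE_SIZE; k++) {
  sinTable[k] = Math.sin((k / TABLE_SIZE) * TWO_PI);
}

function tableSin(x) {
  // Reduce to [0, TWO_PI) first; this reduction is itself the weak
  // point once x grows large.
  let t = ((x % TWO_PI) / TWO_PI) * TABLE_SIZE;
  if (t < 0) t += TABLE_SIZE;
  const k = Math.floor(t);
  const frac = t - k;
  return sinTable[k] * (1 - frac) + sinTable[k + 1] * frac;
}

console.log(tableSin(1.1), Math.sin(1.1)); // agree to ~6 decimal places
```

With 4096 entries the interpolation error is on the order of 1e-7, which is comparable to single-precision accuracy anyway.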

midnight-dev commented 3 years ago

Is the reliable precision of sin/cos/tan methods contingent on whether or not calculations are run on a GPU? Using a modulus is a good workaround for angles, but that won't be applicable to every case.

Of course, there will be precision errors in really high/low numbers past 53 bits due to the nature of JavaScript, but is the precision expected to diverge when running the same code in different modes? I don't have a use case that I'm worried about here; it's just curiosity.

zlelik commented 3 years ago

Is the reliable precision of sin/cos/tan methods contingent on whether or not calculations are run on a GPU? Using a modulus is a good workaround for angles, but that won't be applicable to every case.

Of course, there will be precision errors in really high/low numbers past 53 bits due to the nature of JavaScript, but is the precision expected to diverge when running the same code in different modes? I don't have a use case that I'm worried about here; it's just curiosity.

My use case is this: I need to calculate a mathematical series (in the sense of https://en.wikipedia.org/wiki/Series_(mathematics)) that includes the Math.sin and Math.pow functions, usually as a sum of a million or so terms. I wanted to see whether it would be faster with GPU.js compared to normal CPU JavaScript. With normal CPU JavaScript there is no precision problem when I compare against professional mathematical systems like Wolfram Mathematica; the difference only appears around the 14th or 15th digit, e.g. JavaScript gives 1.123456789012355 and Wolfram Mathematica gives 1.123456789012343215. But with the GPU the difference is much bigger.

harshkhandeparkar commented 3 years ago

This error is probably caused by the massive angles. Try angle % (2 * Math.PI)

Looks like you are right. I removed Math.sin and the results became much better. Is there any way to fix it, or is the only option to implement my own sin function with a precalculated table?

If the library doesn't clamp the angles to the range -pi to pi internally (which it should), you can do it yourself and get much, much better results. I have tested this with sin and cos: no matter how many terms you use, large angles will always be imprecise.
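A minimal sketch of the clamping described above, reducing the angle into [-pi, pi) before calling Math.sin (the helper name reduceAngle is illustrative):

```javascript
// Reduce an angle into [-pi, pi) before taking its sine.
// This only helps if the angle is still accurate before reduction.
function reduceAngle(x) {
  const TWO_PI = 2 * Math.PI;
  let r = x % TWO_PI;              // r is now in (-TWO_PI, TWO_PI)
  if (r >= Math.PI) r -= TWO_PI;
  else if (r < -Math.PI) r += TWO_PI;
  return r;
}

console.log(reduceAngle(1000.5)); // a value in [-pi, pi)
```

In double precision this is harmless, but in single precision the product feeding the reduction must itself still be exact, which is the real limitation in the kernel above.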

Ne4to777 commented 3 years ago

I have a similar problem while raytracing a box. JS is bad at division operations. Just try playing with these:

export type ToPrecision = (x: number, n: number) => number;
export const floorToPrecision: ToPrecision = function floorToPrecision(x, n) {
    const factor = 10 ** n;
    return Math.floor(x * factor) / factor;
};
export const ceilToPrecision: ToPrecision = function ceilToPrecision(x, n) {
    const factor = 10 ** n;
    return Math.ceil(x * factor) / factor;
};

A factor value of 5-6 is enough for me to remove the artifacts.
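For reference, a self-contained plain-JavaScript version of the same rounding idea, with a quick sanity check (the correctly spelled name floorToPrecision is mine):

```javascript
// Snap a value down to n decimal places by scaling, flooring, and
// scaling back.
function floorToPrecision(x, n) {
  const factor = 10 ** n;          // e.g. n = 6 -> factor = 1e6
  return Math.floor(x * factor) / factor;
}

// 0.1 + 0.2 is 0.30000000000000004 in doubles; flooring at six
// decimal places snaps it back to 0.3.
console.log(floorToPrecision(0.1 + 0.2, 6)); // → 0.3
```

This kind of rounding hides small accumulated error, which is why it helps with rendering artifacts, but it cannot recover digits that were already lost upstream.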

zlelik commented 3 years ago

angle % (2 * Math.PI)

I tried it like this:

const testFuncGPU = gpu.createKernel(function(a) {
  let sum = 0.0;
  let tmpRes = 0.0;
  for (let i = 1.0; i <= 1024000; i++) {
    tmpRes = Math.sin((a * i) % (2 * Math.PI)) * Math.pow(a, 1.0/i);
    sum += tmpRes;
  }
  return sum;
}, settings);

but it did not help much :( These are the results: gpuRes: 3.258890151977539, cpuRes: 0.9418870869125511
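The in-kernel modulo likely cannot help here: in 32-bit floats the product a * i is already rounded before the % is applied. A quick check with Math.fround (which rounds a double to the nearest single-precision value):

```javascript
// At i = 1024000 the product a * i is about 2.46 million, where a
// 32-bit float's spacing (ulp) is 0.25. The rounding of the product
// alone costs a sizeable fraction of a radian, so taking sin() of the
// reduced angle is already meaningless.
const a32 = Math.fround(2.4);        // 2.4 as stored in single precision
const i = 1024000;
const prod64 = a32 * i;              // exact product of the stored value
const prod32 = Math.fround(prod64);  // product as a float32 kernel sees it
console.log(Math.abs(prod32 - prod64)); // → 0.09765625 radians
```

So the reduction would have to happen before the precision is lost, e.g. by keeping the angle increment small and accumulating it modulo 2*pi instead of computing a * i directly.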

zlelik commented 3 years ago

I have a similar problem while raytracing a box. JS is bad at division operations. Just try playing with these:

export type ToPrecision = (x: number, n: number) => number;
export const floorToPrecision: ToPrecision = function floorToPrecision(x, n) {
    const factor = 10 ** n;
    return Math.floor(x * factor) / factor;
};
export const ceilToPrecision: ToPrecision = function ceilToPrecision(x, n) {
    const factor = 10 ** n;
    return Math.ceil(x * factor) / factor;
};

A factor value of 5-6 is enough for me to remove the artifacts.

How can I apply this to Math.sin()?

Ne4to777 commented 3 years ago

Oh, yes, it is a sine problem with floats. Math.sin(1.1): gpuRes: 0.8912073969841003 in 35 ms, cpuRes: 0.8912073373794556 in 42 ms

I was hoping the problem was division.

ghost commented 3 years ago

Isn't this caused by how PCs do multiplication? As far as I remember, they swap bits or something.