gpujs / gpu.js

GPU Accelerated JavaScript
https://gpu.rocks
MIT License
15.08k stars 650 forks

Accessing Textures within a 2D Array is impossible #685

Open andrewbrg opened 3 years ago

andrewbrg commented 3 years ago

What is wrong?

If we pass a kernel parameter that is a 2D array whose child arrays contain images, it is impossible to access an image's RGBA values from within the kernel.

So for example if this was a parameter to a kernel:

[image attachment]

Then, within the kernel, we would expect to be able to do the following (where y and x correspond to the kernel size):

parameter[2][13][y][x][0]; for the R value of that texture

or even simply return parameter[2][13][y][x];

However, this is impossible.

Even simply trying the following will give an error:

const o = parameter[2];
const b = o[13];

The above should, at the very least, yield a reference to the image texture, but instead the kernel throws an error.
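For reference, plain JavaScript has no trouble with this depth of nesting; the limitation is specific to how gpu.js transpiles the kernel into GLSL. A minimal sketch of the expected data shape (hypothetical sizes, no gpu.js involved):

```javascript
// Hypothetical parameter shape: a list of objects, each holding a list of
// images, each image being a height x width grid of RGBA quadruplets.
const width = 2, height = 2;

// Build one fake 2x2 RGBA "image" filled with solid red pixels.
const makeImage = () =>
  Array.from({ length: height }, () =>
    Array.from({ length: width }, () => [255, 0, 0, 255]));

// objs[objectIndex][imageIndex] -> image
const objs = [[makeImage()], [makeImage()]];

// Plain JS happily resolves all five index levels...
const r = objs[1][0][1][1][0]; // 255 (the R channel)

// ...and intermediate references work too, which is exactly what
// fails inside a gpu.js kernel.
const o = objs[1];
const b = o[0];
console.log(r, b[1][1][3]); // 255 255
```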

You can see this in more detail by checking out the commented code here on line 294: https://github.com/andrewbrg/andrewbrg.github.io/blob/main/src/js/classes/kernels.js#L294

In case you wish to run this repo as a test the images are being loaded into the objs array from line 48 of this file: https://github.com/andrewbrg/andrewbrg.github.io/blob/main/src/js/classes/engine.js#L48

This is true for any index of the parameter (even simple numeric ones), for example:

const o = parameter[2];
const b = o[2];

Gives:

Uncaught Error: Error compiling fragment shader:
ERROR: 0:619: 'user_oSize' : undeclared identifier
ERROR: 0:619: 'user_oDim' : undeclared identifier
ERROR: 0:619: 'getMemoryOptimized32' : no matching overloaded function found

(the same three errors are repeated two more times)

The only way to get anything to work is by referencing the full path through the array, for example:

const b = parameter[2][2];

Still, the problem remains that we are unable to go more than four indices deep; for example, this also fails:

const r = parameter[2][13][1][1][0];

Gives: Uncaught Error: Unexpected expression on line 153, position 17: objs[2][13][1][1][0]

Which, as far as I can tell, leaves no way to access the image textures.

Where does it happen?

In every kernel and every combination of parameters with images in arrays that was tested.

How do we replicate the issue?

Create an array of images, pass that array as a parameter to a kernel, and try to render an image from its RGBA values.

How important is this (1-5)?

I would say a 4 on this one.

Expected behavior (i.e. solution)

The solution would be some way to pass a number of indexed textures to a kernel and be able to retrieve them from within the kernel.
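Until deep indexing is supported, one possible workaround (a sketch, assuming all images share the same dimensions; the function name and layout here are illustrative, not part of gpu.js) is to flatten the image stack into a single 3D array and compute the image index manually, so the kernel only ever needs three index levels:

```javascript
// Flatten images[obj][img][y][x][c] into flat[slice][y][x*4 + c], where
// slice = obj * imagesPerObject + img. The kernel then needs only three
// index levels.
function flattenImages(images, imagesPerObject, height, width) {
  const flat = [];
  for (let obj = 0; obj < images.length; obj++) {
    for (let img = 0; img < imagesPerObject; img++) {
      const plane = [];
      for (let y = 0; y < height; y++) {
        const row = new Float32Array(width * 4);
        for (let x = 0; x < width; x++) {
          for (let c = 0; c < 4; c++) {
            row[x * 4 + c] = images[obj][img][y][x][c];
          }
        }
        plane.push(row);
      }
      flat.push(plane);
    }
  }
  return flat;
}

// Inside a kernel, the R channel of image (2, 13) would then be read as
// something like:
//   flat[2 * imagesPerObject + 13][this.thread.y][this.thread.x * 4]
```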

Other Comments

andrewbrg commented 3 years ago

Any help, please? Do you think this will be addressed?

FurryR commented 4 days ago

Minimum reproducible code example:

const GPU = await import('https://cdn.jsdelivr.net/npm/gpu.js@2.16.0/+esm')
const ctx = new GPU.default.GPU()
ctx.createKernel(function (a) {return a[1-1];}).setOutput([1])([1])

Caused by

In compiled code...

float getMemoryOptimized32(sampler2D tex, ivec2 texSize, ivec3 texDim, int z, int y, int x) {
  // ...
}

getMemoryOptimized32(user_GPUu_uargument_0, user_GPUu_uargument_0Size, user_GPUu_uargument_0Dim, 0, 0, /* unmatched float, expected int */ (1.0-1.0));

Workaround

Use:

ctx.createKernel(function (a) {return a[int(1-1)];}).setDebug(true).setOutput([1])([1]); // Only works in GPU mode
ctx.createKernel(function (a) {return a[(1-1)%Infinity];}).setDebug(true).setOutput([1])([1]); // Works in both CPU and GPU mode
ctx.createKernel(function (a) {return a[(1-1)%(1/0)];}).setDebug(true).setOutput([1])([1]); // Same as #2; may give an undefined result on some legacy devices (usually fine, because 1/0 produces Infinity on most devices)
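The `% Infinity` trick works because, in JavaScript semantics, a finite number modulo Infinity is the number itself, so the index expression keeps its value while steering the transpiler toward integer handling. A quick plain-JS check (no gpu.js needed):

```javascript
// For any finite x, x % Infinity === x, so wrapping an index expression
// in `% Infinity` is value-preserving in JavaScript.
const idx = 1 - 1;
console.log(idx % Infinity);   // 0
console.log(5 % Infinity);     // 5
console.log(5 % (1 / 0));      // 5  (1/0 evaluates to Infinity)
console.log(-3 % Infinity);    // -3 (the sign of the dividend is kept)
```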

How?

By replacing (1.0-1.0) with int(integerCorrectionModulo(float((1.0-1.0)), /* Infinity */ intBitsToFloat(2139095039))), we get an integer with little overhead while keeping compatibility with JavaScript semantics.

Hope that helps.