Roonil / NCS_Spectrum_GLava

GLSL-based audio visualiser from NCS videos on YouTube

Algorithm request in java #1

Closed dhng22 closed 11 months ago

dhng22 commented 11 months ago

Your work is awesome. I'd really love to recreate a similar version in Java, but I don't have any experience in OpenGL, so I'd really appreciate a more detailed, step-by-step explanation of the particles' movement algorithm if possible. Thanks in advance.

Roonil commented 11 months ago

Greetings, and thank you for appreciating my work! I'm planning to add comments to the shaders so that anybody who's not familiar with GLSL can understand and modify the code efficiently. Keep in mind that I'm not an expert in GLSL; I started only recently, so my code might not be perfect, but as far as I could test it, it gives pretty good results. Here's a quick summary of how the particle tracking works:

In the main() function of 1.frag (the first pass of the shader), a grid of particles is initialised using the mod() function. The number of particles has to be <= the screen resolution of the viewport in pixels. I also assume the viewport's width and height are in a 1:1 ratio, though the aspect ratio could be handled to allow arbitrary viewport heights. At this point, you should get a square grid containing the number of particles you specified along the x and y axes respectively.
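Since you asked about Java: here is a minimal sketch of the same grid-initialisation idea on the CPU. In the shader this happens implicitly per fragment; here we just enumerate evenly spaced cell centres. `makeGrid` and `numParticles` are illustrative names, not from the shader.

```java
public class ParticleGrid {
    // Build an N x N grid of particle base positions in [0,1]^2,
    // mimicking what the shader derives per-fragment with mod().
    public static float[][] makeGrid(int numParticles) {
        float[][] pts = new float[numParticles * numParticles][2];
        for (int y = 0; y < numParticles; y++) {
            for (int x = 0; x < numParticles; x++) {
                int i = y * numParticles + x;
                // evenly spaced cell centres across the unit square
                pts[i][0] = (x + 0.5f) / numParticles;
                pts[i][1] = (y + 0.5f) / numParticles;
            }
        }
        return pts;
    }
}
```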

The shader then maintains a vec3 particleCoords to keep track of each particle's current coordinates. It is 3D to take the z-axis displacement into account; the grid initially has z = 0. A vec3 old is used to apply the noise function to the field (swizzling might work just as well here, but I wasn't sure whether it would use the original values or retain old ones). The fbm3 function takes the particle coordinates (4D, because the Perlin noise used is a 4D noise function, as in the original), the respective maximum displacement, and the flow value. The coordinates passed for x, y and z are each slightly offset, because otherwise you'd get the exact same displacement in all three directions. fbm3 calls octaveNoise() on the particle coordinates and the flow, multiplies the result by the maximum displacement, and returns it. octaveNoise() in turn calls cnoise() to evaluate the noise at the particle coordinates and also multiplies it by the audio level, so the noise field "reacts" to the audio. At this point, particleCoords holds the new, displaced coordinates. cnoise() is a 4D Perlin noise function; you can easily find one online, so you don't have to write it from scratch.
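A Java sketch of the displacement step, under stated assumptions: `fakeNoise` is a cheap stand-in for the real 4D Perlin `cnoise()` (swap in any proper noise implementation), and the per-axis offsets are arbitrary values I chose to illustrate why each axis must sample the field at different coordinates.

```java
public class NoiseDisplace {
    // Stand-in for the 4D Perlin cnoise(); returns a value in [-1, 1].
    static float fakeNoise(float[] p) {
        return (float) Math.sin(p[0] * 12.9898 + p[1] * 78.233
                              + p[2] * 37.719 + p[3] * 4.581);
    }

    // octaveNoise-style call: noise scaled by the audio level,
    // so the field "reacts" to the audio (silence = no displacement).
    static float octaveNoise(float[] p, float audio) {
        return fakeNoise(p) * audio;
    }

    // fbm3-style displacement: one noise evaluation per axis, each with a
    // different offset so x, y and z do not all move identically.
    public static float[] displace(float[] pos, float time,
                                   float maxDisp, float audio) {
        float[] out = new float[3];
        float[] offs = {0f, 13.7f, 91.3f};  // arbitrary per-axis offsets
        for (int axis = 0; axis < 3; axis++) {
            float[] p4 = {pos[0] + offs[axis], pos[1] + offs[axis],
                          pos[2], time};
            out[axis] = pos[axis] + octaveNoise(p4, audio) * maxDisp;
        }
        return out;
    }
}
```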

We then have blurSize (for anti-aliasing the circle's edges) and feather (which affects the shape of the circle). We also add audio reaction to the radius, so the circle's size changes as well. The min() function is applied to the radius to keep the circle within the bounds of the viewport.

We now calculate the distance of the displaced particle's coordinates from the centre of the circle (at the centre of the screen) and store it in distanceVectorFromCenter. If this distance is <= radius, we have to apply the spherical field to the particle. We do this by multiplying the radius with the unit vector pointing from the circle's centre towards the particle, so the particles form a ring around the centre with the desired radius. These coordinates are stored in newPos.
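The spherical-field step in Java, as a sketch: a point inside the sphere (centred at the origin here, standing in for the screen centre) is pushed outward onto the ring via radius * normalize(pos). `applyField` is an illustrative name.

```java
public class SphericalField {
    // If a displaced particle lies inside the sphere of the given radius,
    // project it onto the ring: newPos = radius * normalize(pos).
    // Particles outside the radius (and the degenerate centre point)
    // are left untouched.
    public static float[] applyField(float[] pos, float radius) {
        float len = (float) Math.sqrt(pos[0] * pos[0]
                                    + pos[1] * pos[1]
                                    + pos[2] * pos[2]);
        if (len > radius || len == 0f) return pos;
        float s = radius / len;  // scale the unit vector up to the radius
        return new float[]{pos[0] * s, pos[1] * s, pos[2] * s};
    }
}
```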

To apply feathering, we use the smoothstep(minVal, maxVal, x) function, which gives continuous values ranging from 0 to 1 when minVal <= x <= maxVal; it gives 0 when x < minVal and 1 when x > maxVal. We calculate the distance between the coordinates on the ring and the coordinates before the spherical field was applied, and store it in diff. This value ranges from 0 (no displacement from particle position to ring position, meaning the particle was already on the ring) to radius (maximum displacement due to the spherical field, meaning the particle was at the centre of the sphere). We multiply diff by the smoothstep result, which applies the anti-aliasing effect and also feathers the radius. clamp() is used to restrain the smoothstepped values between blurSize and blurSize + 1 to prevent the edges from getting cut off. For a simpler version without anti-aliasing, I had simply used diff = smoothstep(0., feather * radius, diff); and that applies the feather as well.

After these calculations, the particle coordinates are updated with the correct feathered positions by adding diff times the unit vector from the centre of the screen.
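Here is the simpler, non-anti-aliased variant (diff = smoothstep(0., feather * radius, diff)) sketched in Java, including a GLSL-style smoothstep since Java has no built-in. The combination of feathering and the coordinate update is expressed as moving the particle a fraction of the way toward its ring position; `feathered` is an illustrative name, and the full shader additionally applies clamp() and blurSize as described above.

```java
public class Feather {
    // GLSL-style smoothstep(edge0, edge1, x): Hermite interpolation,
    // 0 below edge0, 1 above edge1.
    static float smoothstep(float e0, float e1, float x) {
        float t = Math.max(0f, Math.min(1f, (x - e0) / (e1 - e0)));
        return t * t * (3f - 2f * t);
    }

    // diff = how far the spherical field would push the particle;
    // feather it, then move the particle that fraction of the way
    // toward the ring position.
    public static float[] feathered(float[] pos, float[] ringPos,
                                    float feather, float radius) {
        float dx = ringPos[0] - pos[0];
        float dy = ringPos[1] - pos[1];
        float dz = ringPos[2] - pos[2];
        float diff = (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
        float t = smoothstep(0f, feather * radius, diff);
        return new float[]{pos[0] + dx * t, pos[1] + dy * t, pos[2] + dz * t};
    }
}
```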

At this point, all of the calculations have been done. To store the values I've used image textures, because they allow arbitrary reads from and writes to any coordinates. As I understand it, in fragment shaders the main() function is called once per fragment (in our case, once per pixel), and the output "colour" of the fragment can NOT be set for any coordinate other than the current one. Using imageLoad() and imageStore() requires OpenGL >= 4.2, though some extensions can enable them as well; they may be GPU-specific, I'm not really sure.

Next comes the loop over particleSize, which ensures that every pixel around the centre of the particle gets updated correctly. The larger particleSize is, the more spread out the particle becomes, and the more writes are issued to update the positions of the "pixels" around it. The distance value arranges a grid of squares around the particle centre, and smoothstep is applied so that we actually get circles instead of squares as our particles. finalCoords stores the coordinates of the pixel around the particle centre; we add centerCoords to it and divide by 2 because we're scaling down the rendered scene (the grid would otherwise span the entire viewport). An offset is also added to finalCoords so that no values get mapped to x = 0 or y = 0, as that would produce lines along those axes.
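The splat loop can be sketched in Java like this, with the scene scaling and axis offset omitted for clarity. A plain `int[][]` stands in for the image texture; writing 1 stands in for the imageStore() marker; `splat` is an illustrative name.

```java
public class Splat {
    // Write a particle as a small disc of pixels into an integer "image",
    // mirroring the particleSize loop that issues several imageStore() calls.
    public static void splat(int[][] image, int cx, int cy, int particleSize) {
        for (int dy = -particleSize; dy <= particleSize; dy++) {
            for (int dx = -particleSize; dx <= particleSize; dx++) {
                // keep only pixels inside the disc, so we draw circles
                // rather than squares (the shader uses smoothstep for this)
                if (dx * dx + dy * dy > particleSize * particleSize) continue;
                int x = cx + dx, y = cy + dy;
                if (x < 0 || y < 0 || y >= image.length
                        || x >= image[0].length) continue;  // stay in bounds
                image[y][x] = 1;  // "a particle touched this pixel" marker
            }
        }
    }
}
```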

Finally, the particle positions are actually stored. imageStore() takes the image texture, the final coordinates of the particle, and a vec4 consisting of 1s and particleSize (I included particleSize so I wouldn't have to re-declare it in the next stage). This means that wherever the image holds a value other than vec4(0), there is a particle at that point.

This approach alone, however, is not enough, because many particles can land on the same coordinates. Every time we store the same value, any previous information about existing particles at that pixel in the image is lost. The result would be a flat, uniformly shaded circle, and we wouldn't be able to see any internal displacements.

We could query the existing information in the image using imageLoad(), which takes the image and coordinates and returns the value stored at that coordinate; then we could simply add 1 to that value and call imageStore() to pass it on to the next stage, as we did with particleSize. Implemented that way, however, it introduces texture flickering: imageLoad() and imageStore() are not run sequentially, their execution happens in parallel across multiple fragments, so a load followed by a store does not guarantee that we're reading and updating the same values. To tackle this, imageAtomicAdd() is used on a separate depthImage texture. Atomic operations complete in one step per fragment, so they're suitable for adding to an existing value. imageAtomicAdd() takes an image texture, the coordinates to add at, and the value to add; it adds to the value already present at that coordinate, and the old value is also returned. We could have used the depthImage texture alone to detect whether a particle is present, but atomic image operations only allow integer values to be stored, so I kept the original image texture and imageStore() to pass particleSize along as well. This completes 1.frag, the first pass.
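The read-modify-write race and its atomic fix translate directly to Java: `AtomicIntegerArray.getAndAdd` plays the role of imageAtomicAdd() on the depthImage. `DepthImage` and `addParticle` are illustrative names for this sketch, not the shader's identifiers.

```java
import java.util.concurrent.atomic.AtomicIntegerArray;

public class DepthImage {
    // Per-pixel particle counter mimicking imageAtomicAdd() on depthImage.
    // Many shader invocations (threads) may hit the same pixel, so the
    // increment must be atomic or counts get lost, causing flicker.
    final AtomicIntegerArray depth;
    final int width;

    public DepthImage(int width, int height) {
        this.width = width;
        this.depth = new AtomicIntegerArray(width * height);
    }

    // Atomically add 1 at (x, y) and, like imageAtomicAdd(),
    // return the value that was there before.
    public int addParticle(int x, int y) {
        return depth.getAndAdd(y * width + x, 1);
    }
}
```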

In 2.frag, we query the image with imageLoad(); if the stored value isn't vec4(0), there is a particle there. We check this with the if (img.r != 0) condition, as testing a single component is sufficient in our case. We then set the alpha of this pass's output "colour" to 1, query the depth texture using imageAtomicExchange() to get the corresponding number of particles at that pixel, and apply the colour accordingly; more particles means stronger colour contributions, giving an additive blend effect. imageAtomicExchange() swaps the value in the image texture with the value we supply, so we get depth = depthValue from the image, and 0 is now stored in the texture at that location. The 0 is necessary because we also have to clear the texture, a bit like erasing our footprints in the snow as we walk backwards :), otherwise the viewport would quickly get washed out with particles. Similarly, we clear the values in the first image by storing vec4(0) using imageStore().
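A sketch of this second-pass step in Java: `getAndSet(index, 0)` mirrors imageAtomicExchange() writing 0 back, so the read doubles as the per-frame clear. The 0.25-per-particle brightness curve is a hypothetical stand-in for however the shader maps counts to colour, and `shade` is an illustrative name.

```java
import java.util.concurrent.atomic.AtomicIntegerArray;

public class SecondPass {
    // Per pixel in pass 2: read the depth count with an atomic exchange
    // that simultaneously stores 0 (like imageAtomicExchange(..., 0)),
    // clearing the texture for the next frame, then derive a brightness
    // from the count: more overlapping particles means a stronger,
    // additive-blend-style contribution.
    public static float shade(AtomicIntegerArray depth, int index) {
        int count = depth.getAndSet(index, 0);
        if (count == 0) return 0f;            // no particle: transparent
        return Math.min(1f, 0.25f * count);   // hypothetical blend curve
    }
}
```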

In the else block, I pass along the opacity value because in the next stage (post-processing) I wanted to avoid star-shaped particles caused by blurring, so I use it to skip anything redundant in the next pass. We could just write vec4(0) there to indicate that nothing is present. The particle tracking comprises the first two passes and is complete at this stage.

I hope this clarifies the techniques (though perhaps suboptimal xD) that I used.

Roonil commented 11 months ago

Closing the issue for now