ythy / blog


WebGL #103

Open ythy opened 6 years ago

ythy commented 6 years ago

reference

WebGL era

WebGL is based on OpenGL for Embedded Systems (OpenGL ES), a low-level procedural API for accessing 3D hardware. WebGL APIs get almost direct access to the underlying OpenGL hardware driver, without the penalty of code first being translated through the browser support libraries and then the OS's 3D API libraries.

Hands-on WebGL

Two seemingly identical blue triangles appear on the page. However, not all triangles are created equal. Both triangles are drawn with the HTML5 canvas. But the one on the left is 2D and is drawn in fewer than 10 lines of JavaScript code. The one on the right is a four-sided 3D pyramid object that takes more than 100 lines of JavaScript WebGL code to render.

WebGL draws 2D views

You see a triangle on the right side because of the orientation of the pyramid. You're looking at one blue side of a multicolored pyramid - analogous to looking directly at one side of a building and seeing only a 2D rectangle. This realization reinforces the essence of working with 3D graphics in the browser: The final output is always a 2D view of a 3D scene. Therefore, any static rendering of a 3D scene by WebGL is a 2D image.

Obtaining a 3D WebGL drawing context

function draw3D() {
  var canvas = document.getElementById("shapecanvas2");
  // "webgl" is the standard context name; older browsers used the
  // "experimental-webgl" prefix, so fall back to it
  var glCtx = canvas.getContext("webgl") ||
              canvas.getContext("experimental-webgl");
}

Setting the viewport

To tell WebGL where the rendered output should go, you must set the viewport by specifying, in pixels, the area within the canvas that WebGL can draw to.

// set viewport
glCtx.viewport(0, 0, canvas.width, canvas.height);

The viewport's (0, 0) origin for x and y is the lower-left corner of the canvas.

Describing 3D objects

You must start creating data to feed to the WebGL rendering pipeline. This data must describe the 3D objects that make up the scene. To describe a 3D object for WebGL rendering, you must represent the object by using triangles. WebGL can take the description in the form of a set of discrete triangles or as a strip of triangles with shared vertices. In the pyramid example, the four-sided pyramid is described as a set of four distinct triangles, each specified by its three vertices.

One side's three vertices are (0,1,0) on the y-axis, (0,0,1) on the z-axis, and (1,0,0) on the x-axis. The following vertex array describes the set of triangles that makes up the pyramid:

// Vertex data: four triangles, three (x, y, z) vertices each
vertBuffer = glCtx.createBuffer();
glCtx.bindBuffer(glCtx.ARRAY_BUFFER, vertBuffer);
var verts = [
  // side 1
  0.0, 1.0, 0.0,
  -1.0, 0.0, 0.0,
  0.0, 0.0, 1.0,
  // side 2
  0.0, 1.0, 0.0,
  0.0, 0.0, 1.0,
  1.0, 0.0, 0.0,
  // side 3
  0.0, 1.0, 0.0,
  1.0, 0.0, 0.0,
  0.0, 0.0, -1.0,
  // side 4
  0.0, 1.0, 0.0,
  0.0, 0.0, -1.0,
  -1.0, 0.0, 0.0
];
glCtx.bufferData(glCtx.ARRAY_BUFFER, new Float32Array(verts),
  glCtx.STATIC_DRAW);

Note that the bottom of the pyramid (which is actually a square on the x-z plane) is not included in the verts array. Because the pyramid is rotated around the y-axis, the viewer can never see the bottom. The result is a vertBuffer variable that references a hardware-level buffer that contains the required vertex information. The data in this buffer can be directly and efficiently accessed by other processors in the WebGL rendering pipeline.
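As a quick sanity check on the buffer layout (a standalone sketch, not part of the original listing), you can count the data: four triangles with three vertices each, and three floats per vertex, which also yields the vertex count passed to drawArrays() later.

```javascript
// Same vertex data as above, flattened: 4 triangles * 3 vertices * 3 coords
var verts = [
  0.0, 1.0, 0.0,  -1.0, 0.0, 0.0,   0.0, 0.0,  1.0,  // side 1
  0.0, 1.0, 0.0,   0.0, 0.0, 1.0,   1.0, 0.0,  0.0,  // side 2
  0.0, 1.0, 0.0,   1.0, 0.0, 0.0,   0.0, 0.0, -1.0,  // side 3
  0.0, 1.0, 0.0,   0.0, 0.0, -1.0, -1.0, 0.0,  0.0   // side 4
];
console.log(verts.length);                       // → 36 floats
console.log(verts.length / 3);                   // → 12 vertices
console.log(new Float32Array(verts).byteLength); // → 144 bytes (4 per float)
```

The 12-vertex count is exactly the third argument you will see in the drawArrays() call in the render function.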

Specifying the colors of the pyramid's sides

colorBuffer = glCtx.createBuffer();
glCtx.bindBuffer(glCtx.ARRAY_BUFFER, colorBuffer);
var faceColors = [
  [0.0, 0.0, 1.0, 1.0], // front  (blue)
  [1.0, 1.0, 0.0, 1.0], // right  (yellow)
  [0.0, 1.0, 0.0, 1.0], // back   (green)
  [1.0, 0.0, 0.0, 1.0]  // left   (red)
];
// Repeat each face color once per vertex of that face's triangle
var vertColors = [];
faceColors.forEach(function(color) {
  [0, 1, 2].forEach(function() {
    vertColors = vertColors.concat(color);
  });
});
glCtx.bufferData(glCtx.ARRAY_BUFFER,
  new Float32Array(vertColors), glCtx.STATIC_DRAW);

But WebGL has no notion of the "side" of a pyramid. Instead, it works only with triangles and vertices, so the color data must be associated with a vertex. In the preceding code, an intermediate JavaScript array named faceColors initializes the vertColors array; vertColors is the JavaScript array used in loading the low-level colorBuffer. The faceColors array contains four colors — blue, yellow, green, and red — corresponding to the four sides. These colors are specified in red, green, blue, alpha (RGBA) format.

The vertColors array contains a color for each vertex of every triangle, in the order that corresponds to their appearance in the vertBuffer. Because each of the four triangles has three vertices, the final vertColors array contains 12 colors of four floats each (concat flattens the arrays, so vertColors ends up as a flat array of 48 floats). A nested forEach loop is used to assign the same color to each of the three vertices of each triangle that represents a side of the pyramid.
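The expansion can be verified in isolation with plain JavaScript (a standalone sketch of the same nested forEach, runnable without a WebGL context):

```javascript
var faceColors = [
  [0.0, 0.0, 1.0, 1.0], // front  (blue)
  [1.0, 1.0, 0.0, 1.0], // right  (yellow)
  [0.0, 1.0, 0.0, 1.0], // back   (green)
  [1.0, 0.0, 0.0, 1.0]  // left   (red)
];
var vertColors = [];
faceColors.forEach(function(color) {
  [0, 1, 2].forEach(function() {
    // concat flattens, so vertColors stays a flat array of floats
    vertColors = vertColors.concat(color);
  });
});
console.log(vertColors.length);      // → 48 (12 vertices * 4 RGBA floats)
console.log(vertColors.slice(0, 4)); // → [ 0, 0, 1, 1 ]  blue, repeated 3x
```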

Understanding OpenGL shaders

A question that might naturally come to mind is how specifying a color for the three vertices of a triangle renders the entire triangle in that color. To answer it, you must understand the operation of two programmable components in the WebGL rendering pipeline: the vertex shader and the fragment (pixel) shader. These shaders are compiled into code that executes on the GPU; some modern 3D hardware can execute hundreds of shader operations in parallel for high-performance rendering.

A vertex shader executes once for each specified vertex. It takes input such as the color, location, texture, and other information associated with a vertex, then computes and transforms that data to determine the 2D location on the viewport where the vertex should be rendered, as well as the vertex's color and other attributes.

A fragment shader determines the color and other attributes of each pixel that makes up the triangle between the vertices. You program both the vertex shader and the fragment shader with OpenGL Shading Language (GLSL) via WebGL.

var vertShaderCode = document.getElementById("vertshader").textContent;
var fragShaderCode = document.getElementById("fragshader").textContent;

var fragShader = glCtx.createShader(glCtx.FRAGMENT_SHADER);
glCtx.shaderSource(fragShader, fragShaderCode);
glCtx.compileShader(fragShader);

var vertShader = glCtx.createShader(glCtx.VERTEX_SHADER);
glCtx.shaderSource(vertShader, vertShaderCode);
glCtx.compileShader(vertShader);

// link the compiled vertex and fragment shaders 
shaderProg = glCtx.createProgram();
glCtx.attachShader(shaderProg, vertShader);
glCtx.attachShader(shaderProg, fragShader);
glCtx.linkProgram(shaderProg);

Vertex and fragment shader GLSL code

The vertex shader operates on input data buffers that are prepared earlier in the JavaScript vertBuffer and colorBuffer variables.

attribute vec3 vertPos;
attribute vec4 vertColor;
uniform mat4 mvMatrix;
uniform mat4 pjMatrix;
varying lowp vec4 vColor;
void main(void) {
  gl_Position = pjMatrix * mvMatrix * vec4(vertPos, 1.0);
  vColor = vertColor;
}

The attribute keyword marks per-vertex inputs: vertPos receives a vertex position from the vertBuffer each time the shader executes, and vertColor receives that vertex's color as specified in the colorBuffer you set up earlier. The uniform keyword marks values that are the same for every vertex; they can be changed only by the CPU and never by the rendering GPU. The vColor variable has a varying storage qualifier, which indicates that it is used to pass data from the vertex shader to the fragment shader.

varying lowp vec4 vColor;
void main(void) {
  gl_FragColor = vColor;
}

The fragment shader is trivial. It takes the interpolated vColor value from the vertex shader and uses it as the output.

Model view and projection matrix

The model view matrix combines the transformation of the model (the pyramid in this case) and the view (the "camera" through which you view the scene). Basically, the model view matrix controls where to place the objects in the scene and the viewing camera. This code sets up the model view matrix, in the example, placing the pyramid three units away from the camera:

modelViewMatrix = mat4.create();
mat4.translate(modelViewMatrix, modelViewMatrix, [0, 0, -3]);

You know from vertBuffer setup that the pyramid is two units wide, so the preceding code enables the pyramid to "fill the frame" of the viewport.
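What mat4.translate does here can be sketched without gl-matrix: translating by [0, 0, -3] adds -3 to each vertex's z coordinate, moving the pyramid 3 units in front of the camera. The hand-rolled matrices below are for illustration only, following gl-matrix's column-major layout:

```javascript
// Column-major 4x4 translation matrix: the last column holds (tx, ty, tz)
function translationMatrix(tx, ty, tz) {
  return [1, 0, 0, 0,
          0, 1, 0, 0,
          0, 0, 1, 0,
          tx, ty, tz, 1];
}

// Multiply a column-major matrix by the column vector [x, y, z, 1]
function transformPoint(m, p) {
  var x = p[0], y = p[1], z = p[2];
  return [
    m[0] * x + m[4] * y + m[8]  * z + m[12],
    m[1] * x + m[5] * y + m[9]  * z + m[13],
    m[2] * x + m[6] * y + m[10] * z + m[14]
  ];
}

var mv = translationMatrix(0, 0, -3);
// The pyramid's apex (0, 1, 0) ends up 3 units in front of the camera
console.log(transformPoint(mv, [0, 1, 0])); // → [ 0, 1, -3 ]
```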

The projection matrix controls the transformation of the 3D scene through the camera's view onto the 2D viewport. The projection matrix setup code from the example is:

projectionMatrix = mat4.create();
mat4.perspective(projectionMatrix, Math.PI / 4, canvas.width / canvas.height, 1, 100);

The camera is set to have a Math.PI / 4 (pi radians divided by 4, that is, 180 degrees / 4 = 45 degrees) field of view. The camera can see things as close as 1 unit away and as far as 100 units away, and the width/height aspect ratio matches the display size of the canvas so the perspective view is not distorted.
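The math behind mat4.perspective can be sketched by hand. The following is a simplified re-implementation of the standard perspective projection (matching gl-matrix's column-major layout), for illustration only:

```javascript
// Build a perspective projection matrix from field of view (radians),
// aspect ratio, and near/far clip distances
function perspective(fovy, aspect, near, far) {
  var f = 1.0 / Math.tan(fovy / 2); // "focal length" from the field of view
  var nf = 1 / (near - far);
  return [
    f / aspect, 0, 0, 0,
    0, f, 0, 0,
    0, 0, (far + near) * nf, -1,
    0, 0, 2 * far * near * nf, 0
  ];
}

// 45-degree field of view, square canvas, near = 1, far = 100
var pj = perspective(Math.PI / 4, 1.0, 1, 100);
// f = 1 / tan(22.5 deg) ≈ 2.414: x and y are scaled by roughly 2.4
// before the perspective divide by -z
console.log(pj[5]); // → 2.414213562373095
```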

Rendering the 3D scene in the viewport

function draw(ctx) {
  // clear to white and enable depth testing
  ctx.clearColor(1.0, 1.0, 1.0, 1.0);
  ctx.enable(ctx.DEPTH_TEST);
  ctx.clear(ctx.COLOR_BUFFER_BIT | ctx.DEPTH_BUFFER_BIT);
  ctx.useProgram(shaderProg);
  // bind vertex positions (3 floats per vertex)
  ctx.bindBuffer(ctx.ARRAY_BUFFER, vertBuffer);
  ctx.vertexAttribPointer(shaderVertexPositionAttribute, 3,
     ctx.FLOAT, false, 0, 0);
  // bind vertex colors (4 floats per vertex)
  ctx.bindBuffer(ctx.ARRAY_BUFFER, colorBuffer);
  ctx.vertexAttribPointer(shaderVertexColorAttribute, 4,
     ctx.FLOAT, false, 0, 0);
  ctx.uniformMatrix4fv(shaderProjectionMatrixUniform,
      false, projectionMatrix);
  // rotate the pyramid a further 45 degrees around rotationAxis each call
  mat4.rotate(modelViewMatrix, modelViewMatrix,
      Math.PI / 4, rotationAxis);
  ctx.uniformMatrix4fv(shaderModelViewMatrixUniform, false,
      modelViewMatrix);
  ctx.drawArrays(ctx.TRIANGLES, 0, 12 /* number of vertices */);
}

The shaderProg, consisting of the vertShader and fragShader you compiled and linked earlier, is loaded for GPU execution by the ctx.useProgram() call. Next, the low-level data buffers (vertBuffer and colorBuffer) you set up earlier in JavaScript are bound to the attributes of the GLSL shader program through a series of ctx.bindBuffer() and ctx.vertexAttribPointer() calls. (This workflow is similar in concept to stored procedures in SQL programming, whereby parameters are bound at runtime and the prepared statement can be re-executed.) The ctx.uniformMatrix4fv() calls set up the model view and projection matrices for read-only access by the vertex shader.

Last but not least, the ctx.drawArrays() call renders the set of four triangles — a total of 12 vertices — to the viewport.

ythy commented 6 years ago

My understanding:

projectionMatrix

mat4.perspective(projectionMatrix,
      fieldOfView,
      aspect,
      zNear,
      zFar);

controls the transformation of the 3D scene through the camera's view onto the 2D viewport.

modelViewMatrix

combines the transformation of the model (the pyramid in this case) and the view (the "camera" through which you view the scene).

enableVertexAttribArray

In WebGL, values that apply to a specific vertex are stored in attributes. These are available only to the JavaScript code and the vertex shader. Attributes are referenced by an index number into the list of attributes maintained by the GPU. Because attributes are disabled by default and cannot be used unless enabled, you need to call enableVertexAttribArray() to enable individual attributes so that they can be used.

Parameters: index, the index number that uniquely identifies the vertex attribute to enable. If you know the name of the attribute but not its index, you can get the index by calling getAttribLocation().

uniformMatrix[234]fv()

Specify matrix values for uniform variables.

vertexAttribPointer

Binds the buffer currently bound to gl.ARRAY_BUFFER to a generic vertex attribute of the current vertex buffer object and specifies its layout. In other words, it establishes the relationship between the buffer and the shader.

gl.vertexAttribPointer(index, size, type, normalized, stride, offset);

size: the number of components per vertex attribute; must be 1, 2, 3, or 4. In other words, size determines how many consecutive values each vertex reads from the buffer's array. When drawing triangles with size = 3, each vertex takes 3 values, so one triangle consumes the first 9 values in the buffer, the next triangle the following 9, and so on.
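The effect of size on how a flat buffer is partitioned can be sketched in plain JavaScript (illustrative only; the GPU does this reading internally during rendering):

```javascript
// Partition a flat attribute array into per-vertex groups of `size` values,
// mimicking how the GPU reads the buffer with stride = 0 and offset = 0
function readVertices(flatArray, size) {
  var vertices = [];
  for (var i = 0; i < flatArray.length; i += size) {
    vertices.push(flatArray.slice(i, i + size));
  }
  return vertices;
}

// One triangle's worth of position data: 9 floats, size = 3
var triangle = [0.0, 1.0, 0.0,  -1.0, 0.0, 0.0,  0.0, 0.0, 1.0];
console.log(readVertices(triangle, 3)); // three vertices of 3 components each
```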