ssloy / tinyrenderer

A brief computer graphics / rendering course
https://github.com/ssloy/tinyrenderer/wiki

Problem about Normal Transformation(Lesson 5 and 6) #106

Closed XiaofanLinUS closed 2 years ago

XiaofanLinUS commented 2 years ago

I am implementing normal mapping from texture. I am encountering some issues regarding transforming the normal into another space.

struct Shader : public IShader {
    mat<2,3,float> varying_uv;  // same as above
    mat<4,4,float> uniform_M;   //  Projection*ModelView
    mat<4,4,float> uniform_MIT; // (Projection*ModelView).invert_transpose()

    virtual Vec4f vertex(int iface, int nthvert) {
        varying_uv.set_col(nthvert, model->uv(iface, nthvert));
        Vec4f gl_Vertex = embed<4>(model->vert(iface, nthvert)); // read the vertex from .obj file
        return Viewport*Projection*ModelView*gl_Vertex; // transform it to screen coordinates
    }

    virtual bool fragment(Vec3f bar, TGAColor &color) {
        Vec2f uv = varying_uv*bar;                 // interpolate uv for the current pixel
        Vec3f n = proj<3>(uniform_MIT*embed<4>(model->normal(uv))).normalize();
        Vec3f l = proj<3>(uniform_M  *embed<4>(light_dir        )).normalize();
        float intensity = std::max(0.f, n*l);
        color = model->diffuse(uv)*intensity;      // well duh
        return false;                              // no, we do not discard this pixel
    }
};

    Shader shader;
    shader.uniform_M   =  Projection*ModelView;
    shader.uniform_MIT = (Projection*ModelView).invert_transpose();
    for (int i=0; i<model->nfaces(); i++) {
        Vec4f screen_coords[3];
        for (int j=0; j<3; j++) {
            screen_coords[j] = shader.vertex(i, j);
        }
        triangle(screen_coords, shader, image, zbuffer);
    }

This is the provided code for computing the per-pixel lighting intensity from the normal-map texture.

I inspected the source code of the embed<4> function. The default fill value is 1, but shouldn't a vector, as opposed to a point, be filled with 0 instead, both for the normal and for the 3-dimensional light_dir? I am not using the provided matrix library since I am doing the tutorial in JavaScript, so I am unsure how to implement the embed function. Thanks.

ssloy commented 2 years ago

Good find, it should be zero in the fourth homogeneous coordinate.
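
For anyone porting this to another language: below is a minimal sketch of an embed-style helper with an explicit fill value. It is not the repo's geometry.h verbatim, and the tiny vec stand-in exists only so the snippet compiles on its own. Points get 1 in the extra slot so translations apply to them; directions such as the normal and light_dir get 0 so they pick up only the rotation/scale part.

```cpp
#include <cstddef>

// Tiny stand-in for a fixed-size vector type, just to keep this sketch self-contained.
template<size_t DIM, typename T> struct vec {
    T data[DIM] = {};
    T&       operator[](size_t i)       { return data[i]; }
    const T& operator[](size_t i) const { return data[i]; }
};

// Pad a DIM-vector to LEN components; the new slots receive 'fill'.
// fill = 1 -> point (translations apply), fill = 0 -> direction (normals, light_dir).
template<size_t LEN, size_t DIM, typename T>
vec<LEN,T> embed(const vec<DIM,T>& v, T fill = T(1)) {
    vec<LEN,T> ret;
    for (size_t i = 0; i < LEN; i++)
        ret[i] = (i < DIM ? v[i] : fill);
    return ret;
}

// usage (hypothetical):
//   embed<4>(normal, 0.f);  // w = 0, a direction
//   embed<4>(vertex);       // w = 1, a point
```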

XiaofanLinUS commented 2 years ago

Thanks. That resolves my concern.

Another thing, though: is the matrix transformation necessary at all? Both the normal and light_dir are already in world space, where the model matrix is the identity, so I don't see the need for it. (In my implementation the intensity also comes out a bit dim after the transformation, but maybe I have a bug somewhere else.) Other resources I've looked at only transform the normal when the model matrix is not the identity, and they don't fold the perspective projection into that matrix. (I sketch the standard derivation I've seen at the end of this comment.)

One more question: does the proj<3> function simply drop the w coordinate to convert a vec4 to a vec3? I just want to confirm the implementation. That's pretty much it, thanks!
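
For reference, the standard argument I've seen for why normals get the inverse transpose rather than the vertex matrix (general reasoning, not something specific to this repo): a normal is defined by being perpendicular to the surface tangents, and that perpendicularity has to survive the transform.

$$
n^\top t = 0,\quad t' = M t,\quad n' = (M^{-1})^\top n
\;\Longrightarrow\;
(n')^\top t' = n^\top M^{-1} M\, t = n^\top t = 0 .
$$

When $M$ is the identity (or a pure rotation), $(M^{-1})^\top = M$, which is presumably why some tutorials skip the step in that case.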

ssloy commented 2 years ago

Yes, in my implementation proj<3> simply drops the 4th coordinate. You do not always have to transform normals and lights into normalized device coordinates (e.g. for static scenes), but sometimes you don't have a choice. Choose whatever space is simpler in your case.
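
A matching sketch of a proj-style helper, using the same vec stand-in as the embed sketch above (again, treat the exact signature as an assumption rather than the repo's geometry.h verbatim):

```cpp
// Keep only the first LEN components (e.g. proj<3> on a 4D vector drops w).
// This is a plain truncation: no division by w is performed, which is exactly
// what we want for directions embedded with w = 0.
template<size_t LEN, size_t DIM, typename T>
vec<LEN,T> proj(const vec<DIM,T>& v) {
    vec<LEN,T> ret;
    for (size_t i = 0; i < LEN; i++)
        ret[i] = v[i];
    return ret;
}
```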

XiaofanLinUS commented 2 years ago

OK, my latest attempt to transform the normal according to your code failed. Do you have any idea how to fix it? I can move on without it, but it still bugs me.

[attached image: render showing the artifact]

ssloy commented 2 years ago

Wow, that is a nice bug. From visual inspection alone I have no idea...

XiaofanLinUS commented 2 years ago

It turns out I was using light_dir as a global variable without noticing it. The bug is fixed now.
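
For anyone who runs into the same thing: one way to make that mistake harder is to transform the light direction once during shader setup and store only the transformed copy as a member, so the fragment shader never touches the global. A sketch under that assumption (uniform_l is a name invented here, the embed call assumes a fill argument as in the sketch earlier in this thread, and it still relies on the repo's IShader, mat, Vec3f, and model types):

```cpp
struct Shader : public IShader {
    mat<2,3,float> varying_uv;
    mat<4,4,float> uniform_M;   // Projection*ModelView
    mat<4,4,float> uniform_MIT; // (Projection*ModelView).invert_transpose()
    Vec3f uniform_l;            // light direction, already transformed and normalized

    virtual Vec4f vertex(int iface, int nthvert) {
        varying_uv.set_col(nthvert, model->uv(iface, nthvert));
        Vec4f gl_Vertex = embed<4>(model->vert(iface, nthvert)); // point: w defaults to 1
        return Viewport*Projection*ModelView*gl_Vertex;
    }

    virtual bool fragment(Vec3f bar, TGAColor &color) {
        Vec2f uv = varying_uv*bar;
        Vec3f n = proj<3>(uniform_MIT*embed<4>(model->normal(uv), 0.f)).normalize();
        float intensity = std::max(0.f, n*uniform_l); // no reference to the global light_dir
        color = model->diffuse(uv)*intensity;
        return false;
    }
};

// setup, done once per frame:
// shader.uniform_l = proj<3>(shader.uniform_M*embed<4>(light_dir, 0.f)).normalize();
```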