dilevin / computer-graphics-shader-pipeline

Computer Graphics Assignment about the Shader Pipeline

Inputs in lit.fs #45

Open NPTP opened 4 years ago

NPTP commented 4 years ago

Hi, in lit.fs we have the following inputs:

// Inputs:
in vec3 sphere_fs_in;
in vec3 normal_fs_in;
in vec4 pos_fs_in; 
in vec4 view_pos_fs_in;

I'm having some trouble debugging issues visually here: are these inputs already transformed into projection space, or are they still in model, view, or some other space? It's unclear whether I'm making errors in how I use them or errors in my assumptions about them.

Also, I understand pos_fs_in is the point in 3D space in question, using homogeneous coordinates, and normal_fs_in is the normal at this point, while view_pos_fs_in is the point of the camera "eye". What does sphere_fs_in represent in this context?

Thank you!

abhimadan commented 4 years ago

These are the values you output in snap_to_sphere.tes - take a look at the comments in that file to understand the coordinate spaces these values should be in.

NPTP commented 4 years ago

Thanks! This raises a new confusion, though. We have these two outputs defined in snap_to_sphere.tes:

// projected, view, and model transformed 3D position
out vec4 pos_fs_in;

// view and model transformed 3D position
out vec4 view_pos_fs_in;
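
In other words, I'm reading these two outputs as being computed roughly like this (just a sketch of my interpretation; p_model, model, view, and proj are names I'm assuming and may not match the actual file):

// Sketch only: p_model is the snapped model-space position, and model, view,
// and proj are assumed names for the matrices involved.
view_pos_fs_in = view * model * vec4(p_model, 1.0); // model -> world -> view
pos_fs_in = proj * view_pos_fs_in;                  // view -> clip (projected)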

So it looks like view_pos_fs_in is the same point as pos_fs_in, only with one fewer transformation applied (the projection), so both are point locations on the shapes themselves, just at different stages of transformation. Is this a correct interpretation? If so, how do we get the location of the camera eye for lit.fs when calling blinn_phong, which requires a direction from the point on the shape to the eye?

abhimadan commented 4 years ago

View space is defined such that the eye is at the origin.
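
So in lit.fs, a direction to the eye can be computed directly from the view-space position, roughly like this (a sketch, not necessarily the exact code the assignment expects):

// The eye sits at the origin of view space, so the direction from the shaded
// point toward the eye is just the negated view-space position (w should be 1
// after the affine view and model transforms).
vec3 to_eye = normalize(-view_pos_fs_in.xyz);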

NPTP commented 4 years ago

Thanks again! I've got things looking just about right now, but only from a certain angle. Let's call the transformation used to rotate the moon around "M". I've used the transpose of the inverse of M to get the right appearance of the moon's normals as it moves. However, although they appear correct from the original viewing angle, if I look at the moon and planet from the top or bottom and set the color to 0.5 + 0.5*n to visualize the normals, I can see the normals sweeping across the surface of the moon incorrectly (the lighting does not match the planet's when using Blinn-Phong and seems to rotate along with the moon's own motion). How are we supposed to transform the rotating moon's normals to get this to work correctly? I have been puzzling over this for hours, thinking I had solved it, with no luck.

Another thing: should the normals/lighting change when we move our view around, or should they stay consistent with the view? The normals and lighting look correct (i.e. similar to the sample shown under lit.fs on the assignment page) from the starting view when shaderpipeline is opened, but they start to fall apart from other angles. I'm not sure whether we're only concerned with this one view space, or whether things need to be projected properly so that changing the viewing angle with the mouse maintains the perceived position of the lighting while changing our view of it.

(Note I am using a directional light by simply specifying the light source as a changing direction and not a moving point in space)

abhimadan commented 4 years ago

You need to transform the normals to view space to get them to look right and not move around, and it sounds like you're just transforming from model space to world space.
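
Concretely, something along these lines (a sketch using the transpose-inverse approach you mentioned; view and model are assumed matrix names):

// Sketch only: compose model and view before building the normal matrix so the
// normal lands in view space rather than world space. inverse() on a mat4
// requires GLSL 1.40 or later.
mat3 normal_matrix = mat3(transpose(inverse(view * model)));
vec3 n_view = normalize(normal_matrix * normal_fs_in);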

How are you moving the view? The view uniform is constructed in C++ with a specific camera location and orientation, so if you move the view around in shaders without updating the C++ code, things won't look right. At any rate, the view matrix is meant to be static for this assignment so you don't need to worry about changing the camera position or orientation too much.

NPTP commented 4 years ago

Everything is being transformed from model space to view space, but not into world space, at least not inside the main() function of lit.fs.

The lighting looks perfect from the angle at which the compiled GL executable starts, but it's the movement that made me suspicious. I'm not moving the view uniform itself; what I mean by "moving" is using the mouse to click and drag inside the GL window to get a different angle. The lighting doesn't reflect an appropriate angle relative to that kind of movement, but I think you've answered my question by explaining where the camera location is defined and that it does not change. That probably means that moving around in the GL window with the mouse is independent of where the render "thinks" it is actually being seen from, and thus the lighting does not look correct except when we are perfectly aligned with the view uniform in our GL window.

Please correct me if I'm wrong, but hopefully that's on the right track! EDIT: I realize how this sounds now. There is no transform into world space, so of course the lighting won't respond to a change in world space. I couldn't figure out how to translate all of the lighting into world space, however.

And thank you for your help right into the night! It means a lot, truly appreciated.

abhimadan commented 4 years ago

Sorry for the late response - clicking and dragging actually modifies the view matrix (see main.cpp), so the normals are changing due to this change in the view matrix. Keep in mind, though, that model transforms coordinates from model space to world space, and view transforms coordinates from world space to view space, so to go from model space to view space you need to compose the two transformations. If the moon looks incorrect, that's because you're using just view and not model as well. As a result, the normals aren't actually rotating along with the moon, so when you visualize the normals relative to the moon's motion, it looks like they're rotating in the opposite direction.

Again, apologies for the late response.