fangchuan opened 3 years ago
(clicked wrong button too early...)
Typically OpenGL stores depth as the perpendicular distance to the image plane (scaled to lie between the near and far clip planes), whereas `t` from Embree is the Euclidean distance to the camera position. We once had some utility code in OSPRay to convert between those, have a look here: https://github.com/ospray/ospray/blob/release-1.1.x/modules/opengl/util.cpp#L104-L108
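For reference, the near/far remapping part of such a conversion could be sketched like this (a minimal illustration, not the actual OSPRay code; `euclideanToGLDepth` is a hypothetical name, and it assumes the input is already the perpendicular distance to the image plane under a standard perspective projection):

```cpp
#include <cmath>

// Hypothetical helper: map a perpendicular (planar) eye-space
// distance d_planar to the [0,1] value an OpenGL depth buffer would
// store, given near/far clip planes n and f. This is the standard
// nonlinear perspective depth mapping; it is a sketch for comparison,
// not Embree or OSPRay API.
float euclideanToGLDepth(float d_planar, float n, float f) {
  // NDC depth in [-1,1]: -1 at the near plane, +1 at the far plane
  float z_ndc = (f + n) / (f - n) - 2.0f * f * n / ((f - n) * d_planar);
  // window-space depth in [0,1], matching glReadPixels(GL_DEPTH_COMPONENT)
  return 0.5f * z_ndc + 0.5f;
}
```

Note that the Euclidean hit distance still has to be projected onto the view axis first; feeding it in directly is exactly the mismatch described above.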
Also, do NOT multiply by length(dir); this essentially reverts the removal of normalize(dir) again.
Thank you for your advice. I tried it, but it gives me the same result as #313. I have made some revisions in the code:
I take the value of ray.tfar and multiply it by the norm of the direction vector; I thought the result represents the depth value, is that right? Then I apply a scale factor to the depth_value.
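For what it's worth, the difference between the two quantities discussed here can be sketched as follows (a minimal illustration with a hypothetical `Vec3` type; `dir` is the possibly unnormalized ray direction, `vz` the unit-length camera forward axis, `tfar` the ray parameter at the hit):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
float length(Vec3 v) { return std::sqrt(dot(v, v)); }

// tfar * |dir| is the Euclidean distance from the camera position to
// the hit point -- this is what multiplying by the norm gives you.
float euclideanDistance(float tfar, Vec3 dir) {
  return tfar * length(dir);
}

// Projecting onto the unit view axis instead yields the perpendicular
// (planar) distance, which is what an OpenGL depth buffer encodes
// (before the nonlinear near/far mapping).
float planarDepth(float tfar, Vec3 dir, Vec3 vz) {
  return tfar * dot(dir, vz);
}
```

The two only agree when the ray direction is parallel to the view axis, which is why the rendered depths diverge toward the image borders.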
By the way, could you give an explanation of the ISPCCamera in the code? https://github.com/embree/embree/blob/ae029e2ff83bebbbe8742c88aba5b0521aba1a23/tutorials/common/tutorial/camera.h#L68-L77 Why are these transforms necessary? I can understand the camera2world() function, but I fail to get the point of how vx, vy and vz are constructed, especially vz. I ask this question because the image rendered by Embree is not exactly the same as the image rendered by OpenGL: judging from the images, the camera poses seem to be slightly different, but the camera pose passed in is in fact identical!
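For comparison, a generic look-at basis is usually built like this (a sketch only, NOT the exact Embree code from the link above; Embree additionally scales vx/vy by the field of view and image resolution so that pixel coordinates map directly to ray directions):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 cross(Vec3 a, Vec3 b) {
  return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
Vec3 normalize(Vec3 v) {
  float l = std::sqrt(dot(v, v));
  return {v.x / l, v.y / l, v.z / l};
}

// Generic right-handed look-at basis: vz points from the eye toward
// the target, vx and vy span the image plane. Any difference in how
// these axes are oriented (or scaled) between two renderers shows up
// as exactly the kind of slight pose mismatch described above.
void lookAtBasis(Vec3 from, Vec3 to, Vec3 up, Vec3& vx, Vec3& vy, Vec3& vz) {
  vz = normalize(sub(to, from)); // view direction
  vx = normalize(cross(up, vz)); // image-plane x axis
  vy = cross(vz, vx);            // image-plane y axis
}
```

Note also that OpenGL conventionally looks down the negative z axis, so a basis with vz = to - from is already mirrored relative to an OpenGL view matrix built from the same pose.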
@svenwoop @cbenthin @atafra @johguenther I'm looking forward to your help, thanks a lot!