Closed: yifanlu0227 closed this issue 2 months ago
My guess is that the input textures are not oriented correctly. As noted in the documentation, nvdiffrast follows OpenGL's convention on texture orientation when setting the cube map contents, see here.
Oh! So the faces are not images looking from the inside out, but follow the given convention?
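For concreteness, the OpenGL convention the reply refers to can be sketched in plain Python. The per-face (sc, tc, ma) values come from the cube-map selection table in the OpenGL specification; the function name itself is just illustrative, not an nvdiffrast API:

```python
# OpenGL cube-map lookup convention. Face order matches
# GL_TEXTURE_CUBE_MAP_POSITIVE_X .. NEGATIVE_Z, i.e. +X, -X, +Y, -Y, +Z, -Z
# (Right, Left, Top, Bottom, Back, Front).
def cube_face_st(d):
    rx, ry, rz = d
    ax, ay, az = abs(rx), abs(ry), abs(rz)
    if ax >= ay and ax >= az:      # major axis X
        face, sc, tc, ma = (0, -rz, -ry, ax) if rx > 0 else (1, rz, -ry, ax)
    elif ay >= az:                 # major axis Y
        face, sc, tc, ma = (2, rx, rz, ay) if ry > 0 else (3, rx, -rz, ay)
    else:                          # major axis Z
        face, sc, tc, ma = (4, rx, -ry, az) if rz > 0 else (5, -rx, -ry, az)
    # (s, t) in [0, 1]; t runs top-to-bottom within the stored face image.
    s = 0.5 * (sc / ma + 1.0)
    t = 0.5 * (tc / ma + 1.0)
    return face, s, t
```

With this, a direction of (0, 0, -1) maps to face index 5 (the -Z / FRONT slot) at the face center (s, t) = (0.5, 0.5) — so stored face images must follow this orientation, not the "photo taken from inside" orientation.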
Hi! Thanks for your repo!
I am creating a cube map from my images and use
dr.texture(cubemap[None,...], ray_dir[None, ...], filter_mode='linear', boundary_mode='cube')
to sample it. I stack 6 images in the order (Right, Left, Top, Bottom, Back, Front) to construct the cube map, which has shape (6, 256, 256, 3). Then I create the camera ray directions following the OpenGL coordinate system (-Z for FRONT, +X for RIGHT, +Y for UP) and sample the cube map with them.
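For reference, a minimal numpy-only sketch of the tensor shapes involved, assuming the stacking described above (per the nvdiffrast docs, boundary_mode='cube' expects the texture as [minibatch, 6, H, W, C] and the lookup directions as [minibatch, height, width, 3]; the actual call takes torch tensors on the GPU):

```python
import numpy as np

# Six 256x256 RGB faces stacked in (+X, -X, +Y, -Y, +Z, -Z) order,
# i.e. (Right, Left, Top, Bottom, Back, Front) as described above.
faces = [np.zeros((256, 256, 3), dtype=np.float32) for _ in range(6)]
cubemap = np.stack(faces, axis=0)        # (6, 256, 256, 3)

# Per-pixel ray directions for an H x W image; dummy values here.
H, W = 64, 64
ray_dir = np.zeros((H, W, 3), dtype=np.float32)
ray_dir[..., 2] = -1.0                   # every ray looks toward -Z (FRONT)

# cubemap[None, ...] -> (1, 6, 256, 256, 3); ray_dir[None, ...] -> (1, H, W, 3)
print(cubemap[None, ...].shape, ray_dir[None, ...].shape)
```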
I found that this samples the correct face. For example, a ray_dir pointing to [0, 0, -1] fetches a pixel from the FRONT map. But if I sample with the direction (-0.5, -1, 0).normalize(), I actually get the color that I think should come from (+0.5, -1, 0).normalize(), that is, the pixel color on the right part of the FRONT texture. Here are the results:
The third ray direction points to the front-right and should sample a brighter pixel from the FRONT texture map, but it is dark. What's wrong with my operation?
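One way to sanity-check the symptom: under the OpenGL convention, (-0.5, -1, 0) has -Y as its major axis, so it actually reads the Bottom face, and its horizontal coordinate there is driven by sc = rx. A tiny sketch, with the formula taken from the OpenGL spec's cube-map selection table (the helper name is illustrative; this assumes nothing about the repo's code):

```python
# (s, t) lookup restricted to the -Y (Bottom) face, per the OpenGL
# cube-map selection rules: sc = rx, tc = -rz, ma = |ry|.
def bottom_face_st(rx, ry, rz):
    ma = abs(ry)
    s = 0.5 * (rx / ma + 1.0)    # horizontal texel coordinate, 0 = left edge
    t = 0.5 * (-rz / ma + 1.0)   # vertical texel coordinate, 0 = top row
    return s, t

print(bottom_face_st(-0.5, -1.0, 0.0))  # (0.25, 0.5): left quarter of Bottom
print(bottom_face_st(+0.5, -1.0, 0.0))  # (0.75, 0.5): its horizontal mirror
```

The two directions land at horizontally mirrored positions on the same face, so each reading the other's color is exactly what a face image stored flipped left-right relative to the GL convention would produce — consistent with the answer that the input textures are not oriented correctly.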