Direct support for computing light maps is not planned in the near future (there are many other, higher-priority features on the wish list...). What would be needed? Ideally, the input would be the geometry image of the mesh to compute the light map for (i.e., positions and normals obtained by rasterizing the mesh using its UVs). A "camera" could then use the geometry image to generate rays appropriately, which is straightforward: ray.org is the position from the map, and ray.dir is sampled on the cosine-weighted hemisphere around the normal.
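The ray setup described above could be sketched roughly as follows. This is a minimal, hypothetical example, not OSPRay code: lightMapRay, the vector helpers, and the basis construction are all made up for illustration.

```cpp
#include <cassert>
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
  return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static Vec3 normalize(Vec3 v) {
  float l = std::sqrt(dot(v, v));
  return {v.x / l, v.y / l, v.z / l};
}

struct Ray { Vec3 org, dir; };

// For one texel of the geometry image: org is the rasterized surface
// position, dir a cosine-weighted sample of the hemisphere around the
// rasterized normal. r1, r2 are uniform random numbers in [0, 1).
Ray lightMapRay(Vec3 position, Vec3 normal, float r1, float r2) {
  const float kPi = 3.14159265358979f;
  float phi = 2.0f * kPi * r1;
  float cosTheta = std::sqrt(1.0f - r2);  // pdf proportional to cos(theta)
  float sinTheta = std::sqrt(r2);
  Vec3 local = {std::cos(phi) * sinTheta, std::sin(phi) * sinTheta, cosTheta};

  // Orthonormal basis (t, b, normal) around the shading normal.
  Vec3 up = std::fabs(normal.z) < 0.999f ? Vec3{0, 0, 1} : Vec3{1, 0, 0};
  Vec3 t = normalize(cross(up, normal));
  Vec3 b = cross(normal, t);

  // Transform the local-frame sample into world space.
  Vec3 dir = {t.x*local.x + b.x*local.y + normal.x*local.z,
              t.y*local.x + b.y*local.y + normal.y*local.z,
              t.z*local.x + b.z*local.y + normal.z*local.z};
  return {position, normalize(dir)};
}
```

The cosine-weighted sampling matches the Lambertian assumption usually made when baking irradiance into a light map; every texel of the geometry image gets many such rays and the results are averaged.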
duplicate of #318
So I would place the camera in front of the triangle, with respect to its normal, and render that, right?
No. The application needs to render a geometry image of the mesh (easy with e.g. OpenGL, but rather difficult within OSPRay). The resulting position/normal texture is then provided to a new "light map" camera in OSPRay (which is yet to be implemented).
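Rendering the geometry image amounts to rasterizing the mesh in UV space (UVs as screen coordinates) and writing the interpolated world-space position and normal into each covered texel. A minimal CPU sketch for a single triangle, with all names (rasterizeGeometryImage, Texel) invented here, not OSPRay or OpenGL API:

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// One texel of the geometry image: surface position plus normal.
struct Texel { Vec3 position{}, normal{}; bool covered = false; };

// Barycentric weights of point p with respect to the UV triangle (a, b, c).
static std::array<float, 3> barycentric(Vec2 p, Vec2 a, Vec2 b, Vec2 c) {
  float d  = (b.v - c.v) * (a.u - c.u) + (c.u - b.u) * (a.v - c.v);
  float w0 = ((b.v - c.v) * (p.u - c.u) + (c.u - b.u) * (p.v - c.v)) / d;
  float w1 = ((c.v - a.v) * (p.u - c.u) + (a.u - c.u) * (p.v - c.v)) / d;
  return {w0, w1, 1.0f - w0 - w1};
}

static Vec3 lerp3(const Vec3 v[3], const std::array<float, 3>& w) {
  return {w[0]*v[0].x + w[1]*v[1].x + w[2]*v[2].x,
          w[0]*v[0].y + w[1]*v[1].y + w[2]*v[2].y,
          w[0]*v[0].z + w[1]*v[1].z + w[2]*v[2].z};
}

// Rasterize one triangle into a res x res geometry image.
std::vector<Texel> rasterizeGeometryImage(int res, const Vec2 uv[3],
                                          const Vec3 pos[3], const Vec3 nrm[3]) {
  std::vector<Texel> img(res * res);
  for (int y = 0; y < res; ++y)
    for (int x = 0; x < res; ++x) {
      Vec2 p = {(x + 0.5f) / res, (y + 0.5f) / res};  // texel center in UV
      auto w = barycentric(p, uv[0], uv[1], uv[2]);
      if (w[0] < 0.0f || w[1] < 0.0f || w[2] < 0.0f) continue;  // outside
      Texel& t = img[y * res + x];
      t.covered  = true;
      t.position = lerp3(pos, w);
      t.normal   = lerp3(nrm, w);  // re-normalize for curved meshes
    }
  return img;
}
```

In practice this is exactly what an OpenGL pass does in one draw call: a vertex shader outputs the UVs as clip-space position and a fragment shader writes position/normal into two render targets.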
OK, I will try to make my way through it :) Thanks a lot for your help!
Hi,
I need to run a light simulation on a mesh, to find out where best to place a light in different contexts.
Is it possible to extract the light information as a 0.0–1.0 value plus U/V coordinates for selected triangulated geometries? Something like a shadow map, but as structs for each triangle of the geometry.
Thanks