mitsuba-renderer / mitsuba3

Mitsuba 3: A Retargetable Forward and Inverse Renderer
https://www.mitsuba-renderer.org/

Extend projective integrators to support arbitrary sensors #1163

Open dvicini opened 6 months ago

dvicini commented 6 months ago

Currently, the projective integrators explicitly disallow the use of any sensor that is not the standard perspective sensor: https://github.com/mitsuba-renderer/mitsuba3/blob/fda612b6cf3107fabb84dead1ca9c171a48e1314/src/python/python/ad/projective.py#L108

I wanted to use the batch sensor to cut down on some Python overhead, but currently can't, because the sensor Jacobian implementation is heavily specialized to perspective sensors.

This isn't urgent, but it would be good for the projective integrators to eventually support the batch sensor as well.
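
For concreteness, here is a minimal sketch of the kind of setup this would enable: a batch sensor wrapping two perspective cameras, rendered side by side on one film. The plugin and parameter names ('batch', 'perspective', 'hdrfilm') are standard Mitsuba 3; the variant, viewpoints and resolution are made up for illustration, and depending on the Mitsuba version, look_at may need to be called on an instance (mi.ScalarTransform4f().look_at(...)). Passing a sensor like this to prb_projective or direct_projective currently fails at the check linked above.

import mitsuba as mi

mi.set_variant('llvm_ad_rgb')

# Two viewpoints rendered in a single pass; the batch sensor tiles the
# nested sensors' sub-images horizontally on one shared film.
sensor = mi.load_dict({
    'type': 'batch',
    'view_front': {
        'type': 'perspective',
        'fov': 45,
        'to_world': mi.ScalarTransform4f.look_at(origin=[0, 0, 4],
                                                 target=[0, 0, 0],
                                                 up=[0, 1, 0]),
    },
    'view_side': {
        'type': 'perspective',
        'fov': 45,
        'to_world': mi.ScalarTransform4f.look_at(origin=[4, 0, 0],
                                                 target=[0, 0, 0],
                                                 up=[0, 1, 0]),
    },
    # One film shared by all nested sensors (256 pixels per view here).
    'film': {'type': 'hdrfilm', 'width': 512, 'height': 256},
})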

merlinND commented 6 months ago

Also discussed for orthographic in #1160.

ejtwe commented 6 months ago

How is the function perspective_sensor_jacobian() derived? The calculation of J_num / J_den * multiplier is confusing.

# Quoted from src/python/python/ad/projective.py (mi = mitsuba, dr = drjit);
# `self` is the projective integrator.
def perspective_sensor_jacobian(self,
                                sensor: mi.Sensor,
                                ss: mi.SilhouetteSample3f):
    """
    The silhouette sample `ss` stores (1) the sampling density in scene
    space, and (2) the motion of the silhouette point in scene space.
    This Jacobian converts both quantities to the camera sample space.
    """
    if not sensor.__repr__().startswith('PerspectiveCamera'):
        raise Exception("Only perspective cameras are supported")

    to_world = sensor.world_transform()
    near_clip = sensor.near_clip()
    sensor_center = to_world @ mi.Point3f(0)
    sensor_lookat_dir = to_world @ mi.Vector3f(0, 0, 1)
    x_fov = mi.traverse(sensor)["x_fov"][0]
    film = sensor.film()

    camera_to_sample = mi.perspective_projection(
        film.size(),
        film.crop_size(),
        film.crop_offset(),
        x_fov,
        near_clip,
        sensor.far_clip()
    )

    # `p_min` is the sample-space origin mapped into camera space, i.e. a
    # corner of the near plane, whose area is therefore 4 * |x * y|. The
    # multiplier rescales near-plane area to the unit sample space.
    sample_to_camera = camera_to_sample.inverse()
    p_min = sample_to_camera @ mi.Point3f(0, 0, 0)
    multiplier = dr.sqr(near_clip) / dr.abs(p_min[0] * p_min[1] * 4.0)

    # Local frame at the silhouette point: `frame_t` points from the
    # silhouette point towards the camera, `frame_n` is the silhouette
    # normal, and `frame_s` completes the frame.
    frame_t = dr.normalize(sensor_center - ss.p)
    frame_n = ss.n
    frame_s = dr.cross(frame_t, frame_n)

    J_num = dr.norm(dr.cross(frame_n, sensor_lookat_dir)) * \
            dr.norm(dr.cross(frame_s, sensor_lookat_dir)) * \
            dr.abs(dr.dot(frame_s, ss.silhouette_d))
    J_den = dr.sqr(dr.sqr(dr.dot(frame_t, sensor_lookat_dir))) * \
            dr.squared_norm(ss.p - sensor_center)

    return J_num / J_den * multiplier
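
For reference, written out as a direct transcription of the code above (not a derivation): with $c$ the camera center, $d$ its look-at direction, $p$ = ss.p the silhouette point, $n$ = ss.n its normal, $t = (c - p)/\lVert c - p \rVert$, $s = t \times n$, and $\cos\theta = \langle t, d \rangle$, the function returns

$$
J \;=\; \underbrace{\frac{\lVert n \times d \rVert \, \lVert s \times d \rVert \, \lvert \langle s, \delta p \rangle \rvert}{\cos^4\theta \, \lVert p - c \rVert^2}}_{J_\text{num} / J_\text{den}} \cdot \underbrace{\frac{z_\text{near}^2}{4 \, \lvert p_{\min,x} \, p_{\min,y} \rvert}}_{\text{multiplier}}
$$

where $\delta p$ is ss.silhouette_d and $p_{\min}$ is the sample-space origin mapped into camera space. The second factor appears to rescale areas on the near plane (whose camera-space extent is $4 \, \lvert p_{\min,x} \, p_{\min,y} \rvert$) to the unit sample space, and the $\cos^4\theta \, \lVert p - c \rVert^2$ denominator looks like the usual foreshortening and inverse-square factors of a pinhole projection, though the exact derivation would need confirmation from the authors.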