mitsuba-renderer / mitsuba3

Mitsuba 3: A Retargetable Forward and Inverse Renderer
https://www.mitsuba-renderer.org/

[❔ other question] Custom Integrator #166

Closed: DoeringChristian closed this issue 2 years ago

DoeringChristian commented 2 years ago

Summary

I want to implement an Integrator that inherits from mitsuba.Integrator, but it does not seem to have a constructor.

Description

When implementing an integrator using Metropolis sampling in Python, I would need to inherit from mitsuba.Integrator. But when I implement the __init__ function for a class inheriting from mitsuba.Integrator, I get the error "No constructor defined" when calling super().__init__(props).

import mitsuba as mi

class Integrator(mi.Integrator):
    def __init__(self, props=mi.Properties()):
        super().__init__(props)  # raises "No constructor defined"

Is there a way to inherit from mitsuba.Integrator in Python?

Thanks for your help!

ziyi-zhang commented 2 years ago

Hi @DoeringChristian, I guess it's a Pybind11 trampoline issue. I have never inherited from the base integrator class, but those bindings & trampolines are defined in integrator_v.cpp.

If that does not work, you can also inherit from SamplingIntegrator, like all Python integrators do, and rewrite everything from there. This might be the better solution anyway, since there are plans to replace pybind11 with nanobind in the near future.
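
For example, something along these lines should construct without the "No constructor defined" error (an untested sketch; the class and plugin names are just placeholders):

import mitsuba as mi

mi.set_variant("llvm_ad_rgb")

class MyIntegrator(mi.SamplingIntegrator):
    def __init__(self, props=mi.Properties()):
        super().__init__(props)  # works: SamplingIntegrator exposes a constructor to Python

    def sample(self, scene, sampler, ray, medium=None, active=True):
        # Return (spectrum, valid mask, AOV list); trivially black, for illustration only
        si = scene.ray_intersect(ray, active)
        return (mi.Spectrum(0.0), si.is_valid(), [])

mi.register_integrator("my_integrator", lambda props: MyIntegrator(props))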

joeylitalien commented 2 years ago

While trying to implement a naive path tracer in Python, I ran into the same constructor issue. Inheriting from SamplingIntegrator solves it, but I get another error later on. Here's the full code, adapted from prb.py with the differentiability-related bits removed, since only primal rendering is used:

from __future__ import annotations  # Delayed parsing of type annotations
from typing import Tuple

import drjit as dr
import mitsuba as mi

mi.set_variant("llvm_ad_rgb")

def mis_weight(pdf_a, pdf_b):
    """MIS with power heuristic."""

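    # Power heuristic with exponent 2: w_a = pdf_a^2 / (pdf_a^2 + pdf_b^2),
    # detached so the MIS weight itself does not receive derivatives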
    a2 = dr.sqr(pdf_a)
    return dr.detach(dr.select(pdf_a > 0, a2 / dr.fma(pdf_b, pdf_b, a2), 0), True)

class MyPathIntegrator(mi.SamplingIntegrator):
    """Simple path tracer with MIS + NEE."""

    def __init__(self, props):
        super().__init__(props)
        self.max_depth = props.get('max_depth', 12)
        self.rr_depth = props.get('rr_depth', 5)

    def sample(self,
               scene: mi.Scene,
               sampler: mi.Sampler,
               ray: mi.Ray3f,
               medium: mi.Medium = None, 
               active: bool = True
    ) -> Tuple[mi.Spectrum,
               mi.Bool, mi.Spectrum]:

        # Standard BSDF evaluation context for path tracing
        bsdf_ctx = mi.BSDFContext()

        # --------------------- Configure loop state ----------------------

        # Copy input arguments to avoid mutating the caller's state
        ray = mi.Ray3f(ray)
        depth = mi.UInt32(0)      # Depth of current vertex
        L = mi.Spectrum(0)        # Radiance accumulator
        β = mi.Spectrum(1)        # Path throughput weight
        η = mi.Float(1)           # Index of refraction
        active = mi.Bool(active)  # Active SIMD lanes

        # Variables caching information from the previous bounce
        prev_si         = dr.zeros(mi.SurfaceInteraction3f)
        prev_bsdf_pdf   = mi.Float(1.0)
        prev_bsdf_delta = mi.Bool(True)

        # Record the following loop in its entirety
        loop = mi.Loop(name="Custom Path Tracer",
                       state=lambda: (sampler, ray, depth, L, β, η, active,
                                      prev_si, prev_bsdf_pdf, prev_bsdf_delta))

        # Specify the max. number of loop iterations (this can help avoid
        # costly synchronization when wavefront-style loops are generated)
        loop.set_max_iterations(self.max_depth)

        while loop(active):
            # Compute a surface interaction with given ray
            si = scene.ray_intersect(ray,
                                     ray_flags=mi.RayFlags.All,
                                     coherent=dr.eq(depth, 0))

            # Get the BSDF, potentially computes texture-space differentials
            bsdf = si.bsdf(ray)

            # ---------------------- Direct emission ----------------------

            # Compute MIS weight for emitter sample from previous bounce
            ds = mi.DirectionSample3f(scene, si=si, ref=prev_si)

            mis = mis_weight(
                prev_bsdf_pdf,
                scene.pdf_emitter_direction(prev_si, ds, ~prev_bsdf_delta)
            )

            Le = β * mis * ds.emitter.eval(si)

            # ---------------------- Emitter sampling ----------------------

            # Should we continue tracing to reach one more vertex?
            active_next = (depth + 1 < self.max_depth) & si.is_valid()

            # Is emitter sampling even possible on the current vertex?
            active_em = active_next & mi.has_flag(bsdf.flags(), mi.BSDFFlags.Smooth)

            # If so, randomly sample an emitter
            ds, em_weight = scene.sample_emitter_direction(
                si, sampler.next_2d(), True, active_em)
            active_em &= dr.neq(ds.pdf, 0.0)

            # Evaluate BSDF * cos(theta) differentiably
            wo = si.to_local(ds.d)
            bsdf_value_em, bsdf_pdf_em = bsdf.eval_pdf(bsdf_ctx, si, wo, active_em)
            mis_em = dr.select(ds.delta, 1, mis_weight(ds.pdf, bsdf_pdf_em))
            Lr_dir = β * mis_em * bsdf_value_em * em_weight

            # ------------------ Detached BSDF sampling -------------------

            bsdf_sample, bsdf_weight = bsdf.sample(bsdf_ctx, si,
                                                   sampler.next_1d(),
                                                   sampler.next_2d(),
                                                   active_next)

            # ---- Update loop variables based on current interaction -----

            L = L + Le + Lr_dir
            ray = si.spawn_ray(si.to_world(bsdf_sample.wo))
            η *= bsdf_sample.eta
            β *= bsdf_weight

            # Information about the current vertex needed by the next iteration

            prev_si = dr.detach(si, True)
            prev_bsdf_pdf = bsdf_sample.pdf
            prev_bsdf_delta = mi.has_flag(bsdf_sample.sampled_type, mi.BSDFFlags.Delta)

            # -------------------- Stopping criterion ---------------------

            # Don't run another iteration if the throughput has reached zero
            β_max = dr.max(β)
            active_next &= dr.neq(β_max, 0)

            # Russian roulette stopping probability (must cancel out ior^2
            # to obtain unitless throughput, enforces a minimum probability)
            rr_prob = dr.minimum(β_max * η**2, .95)

            # Apply this only further along the path, since it introduces variance
            rr_active = depth >= self.rr_depth
            β[rr_active] *= dr.rcp(rr_prob)
            rr_continue = sampler.next_1d() < rr_prob
            active_next &= ~rr_active | rr_continue

            depth[si.is_valid()] += 1
            active = active_next

        return (L, dr.neq(depth, 0), L)

# Register new integrator
mi.register_integrator("mypath", lambda props: MyPathIntegrator(props))

# Load Cornell box scene & update integrator to custom one
cbox = mi.cornell_box()
cbox['integrator']['type'] = "mypath"

# Render
scene = mi.load_dict(cbox)
img = mi.render(scene, spp=16)
mi.Bitmap(img).write("scene.exr")

The error message I get is:

Critical Dr.Jit compiler failure: jit_var(r1818313558): unknown variable!
[1]    13223 abort      python3 path.py

This looks rather trivial to fix, except I'm not sure how to run a debugger here. Is there any documentation or dev tips for debugging Dr.Jit?

Thanks!

DoeringChristian commented 2 years ago

You can set the log level of Dr.Jit to debug using:

dr.set_log_level(dr.LogLevel.Debug)
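
For context, I place the call near the top of the script, before loading or rendering anything:

import drjit as dr
import mitsuba as mi

mi.set_variant("cuda_ad_rgb")
dr.set_log_level(dr.LogLevel.Debug)  # increase Dr.Jit's log verbosity before loading/rendering

# ... build the scene and call mi.render() as usual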

drjit.set_log_level is not listed in the reference part of the Dr.Jit documentation, but under this section. The available log levels can be found here. I have tested your example, and it indeed fails with the error you mentioned. I have managed to write a simpler example that works:


import mitsuba as mi
import drjit as dr
import matplotlib.pyplot as plt

mi.set_variant("cuda_ad_rgb")

def mis_weight(pdf_a, pdf_b):
    a2 = dr.sqr(pdf_a)
    return dr.detach(dr.select(pdf_a > 0, a2 / dr.fma(pdf_b, pdf_b, a2), 0), True)
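# Note: mis_weight is not actually used in this simplified integrator; it is carried over from the example above.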

class Simple(mi.SamplingIntegrator):
    def __init__(self, props=mi.Properties()):
        super().__init__(props)
        self.max_depth = props.get("max_depth")
        self.rr_depth = props.get("rr_depth")

    def sample(self, scene: mi.Scene, sampler: mi.Sampler, ray_: mi.RayDifferential3f, medium: mi.Medium = None, active: bool = True):
        bsdf_ctx = mi.BSDFContext()

        ray = mi.Ray3f(ray_)
        depth = mi.UInt32(0)
        f = mi.Spectrum(1.)
        L = mi.Spectrum(0.)

        prev_si = dr.zeros(mi.SurfaceInteraction3f)

        loop = mi.Loop(name="Path Tracing", state=lambda: (
            sampler, ray, depth, f, L, active, prev_si))

        loop.set_max_iterations(self.max_depth)

        while loop(active):
            si: mi.SurfaceInteraction3f = scene.ray_intersect(
                ray, ray_flags=mi.RayFlags.All, coherent=dr.eq(depth, 0))

            bsdf: mi.BSDF = si.bsdf(ray)

            # Direct emission

            ds = mi.DirectionSample3f(scene, si=si, ref=prev_si)

            Le = f * ds.emitter.eval(si)

            active_next = (depth + 1 < self.max_depth) & si.is_valid()

            # BSDF Sampling
            bsdf_sample, bsdf_val = bsdf.sample(
                bsdf_ctx, si, sampler.next_1d(), sampler.next_2d(), active_next)

            # Update loop variables

            ray = si.spawn_ray(si.to_world(bsdf_sample.wo))
            L = (L + Le)
            f *= bsdf_val

            prev_si = dr.detach(si, True)

            # Stopping criterion (Russian roulette)

            active_next &= dr.neq(dr.max(f), 0)

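            # Survive with probability equal to the largest throughput channel
            # (forced to 1 below rr_depth) and compensate surviving lanes by
            # dividing the throughput by that probability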
            rr_prob = dr.maximum(f.x, dr.maximum(f.y, f.z))
            rr_prob[depth < self.rr_depth] = 1.
            f *= dr.rcp(rr_prob)
            active_next &= (sampler.next_1d() < rr_prob)

            active = active_next
            depth += 1
        return (L, dr.neq(depth, 0), [])

mi.register_integrator("integrator", lambda props: Simple(props))

scene = mi.cornell_box()
scene['integrator']['type'] = 'integrator'
scene['integrator']['max_depth'] = 16
scene['integrator']['rr_depth'] = 2
scene['sensor']['sampler']['sample_count'] = 64
scene['sensor']['film']['width'] = 1024
scene['sensor']['film']['height'] = 1024
scene = mi.load_dict(scene)

img = mi.render(scene)

plt.imshow(img ** (1. / 2.2))
plt.axis("off")
plt.show()

joeylitalien commented 2 years ago

Thanks for the tip @DoeringChristian! It was indeed a trivial fix: the last item of the return tuple should be an AOV list, not a spectrum, so changing to an empty list works as expected. 🙂
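
For reference, the corrected last line of sample() in the code above:

        # (spectrum, valid mask, AOV list); the third element must be a
        # (possibly empty) list of AOV values rather than a spectrum
        return (L, dr.neq(depth, 0), [])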

DoeringChristian commented 2 years ago

My custom integrator is now working with the render function overwritten (I haven't ported the full render function code from C++). This wasn't that easy, because not everything is documented yet, for example "drjit.set_log_level". Also, some Mitsuba types (mitsuba.Float, mitsuba.UInt32, etc.) do not yet appear in autocomplete suggestions, at least with my setup; this might be due to the stub files not being complete. I think it would be great if there were an example for implementing each of the renderer's components in Python, since the ability to quickly and easily test ideas is one of the best features of Mitsuba, and a starting point is very useful. Anyway, if anyone needs an incomplete Python implementation of the SamplingIntegrator: this should work. Should I improve this code and maybe add it to the documentation?

Thanks for all your help. Should I close the issue or are there any further problems implementing Integrators?

Speierers commented 2 years ago

Thanks for reporting those issues @DoeringChristian. Indeed, the Dr.Jit documentation is still missing; we are working hard on this, as we know it is important for new users.

Regarding the autocomplete issue, could you provide more information on the system you use (e.g. OS, IDE, Python version, ...) so I can give this a try on my end?

Speierers commented 2 years ago

I think it would be great if there was an example for implementing each of the renderer's components in python ...

Feel free to open a thread on Discussions if you have a good suggestion on how this should be done :)

DoeringChristian commented 2 years ago

Thanks for reporting those issues @DoeringChristian. Indeed, the Dr.Jit documentation is still missing; we are working hard on this, as we know it is important for new users.

Regarding the autocomplete issue, could you provide more information on the system you use (e.g. OS, IDE, Python version, ...) so I can give this a try on my end?

Ok, it might be better to put this into a separate issue, but I'm using Neovim with python-language-server (Python 3.10.5) as the backend through Neovim's built-in LSP. I also tried it in CodeOSS with the Jedi LSP (Python 3.9.9): Screenshot from 2022-08-15 10-46-02. PyCharm (Python 3.10) didn't work either:

Screenshot from 2022-08-15 11-01-17

Vector or Point types, e.g. mi.Vector3f, are completed.

Speierers commented 2 years ago

Ok, this should be fixed in the next release: 824917