mitsuba-renderer / drjit

Dr.Jit — A Just-In-Time-Compiler for Differentiable Rendering

Is it possible to extend a UInt with additional values? #207

Open viswaravi opened 9 months ago

viswaravi commented 9 months ago

I am trying to gather multiple values from a TensorXf inside a custom BSDF plugin. To build a dynamic number of index values I only have the starting index as a UInt. Can I add additional numbers/indices to my UInt?

If I query them separately as Float, I am not able to combine them into a single TensorXf.

Eval function of the custom BSDF plugin, which queries values from the TensorXf:

def __init__(self, props: mi.Properties) -> None:
    self.n = 6

def eval(self, si, active):
    x, y = si.uv.x, si.uv.y
    x = x * (self.resolution - 1)
    y = y * (self.resolution - 1)
    start_index = (y * 128 + x) * self.n
    index_value = dr.gather(Float, self.TensorArray.array, start_index)
    result_tensor = TensorXf(index_value, shape=dr.shape(index_value))

njroussel commented 9 months ago

Hi @viswaravi

You're trying to change the vectorization width mid-way through your computation. Let's say you're rendering with 32 rays; once you reach the BSDF eval, your si will have dr.width(si) == 32. Now inside your eval method you're trying to build some index which doesn't have width 32. This requires a change in kernel width. What you most likely want is for every thread to handle multiple indices, in which case something like a Vector3u is more appropriate: it still has width 32 and every thread is responsible for handling 3 indices. The value 3 is arbitrary here; you could also save the indices in a simple Python list. The important part is that every variable has the same vectorization width.
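For illustration, a minimal sketch of that pattern outside of any renderer (the buffer data, the start indices, and the channel count 3 are made-up assumptions; in a real BSDF the start index would be derived from si rather than written as a literal):

import drjit as dr
import mitsuba as mi
mi.set_variant('llvm_ad_rgb')  # any JIT variant works

Float, UInt32 = mi.Float, mi.UInt32

data = dr.arange(Float, 1024)   # hypothetical flat buffer to gather from
start = UInt32([0, 3, 6, 9])    # stand-in for a per-ray start index (width 4 here, width 32 in the example above)

# Every lane handles 3 indices; all variables keep the same vectorization width
values = [dr.gather(Float, data, start + i) for i in range(3)]
color = mi.Color3f(values[0], values[1], values[2])   # still width 4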

For completeness, it is possible to change the kernel width. But it will break the kernel into two separate ones. We support this mode of execution and refer to it as "wavefront" mode (as opposed to "megakernel" mode). You can read more about how to turn it on and its implications here: https://github.com/mitsuba-renderer/mitsuba3/discussions/586
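In case it helps, a hedged sketch of how wavefront mode is typically switched on (these flag names belong to the Dr.Jit 0.4.x generation and may differ in other versions; the linked discussion is the authoritative reference):

import drjit as dr

# Disable recorded (symbolic) loops and virtual function calls so that
# Mitsuba falls back to wavefront-style execution with multiple kernels.
dr.set_flag(dr.JitFlag.LoopRecord, False)
dr.set_flag(dr.JitFlag.VCallRecord, False)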

viswaravi commented 9 months ago

Thank you very much for your response. I am trying to achieve this using Megakernel mode as much as possible. So I want to clarify one final thing before I switch to wavefront mode.

I want each sample to handle more than 3 indices, which would mean the index list has n values, like

start_index = (y * 128 + x) * self.n
start_index = UInt(start_index)
indexes = [start_index, start_index + 1, ..., start_index + n]

as I need to gather n values from my TensorXf.

indexes = UInt([0, 1, 2])

brings up the "Cannot gather from placeholder value" error.

My goal is to propagate gradients from the rendering back to my TensorXf values, as the n gathered values would be converted to mi.Color3f and returned from the BSDF eval function.

If you have any other suggestion to achieve this in megakernel mode itself, please let me know.

njroussel commented 9 months ago

I think I'm missing a step.

If you keep indexes as a list[mi.UInt32] of size N, you can then have a list[mi.Float] of size N (just by performing N separate gathers). What's the purpose of merging/concatenating these N values into a TensorXf? I'm assuming you have some function like tensor_to_color(arg: mi.TensorXf) -> mi.Color3f. This function should be changed to accept a list[mi.Float].

Let me clarify the following: a TensorXf is just a wrapper over a flat mi.Float buffer (which you can access directly with tensor.array, as you seem to already know). Its vectorization width is therefore always the size of the underlying mi.Float. No matter how it's constructed, it cannot have the same width as your input si unless you're only collecting/gathering one value per thread.
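A tiny example of this point (the shape is an arbitrary assumption): the tensor's width is always the size of its flat backing buffer, independent of any ray count.

import drjit as dr
import mitsuba as mi
mi.set_variant('llvm_ad_rgb')

t = mi.TensorXf(dr.zeros(mi.Float, 4 * 4 * 6), shape=(4, 4, 6))
print(dr.shape(t))          # (4, 4, 6)
print(dr.width(t.array))    # 96, the size of the flat mi.Float backing the tensor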

viswaravi commented 9 months ago

The purpose of merging N mi.Float values into a single TensorXf is that I have BSDF characteristics encoded in a texture (TensorXf) with N channels. I want to query them based on the U,V parameters and finally convert the N channels into a Color3f as the output of the custom BSDF plugin.

So my requirement is to convert list[mi.Float] into a single mi.Float

As you have suggested, I can perform N separate gathers and have a list[mi.Float]. Is it possible to convert the list[mi.Float] into a single flat mi.Float so that it can be converted to a TensorXf later?

A = mi.Float(1)
B = mi.Float(2)
C = [1, 2]  # Required result: a single flat mi.Float holding both values

Or alternatively, if I gather values through multiple Array3f, can I merge them into a single flat mi.Float?

A = Array3f([0, 1, 2])
B = Array3f([3, 4, 5])
C = [0, 1, 2, 3, 4, 5]  # Required result

Note: as I want to use megakernel mode, I can't use the scatter function to achieve this, since the target variable of the scatter won't be evaluated outside the rendering loop.

njroussel commented 8 months ago

Hi again @viswaravi

Sorry for the delay, we've been busy with a new release.

So my requirement is to convert list[mi.Float] into a single mi.Float

As I said in my previous message, this is only possible in wavefront mode as it changes the vectorization width. The same holds for your second example with the Array3f.

The purpose of merging N mi.Float values into a single TensorXf is that I have BSDF characteristics encoded in a texture (TensorXf) with N channels. I want to query them based on the U,V parameters and finally convert the N channels into a Color3f as the output of the custom BSDF plugin.

None of this requires a TensorXf; it can be done with Python list objects. Here's some pseudocode to illustrate what I mean:

bsdf_characteristics: list[mi.TensorXf] = (...)  # N items of shape U x V
uv_start_index: mi.UInt32 = compute_start_index(si.uv)

all_uv_indices = [uv_start_index]
for i in range(N - 1):
    all_uv_indices.append(next_index_from_current_index(all_uv_indices[i]))
# Now we have all_uv_indices as an N-sized list of mi.UInt32 objects

characteristics = []
for i in range(N):
    characteristics.append(dr.gather(mi.Float, bsdf_characteristics[i].array, all_uv_indices[i]))

# Some function that converts N mi.Float objects into a single mi.Color3f
output = characteristics_to_color3f(characteristics)

I think you're getting vectorization width and array lengths confused. An easy way to write code is to just assume you have a vectorization width of 1: what would the code look like then? Most likely it will be close to what I wrote above. By trying to merge mi.Float objects together, you're changing vectorization widths when all you want is to put them in an array.
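To tie the pieces of the thread together, here is a minimal, hypothetical sketch of the whole pattern in megakernel mode: an n-channel texture stored as a TensorXf, per-lane gathers from its flat buffer, and gradients flowing back to the tensor. All names (data, lookup, res, n, the choice of loss) are illustrative assumptions rather than anything prescribed above.

import drjit as dr
import mitsuba as mi
mi.set_variant('llvm_ad_rgb')   # assumption: any *_ad_* variant

n, res = 6, 128                 # hypothetical channel count / texture resolution
data = mi.TensorXf(dr.zeros(mi.Float, res * res * n), shape=(res, res, n))
dr.enable_grad(data)            # gradients accumulate on the flat buffer

def lookup(uv, active=True):
    # Per-lane start index into the flat buffer; same width as uv
    x = mi.UInt32(uv.x * (res - 1))
    y = mi.UInt32(uv.y * (res - 1))
    start = (y * res + x) * n

    # N separate gathers -> a Python list of mi.Float, each with the ray width
    values = [dr.gather(mi.Float, data.array, start + i, active) for i in range(n)]

    # Assumption: only the first three channels form the returned color
    return mi.Color3f(values[0], values[1], values[2])

uv = mi.Point2f(mi.Float(0.1, 0.7), mi.Float(0.3, 0.9))   # two lanes, i.e. width 2
color = lookup(uv)

loss = dr.sum(color.x + color.y + color.z)   # scalar loss, just for demonstration
dr.backward(loss)
print(dr.grad(data))                         # nonzero entries at the gathered texels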