mmp / pbrt-v3

Source code for pbrt, the renderer described in the third edition of "Physically Based Rendering: From Theory To Implementation", by Matt Pharr, Wenzel Jakob, and Greg Humphreys.
http://pbrt.org
BSD 2-Clause "Simplified" License

About formula (16.1) in the pbrt-v3 book #336

Open LittleTheFu opened 2 months ago

LittleTheFu commented 2 months ago

here is the link below https://www.pbr-book.org/3ed-2018/Light_Transport_III_Bidirectional_Methods/The_Path-Space_Measurement_Equation

I marked it with red lines; I don't know why A_film is missing. (image: 捕获, "capture")

rainbow-app commented 2 weeks ago

Yes, I'm late, been 2 months since you asked.

Let me say that this whole bi-dir topic is very vague in the book. After trying to understand it, I gave up and turned to Veach's PhD thesis. He describes the algorithm much better. Unfortunately he's too abstract, and doesn't provide any concrete examples.

Anyway, I can't comment on those integrals and answer your question.

However, if you want to understand how we get to the result (the importance expression), I think I can help you. I can write the proper (in my opinion, I'm a physicist by education) derivation -- from the measurement. Do you want to understand that?


For now I'll briefly describe why their derivation of the expression for the importance W is vague. Their two key arguments are:

Weird arguments, in my opinion, but ok, whatever. What's missing is a demonstration that the W so defined really measures radiance. After all, that is importance's sole purpose! In fact, their W doesn't measure radiance (unless the image is 1x1, a single pixel); the code nevertheless works.

LittleTheFu commented 2 weeks ago

Thank you for your reply. I still can't grasp the meaning of "W". I searched for this concept, but I still can't understand it. It would be very nice if you could explain it in detail.


> However, if you want to understand how we get to the result (the importance expression), I think I can help you. I can write the proper (in my opinion, I'm a physicist by education) derivation -- from the measurement. Do you want to understand that?

yes, I want to understand it !!!!!

rainbow-app commented 2 weeks ago

Let me repeat: I found the pbrt book very vague on BDPT, so I use Veach's formulas and his notation (splitting W into W^0 = spatial part and W^1 = directional part). Don't let his measure-theory stuff scare you -- I found it very easy to ignore.

Assume the camera is a pinhole, i.e. a point (I didn't consider realistic cameras). Assume it measures some radiance L from a remote area light.

See eq. 8.7 (p. 223) in Veach. The first term after the second equals sign gives us the measurement in our case. We'll derive the importance expression by equating it to L.
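For readers without the thesis at hand, the measurement equation there has roughly this shape (reconstructed from memory, and in my own notation, so verify against Veach):

```latex
% Measurement for pixel j: a double integral over scene surfaces that
% weights the radiance L by the importance W_e^{(j)} and the geometry
% term G (cosines over squared distance, times visibility).
I_j = \iint W_e^{(j)}(x_0 \to x_1)\, L(x_1 \to x_0)\,
      G(x_0 \leftrightarrow x_1)\; dA(x_0)\, dA(x_1)
```

Everything below is about choosing the surface of integration and the function W_e so that this integral returns L.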

How camera is set up:

First consider only 1 pixel on the sensor.

(image: importance-github -- s = area of 1 pixel, S = area of the remote surface; they are related as shown.)

Now that term becomes $\int_S L \, G \, W^1 C \; dA(x_0)$, and it must equal $L$ to measure brightness = radiance. This integral runs only over the small remote surface S.

You should now be able to follow the simple arithmetic in the image.
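In case the image doesn't survive here, this is one way that arithmetic can go (my assumptions: film plane at distance d behind the pinhole, pixel of area s seen at angle theta off the optical axis, remote surface S at distance D with its normal at angle theta' to the line of sight, and L constant over S):

```latex
% The pinhole sees S exactly through the pixel, so both patches
% subtend the same solid angle at the camera point:
\Omega = \frac{s \cos^3\theta}{d^2} = \frac{S \cos\theta'}{D^2}

% Collapse the measurement term (L constant, S small), using
% G = \cos\theta \cos\theta' / D^2:
\int_S L\, G\, W^1 C \; dA
  \approx L\, C\, W^1 \frac{\cos\theta\cos\theta'}{D^2}\, S
  = L\, C\, W^1 \cos\theta\, \Omega
  = L\, C\, W^1 \frac{s \cos^4\theta}{d^2}

% Setting this equal to L fixes the product:
C\, W^1 = \frac{d^2}{s \cos^4\theta}
```

Note the familiar 1/cos^4(theta) falloff of the pbrt importance appearing here; with the film plane at d = 1 the single-pixel result is 1/(s cos^4(theta)).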

Now consider the full sensor, MxN pixels.

This is BDPT = bi-dir, so for each pixel we start a light subpath and get splats. So the brightness of the scene will be M*N times larger than it was for 1 pixel, and we need to compensate for this increase by dividing by M*N. This corresponds to dividing by the full sensor area A instead of s in the expression for C. And we get the expression from the pbrt textbook.

(this last argument is totally missing from the pbrt textbook, which is very sad)
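As a sanity check, here is a tiny numeric sketch of the resulting importance. This is not pbrt's actual code; the function name and the d = 1 film-plane convention are my assumptions:

```python
import math

def pinhole_importance(cos_theta, film_area, d=1.0):
    """Directional importance W for a pinhole camera with the film
    plane at distance d behind the pinhole.  Obtained by requiring
    that the single-pixel measurement equal the incident radiance L,
    then dividing by the pixel count M*N, which replaces the pixel
    area s with the full film area A."""
    if cos_theta <= 0.0:
        return 0.0  # direction does not point through the film
    return d * d / (film_area * cos_theta ** 4)

# On-axis direction (cos_theta = 1): W = 1/A.
print(pinhole_importance(1.0, film_area=4.0))  # 0.25
# 45 degrees off-axis: the cos^4 factor boosts W by a factor of ~4.
print(pinhole_importance(math.cos(math.pi / 4), film_area=1.0))
```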

A few additional comments:

Hope this is detailed enough.

LittleTheFu commented 2 weeks ago

Thank you for your comment, it's very kind of you.

But I'm stuck in the middle, namely on how to get W^0. I know the definitions of W^0 and W^1, like this: (image: we01)

But I don't know how to get the W^0 used in this step: (image: w)

rainbow-app commented 2 weeks ago

Your second equation is good, but written the other way around. It should be W = W^0 * W^1; that's just how we split W (there's not much to think about).

The first one is good angle-wise (all the cosines cancel). Magnitude-wise -- no. If we consider only 1 pixel, there's no integral. You do write an integral, so it seems you are considering the final W for the whole sensor. You can't do that: no magic jumps, please. You need to derive it in two steps: (1) 1 pixel, (2) whole sensor.

Neither of your two equations can be used as definition. W^0 and W^1 are not defined, they are derived.

Now to your question.

W^0 is derived from how you decide to model the camera. It doesn't follow from any equations. See "Camera position is modeled as a...". The general expression for W^0 (C times a delta function) follows from those words. I'm sure there can be other approaches; I just picked the simplest (to my taste) model that could be fit into Veach's integrals.
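In symbols (my notation), the model just described amounts to a spatial part that is a scaled delta function at the camera point x_cam on the small host surface:

```latex
% Spatial part of the importance: all of it is concentrated at the
% camera point on the small host surface.
W^0(x_0) = C\, \delta(x_0 - x_{\mathrm{cam}})

% Any surface integral against W^0 then collapses:
\int f(x_0)\, W^0(x_0)\; dA(x_0) = C\, f(x_{\mathrm{cam}})
```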

rainbow-app commented 1 week ago

I guess, that (or something else) is still not clear.

The measurement eq. (Veach 8.7) gives us the freedom to choose a surface (one existing in the scene, or a newly introduced one) and a function (W). For a pinhole camera we don't need that much freedom: there's nothing to integrate (= average with a weight W). Well, almost nothing: we would still like some averaging over the pixel for anti-aliasing purposes. But roughly speaking, yes, we don't need that freedom.

Remember, we are at step 1 out of 2 = consider 1 pixel only.

The pixel value is determined by the energy arriving from a very narrow solid-angle cone (again, no integral = no averaging = the pixel is small). The cone is determined by the position of its origin (the camera center) and the position and size of the pixel.

Now there can be two approaches:

  1. Fix origin, and integrate over the pixel.

  2. Fix pixel, and integrate over surface that hosts the cone origin.

1st approach. Introduce a small sensor surface, and integrate over it. We choose the spatial W^0 to be similar to a delta function for that pixel: approximate the integral by the product of the integrand and the small pixel area. The camera center is fixed somewhere else (behind the sensor = off the surface), but that doesn't matter because the camera is point-like anyway (point-like, yes, but in the implementation we can still set d_s = 1 -- it doesn't matter).

2nd approach. Introduce a small surface to host the camera center, and collapse the integral with a delta function (this time a real delta, at a mathematical point) in W^0 (this is our freedom). And no integral over the sensor surface. Well, roughly speaking: there will be an integral, but it's a different integral, not like Veach 8.7; we'd approximate it as above.

We can choose either approach; each does its job (= measures radiance for the j-th pixel) properly. It is easy to see that both lead to the same result. I originally chose the 2nd.
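Schematically (again my notation, and only a sketch), the two approaches compute the same measurement:

```latex
% Approach 1: W^0 is approximately a delta concentrated on pixel j of
% the sensor surface; approximate the sensor integral by
% (pixel area) x (integrand at the pixel center).

% Approach 2: W^0 = C \delta(x_0 - x_{cam}) on a host surface through
% the camera center; the spatial integral collapses exactly, leaving
% only the integral over the remote surface:
I_j = C \int_S W^1(x_{\mathrm{cam}} \to x_1)\,
      L(x_1 \to x_{\mathrm{cam}})\, G \; dA(x_1)
```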

(in both cases we equip the sensor with a small ideal lens; and none of this participates in ray tracing)


I don't mind you taking breaks, reading Veach, or just living your life, but I was hoping you would confirm that you had resolved it.

UPD. Imagine you are given the task of finding a surface and a function W that would give (measure) the radiance for a single pixel in a pinhole camera. Try to do it on your own. Most likely you'll end up with the same arguments, and the same expression for W^0.

LittleTheFu commented 1 week ago

I'm sorry for taking such a long time.

After reading your post, I think I finally understand, but I'm not entirely sure. Let me repeat it to see if I've got it right.

The "W" we want is the integral over the red region (cone?). The yellow line is one line that carries its own weight, and this is the directional function (or W^1, or the delta function). And the pixel area is W^0. Because the pixel is so small, we can get the integral simply by multiplying the weight the yellow line carries by the pixel area.

See the markers in the picture. The real meaning of "delta" here is that, given a point on the pixel, we get only one weight line. (As in the picture, the blue circle specifies the single yellow line.)

(image: w)

rainbow-app commented 1 week ago

Something's right, something's wrong.

> Because the pixel is so small, we can get the integral simply by multiplying the weight the yellow line carries by the pixel area.

This is very right (a very simple idea, really). However, the preceding details (what exactly you mean by "the weight the yellow line carries") are wrong.

> The "W" we want is the integral over the red region (cone?).

No, W is a function. The measurement we want is the integral over the red pixel.

> And the pixel area is W^0.

Very much no.


Most importantly, you seem to miss that

> The measurement eq. (Veach 8.7) gives us the freedom to choose a surface (existing in the scene, or introducing a new one) and a function (W).

Slowly: We measure something. As an integral. Over a surface. With a weight inside the integral. Notice: surface+W are used together.

It's meaningless (in general, and in this case) to give the values of a function without specifying where it is defined. What is the surface of integration (the one that enters Veach 8.7) in your picture? Here we come back to my previous post, and the 2 approaches.


Another good thing in your post is the strategy for resolving this: re-read my posts (because I have already written pretty much all I could), then ask questions, then go back -- in a loop until everything is cleared up.

LittleTheFu commented 6 days ago

> Slowly: We measure something. As an integral. Over a surface. With a weight inside the integral. Notice: surface + W are used together.

I still seem to have a lot of confusion, so let me try to resolve this now.

1. W is not something that can be found only at the camera; it exists anywhere in the scene. I can imagine the camera as a light source that emits W (just as a light emits rays). Then W bounces around the scene and finally reaches a state of equilibrium. (image: wight_emit_bounce) Suppose we can get W from a position and a direction, like W(pos, dir); the parameter "pos" can be any point in the scene, and "dir" can be any direction. So we need another helper function, described below in step 2.

2. (I have doubts about this step, but let's go with it for now.) At any surface in the scene, we can get a function just like the BRDF, but this one is for W. (image: W_BRDF)


3. Combined with the function "g" and the W emitted from the camera, we can finally get W at any position and in any direction.


4. Just like step 1, but this time the main character is the light. It is emitted, bounces around, and finally reaches a state of equilibrium. (image: light_bounce)


5. Like step 2, we can get the BRDF from the surface (because of the nature of the materials). (image: L_brdf)


6. Now things have changed. We must combine "W", "L", and "f" to get the final result. (image: RESULT)

In a world without "W", things go like this: (image: L_forular)

But now, when "W" joins the game, it looks like this: (image: WITH_W_L)

rainbow-app commented 5 days ago

> camera ... it emits W

I looked at your post only briefly -- but long enough to say that I won't try to understand anything there. You are totally ignoring my derivation and going the pbrt way. Ok, fine with me, but then you are on your own; I can't help you with that.

I don't claim this dual photography is wrong (I used the word "vague" in the first post). Veach also has a chapter on adjoint. I certainly know bra-ket in quantum theory, and co-vectors in general relativity (this is a basic concept from linear algebra), so there may really be some meaning behind it. It just looks very unnatural to me in this case (the camera in computer graphics), so I didn't look at it.