RayTracing / raytracing.github.io

Main Web Site (Online Books)
https://raytracing.github.io/
Creative Commons Zero v1.0 Universal

Picking directions on the upper hemisphere #155

Closed vchizhov closed 4 years ago

vchizhov commented 5 years ago

This is more of a suggestion that should provide better intuition for people learning ray tracing for the first time. As I see it, it is currently not quite clear why the scattering directions need to be picked on a unit sphere offset along the normal (even though that provides a cosine-weighted hemisphere distribution). The geometric and algebraic derivations are both non-trivial and possibly non-intuitive; this is clearly illustrated by the fact that picking directions inside the ball yields a cos^3 distribution, while picking points on the sphere yields the desired cos distribution.

My suggestion is to directly generate points in or on the upper hemisphere, which is a lot more intuitive and can be directly related to the rendering equation and light scattering (even informally). This can be done without inverse transform sampling: generate points in the unit ball (or on the unit sphere) as was already done, and reflect any vector in the lower hemisphere into the upper hemisphere with respect to the normal (this requires a dot product check per vector).

Note that this induces a uniform distribution (so higher variance, and a normalization factor of 2*pi rather than pi), and requires a cos(theta) multiplication in the recursion, which will be a good point to introduce Lambert's law and help motivate the material's name.

hollasch commented 4 years ago

Thoughts, @petershirley ?

petershirley commented 4 years ago

I've been pondering this and not converging. Here are some issues:

1) "diffuse" does not need to be Lambertian
2) But Lambertian has some advantages
3) The method of generating rays on the unit shell is indeed unintuitive

Position 1: the simplest way to generate Lambertian directions should be used
Position 2: the simplest way to generate any kind of diffuse rays should be used, as VC suggests

I am pretty comfortable with both those arguments; they both have a point. Anyone have a strong opinion?

vchizhov commented 4 years ago

I wouldn't say that what I propose is necessarily simpler to implement. Sampling points within the unit ball, normalizing those to get points on the unit sphere, and offsetting those along the scattering normal doesn't seem more complex than sampling points in the unit ball and selectively reflecting those around the scattering plane (using a dot product).

Theoretically, however, motivating the former approach is a lot harder, and possibly not very intuitive. Algebraically it requires notions of probability theory and knowledge on the properties of the Dirac delta: https://github.com/vchizhov/Derivations/blob/master/Probability%20density%20functions%20of%20the%20projected%20offset%20disk%2C%20circle%2C%20ball%2C%20and%20sphere.pdf

Geometrically it requires some creativity and knowledge of Archimedes' Hat-Box theorem: http://amietia.com/lambertnotangent.html

trevordblack commented 4 years ago

There's a compromise possible here.

I think that having random scattering above the horizon is intuitive.

But I also think that we should eventually turn diffuse into cosine scattering.

We could use horizon scattering in InOneWeekend.

And go back through in NextWeek or RestOfYourLife to change to cos scattering.

I unfortunately cannot think of where in either of those texts I'd be happy putting the change.


petershirley commented 4 years ago

So thematically the three books are:

1) path of least resistance, get results fast
2) oh, so you want some better pictures? Here is some stuff
3) oh, so you are hard core? Here is your kamikaze headband

Does that make it more clear (not to me yet)?

trevordblack commented 4 years ago

I guess we have three options for implementation in InOneWeekend:

  1. Scattering within the unit sphere offset by normal (previous, simplest to implement)
  2. Uniform scattering above the horizon (most complicated to implement, intuitive)
  3. Cos scattering about the normal (least intuitive, is correct)

Looking at it from those terms, I could genuinely be convinced that any of the above is the right solution for the text.


vchizhov commented 4 years ago

@trevordblack

I think the implementation complexities of the three are comparable:

//1) Generates normal offset points in the unit ball - cos^3 distributed on the hemisphere
vec3 generate_cosine_cube_point_hemisphere(Sampler* sampler, const vec3& normal)
{
    vec3 in_ball = generate_uniform_point_ball(sampler);
    return normal + in_ball;
}

//2) Generates uniform points in the unit ball, with reflection - uniformly distributed on the hemisphere
vec3 generate_uniform_point_hemisphere(Sampler* sampler, const vec3& normal)
{
    vec3 in_ball = generate_uniform_point_ball(sampler);
    return dot(in_ball, normal) < 0.0f ? -in_ball : in_ball;
}

//3) Generates normal offset points on the unit sphere - cos distributed on the hemisphere
vec3 generate_cosine_point_hemisphere(Sampler* sampler, const vec3& normal)
{
    vec3 on_ball = normalize(generate_uniform_point_ball(sampler));
    return normal + on_ball;
}
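None of the snippets above define generate_uniform_point_ball; the book obtains such points by rejection sampling. Here is a minimal self-contained sketch of that idea (the vec3 stand-in and random_component helper are hypothetical placeholders for the book's own vec3 class and RNG utilities):

```cpp
#include <cstdlib>

// Minimal stand-in for the book's vec3 (hypothetical; use the book's class).
struct vec3 { float x, y, z; };

float length_squared(const vec3& v) { return v.x*v.x + v.y*v.y + v.z*v.z; }

// Uniform in [-1, 1)
float random_component() {
    return 2.0f * (std::rand() / (RAND_MAX + 1.0f)) - 1.0f;
}

// Rejection sampling: draw candidates uniformly in the [-1,1]^3 cube and
// keep the first one that falls inside the unit ball. Accepted points are
// uniformly distributed in the ball.
vec3 generate_uniform_point_ball() {
    while (true) {
        vec3 p{random_component(), random_component(), random_component()};
        if (length_squared(p) < 1.0f)
            return p;
    }
}
```

The loop accepts a candidate with probability (4/3)pi / 8, so on average it terminates after about two iterations.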

Theoretically, however, 1) and 3) are considerably harder to prove and to reason about intuitively (beyond blind trust that this is the case). The proof that they are correct actually requires more analysis background than generating the points through the CDF inversion method, so in a sense this is harder to show theoretically than what is presented in the third book. In any case, if 1) is preferred, the estimator should be divided by a factor of 2*cos^2(theta) if we want to recover the target brdf.

The cosine factor that will show up in the estimator when using uniform sampling should also be taken into account. I would argue that is a good thing, however, since the factor is currently implicit: it cancels out with the pdf.

petershirley commented 4 years ago

I don't see any big advantage of 1 over 3.

vchizhov commented 4 years ago

The first variant is what was initially in the book, and is the reason for the discrepancy between the first and third book (essentially it makes the Lambertian material actually a cos^2 material). The third variant is the correction of 1), producing the desired cosine distribution over the hemisphere (as opposed to cos^3). I proposed 2) since it is trivial to reason about without requiring a university-level mathematical background, while 1) and 3) may require some further mathematical knowledge (or at least I cannot think of a way to derive the results from intuition alone).

petershirley commented 4 years ago

So one issue is whether people need to understand that it is lambertian, as opposed to just some random scattering direction. The brushed metal is also presented as a hack. I certainly agree it is not obvious the spherical shell method is Lambertian... I continue to be on the fence about whether people need to understand it is Lambertian....

hollasch commented 4 years ago

Without fully grokking the theory behind the three approaches, I have to say that I'm very attracted to Trevor's set of three alternates, particularly given their brevity. I think it might be useful to include all three, include a very brief explanation of the approaches, and then just choose one to continue forward with. The first one has the advantage of computational efficiency, leading to faster iterations. All three are models of abstractions of models of abstractions.

This would be similar to the way that we address gamma correction — a very brief introduction, a useful hack for simple implementation (using sqrt), and then move on. These books form a framework for exploration, and I like the idea of leaving ample room for deeper investigation later, prioritizing simplicity and experimentation up front.

(By the way, with three alternate distributions I could easily plug in, I can almost guarantee I'd spend a pleasant hour cobbling up all sorts of random variants just to see what they look like.)
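The sqrt gamma hack mentioned above can be written as a one-liner. This is a sketch of the approximation for gamma 2.0 (the function name is hypothetical; it is not a full sRGB transfer function):

```cpp
#include <cmath>

// Gamma 2.0 approximation: raising to the power 1/2 is just a square root.
// Applied per color channel, with linear values in [0, 1].
float linear_to_gamma(float linear) {
    return std::sqrt(linear);
}
```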

petershirley commented 4 years ago

I love this idea! VC would you be willing to add a paragraph on the relative differences of the three methods?

vchizhov commented 4 years ago

Here's what I came up with, feel free to heavily modify it, or entirely throw it away:

The first method produces random points in the unit ball offset along the surface normal. This corresponds to picking directions on the hemisphere with high probability close to the normal, and a lower probability of scattering rays at grazing angles. This is useful since light arriving at shallow angles spreads over a larger area, and thus has a lower contribution to the final color.

The third method is similar to the first, with the probability still being higher for rays scattering close to the normal, but the distribution is more uniform. It is mathematically the ideal method for scattering rays off Lambertian surfaces. This is achieved by picking points on the surface of the unit sphere, offset along the surface normal. Picking points on the sphere can be done by picking points in the unit ball and normalizing those. Note the only difference between the first and third method: picking points **in** the unit ball, versus picking points **on** the unit sphere.

The second method generates directions in the upper hemisphere with equal probability. This is achieved by picking points in the unit ball, and reflecting those if they are not in the same hemisphere as the surface normal.

If this makes it into the book, I think it should be simplified further and made a lot less verbose, since one of the biggest pluses (at least for me) of the first book was its simplicity and conciseness. Maybe this can even just be put in a code snippet, with minimal comments.

I would also advise leaving the first method out, since while it saves a normalization, it produces a cosine cube distribution, which needs to be accounted for in some way (division by the pdf) - otherwise the rendered image will be using a cos^2 brdf (and not a Lambertian one). I believe the intent in the first book was to avoid introducing concepts such as pdfs, though I may be wrong. The same problem arises when picking between 2) and 3): if one chooses 2), we need to re-introduce the cosine term from the rendering equation in the estimator (it currently implicitly cancels out with the cosine pdf, which will not be the case for the uniform pdf).

More precisely, the 3 methods require a division by the following pdfs respectively (if the images are to be consistent between distributions):

// 1)
// pdf = 2 * (cos(theta))^3/PI

// 2)
// pdf = 1 / (2*PI), the reciprocal of the surface area of the unit hemisphere

// 3)
// pdf = cos(theta) / PI = dot(normal, scattered_dir) / PI

So whatever decision we make among these will propagate to the estimator, since currently the pdf division is implicit. 2) is easiest to motivate, because one can argue that the probability density of uniformly picking a direction on the unit hemisphere is 1/(2*PI) (with respect to the solid angle measure); the others would be hard to motivate beyond hand-wavy analogies if geometric and calculus proofs are not used.
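For concreteness, here is a hedged sketch (hypothetical helper names, not from the book) of the per-bounce estimator weight brdf * cos(theta) / pdf for a Lambertian brdf rho/PI under the uniform and cosine distributions. With cosine sampling the cosine cancels against the pdf, which is exactly why the division is currently implicit:

```cpp
#include <cmath>

const float PI = 3.14159265358979f;

// Per-bounce weight = brdf * cos(theta) / pdf for a Lambertian brdf rho/PI,
// under uniform hemisphere sampling (pdf = 1/(2*PI)).
float bounce_weight_uniform(float rho, float cos_theta) {
    float brdf = rho / PI;
    float pdf  = 1.0f / (2.0f * PI);
    return brdf * cos_theta / pdf;   // simplifies to 2 * rho * cos_theta
}

// Under cosine sampling (pdf = cos(theta)/PI), the cosine cancels and the
// weight is just the albedo rho -- the factor currently implicit in the book.
float bounce_weight_cosine(float rho, float cos_theta) {
    float brdf = rho / PI;
    float pdf  = cos_theta / PI;
    return brdf * cos_theta / pdf;   // simplifies to rho
}
```

The uniform weight shows the "cosine factor plus multiplication by 2" mentioned earlier in the thread; the cosine weight shows why nothing extra is needed when the cosine distribution is used.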

trevordblack commented 4 years ago

Created a draft. You can go look at it on tdb/diffuse. I'm not 100% happy with it, but it's a first draft.

I used a lot of @vchizhov's descriptions, and then motivated their understanding within the text.

It still needs a render with uniform hemispherical scattering.

Feel free to mercilessly tear it apart.

vchizhov commented 4 years ago

Just a minor correction: the distribution is cos^3 for the in ball sampling. One of the cosines cancels out with the cosine from the rendering equation, which leads to a cos^2 brdf (not distribution) - so we are effectively using a cosine squared brdf as opposed to a constant one (it's still isotropic though).

trevordblack commented 4 years ago

Ahhhh. Okay. My spidey sense was tingling on that one, and I wasn't sure if I had got it wrong. Thanks for the clarification.

vchizhov commented 4 years ago

I tested the uniform, cosine, and cosine cube distributions. If you do not take the pdfs into account, you obviously get wrong results. Here are some image differences to showcase this.

Difference between cosine (pdf taken into account) and cosine cube (pdf not taken into account) distributions: https://i.imgur.com/hfyEG1C.png

Difference between cosine (pdf taken into account) and uniform (pdf not taken into account) distributions: https://i.imgur.com/6YOgESO.png

Difference between cosine (pdf taken into account) and uniform (pdf taken into account): https://i.imgur.com/Fydjfit.png

Note that only the difference between the images where the pdf is correctly taken into account shows no significant discrepancy. The main difference there is the noise level (the cosine image obviously has much lower variance).

Cosine: https://i.imgur.com/G9lNNTZ.png

Uniform: https://i.imgur.com/dm23hBO.png

Unfortunately one cannot even produce a correct image without singularities with the cosine cube distribution (for a Lambertian brdf), because of the required division by cos^2, which can be close to zero (breaking floating point precision): https://i.imgur.com/RcFtRFZ.png
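To see where that singularity comes from, here is a hedged sketch (hypothetical helper name) of the per-bounce weight under the cos^3 distribution; the weight simplifies to rho / (2*cos^2), which diverges at grazing angles:

```cpp
#include <cmath>

const float PI = 3.14159265358979f;

// Per-bounce weight for a Lambertian brdf rho/PI under the cos^3
// distribution (pdf = 2*cos^3(theta)/PI). The result is rho/(2*cos^2),
// which blows up as cos_theta approaches zero (grazing angles).
float bounce_weight_cos_cubed(float rho, float cos_theta) {
    float brdf = rho / PI;
    float pdf  = 2.0f * cos_theta * cos_theta * cos_theta / PI;
    return brdf * cos_theta / pdf;   // = rho / (2 * cos_theta^2)
}
```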

One can think of this as introducing needless variance.

For comparison (in order to avoid the singularity, but show that the methods are equivalent), one can use a cos^2 brdf. In that case the cosine cube method is optimal (in terms of variance).

Cosine cube: https://i.imgur.com/XhawuRl.png

Cosine: https://i.imgur.com/FZdVsXc.png

Uniform: https://i.imgur.com/5GzACxY.png

Note the variance increasing from the cosine cube (ideal distribution for cos^2 brdf) to the uniform distribution.

trevordblack commented 4 years ago

We can't introduce the notion of pdfs into InOneWeekend. This analysis may be a valuable addition to RestOfYourLife.

vchizhov commented 4 years ago

Then one cannot have two different sampling strategies in the code (or it would require some hand-wavy arguments in the absence of any mention of a pdf). So either the cosine distribution should be chosen, or the uniform one. If the latter is chosen (which results in higher variance, as illustrated), then a cosine factor has to be added in the rendering equation (along with a multiplication by 2).

P.S. The above "analysis" is not meant to make it into the book, it's just there as an experimental confirmation to the theoretical results. And while some images may be useful for illustrative purposes, maybe it will be too much information.

trevordblack commented 4 years ago

The branch tdb/diffuse has been merged into the development branch.

I forgot to include the "Resolves #155" text in the final commit.

However, it may be prudent to keep this issue open until v3.0.0 is out.

hollasch commented 4 years ago

Rather than maintaining a mutating long-lived issue, if you are aware of any remaining deficiencies, please create a new focused issue describing these, reference this issue for history, and close this guy out. It's getting difficult to follow at this point.

trevordblack commented 4 years ago

I'm comfortable with this issue, and believe it to be closed.

@aras-p We're closing out this issue. You had some thoughts on rolling back your changes. If you have additional issues, please create separate GitHub issues.