ebruneton / precomputed_atmospheric_scattering

This project provides a new implementation of our EGSR 2008 paper "Precomputed Atmospheric Scattering".
BSD 3-Clause "New" or "Revised" License
908 stars · 120 forks

reduced visibility #13

Open gjaegy opened 6 years ago

gjaegy commented 6 years ago

Hi,

has anyone managed to get decent results when modifying betaM/betaMEx in order to simulate reduced visibility?

It works great until betaM gets too high (try 0.94 for instance; it should model a 3 km visibility distance), and to be honest, I am not sure why...

Maybe increasing betaM is not the right way of reducing visibility? Does anyone have any input on this?

Thanks a lot! Greg

mdkdy commented 6 years ago

Can you explain what you mean by "not decent" results? It seems that increasing kMieBeta above 0.2 also increases the horizon seam artifact; I don't know if this can be fixed. Or would you like to modulate fog with height? That would be nice, with some shadow-length raymarching.

ebruneton commented 6 years ago

If the visibility is reduced then you need to simulate more multiple scattering orders. With a mean free path of 3 km, on average 20 scattering events are needed to travel 60 km. The default number of scattering orders is 4 (cf. void Init(unsigned int num_scattering_orders = 4); in atmosphere/model.h).

That said, increasing the number of scattering orders to 20 or more will likely cause other issues: timeouts because the precomputations take too long (you would then need to split them over several frames), accumulation of approximation errors, etc. The algorithm is somewhat tailored to optically thin atmospheres, and I'm not sure it can handle thick ones.

gjaegy commented 6 years ago

Yes, basically as soon as the betaMie coefficient is increased, the horizon seam artifact becomes visible again, and it doesn't go away whatever the texture parametrization is (even with the one from Intel's paper). I haven't been able to solve it using the suggested precomputed approach.

I have managed to get rid of it eventually by doing the integration at runtime, per pixel. More precisely, I use a custom low-res version of the depth buffer that stores the min/max of all matching hi-res depth samples, compute the inscatter/transmittance through an integration for those two min/max depths, and interpolate for each hi-res depth sample. By carefully choosing the integration steps, I have managed to get decent results with a low number of steps (16/32). Performance is of course worse than with the original method, but we can afford that in our case.

I am however still using a precomputed table for multiple scattering only, since this component doesn't seem to exhibit any horizon artifact, and it allows me to skip the most expensive part of the integration process (so: single scattering from real-time integration, multiple scattering from the LUT).

However, I hadn't thought that the multiple-scattering computation would require more scattering orders; that's a good point. I think I am happy enough with the result I have (3 orders, I believe), but I will still try to see whether increasing the scattering order count makes any noticeable visual difference.

gjaegy commented 6 years ago

Indeed, increasing the scattering order count has a significant impact. It tends to overflow somehow once the visibility gets very low (i.e. the multiple-scattering values become too big).

Having said that, I wonder whether we couldn't improve the integration precision by using a non-linear integration step parametrization, and use that adaptive step length in the whole precomputation process. We know particles are mostly concentrated in the lower layers of the atmosphere, but I don't think that information is used in the original paper.

Not sure this parametrization can be found analytically (at least, not sure I could solve it :) ), but I am pretty sure a better solution is possible.

This would allow getting better results without having to increase the number of steps.

Any thoughts ?

gjaegy commented 6 years ago

For whatever reason, using a higher number of scattering orders (16 for instance, instead of 4) along with a high betaM value (changing the default 4·10^-6 to something like 7·10^-4) leads to an "overbrightened" multiple-scattering component (DeltaS values as high as 30.0 in the last order iteration). I have tried to debug the shader using the VS graphics debugger but it seems to freeze, so no luck so far... I switched to 32-bit float render targets to make sure half-precision RTs are not the issue.

Actually, the more steps I have, the brighter MS gets. It makes sense, I assume, but it really gets way too bright; could I have forgotten something?

[edit] The over-brightness can be solved by increasing the number of inscatter integration steps (INSCATTER_INTEGRAL_SAMPLES) used in the inscatter_multiple precomputation step (DeltaS generation). So again, I am pretty sure performance/quality could be improved by using non-linear integration...

[edit2] I confirm I managed to get decent results with 32 scattering orders using a simple squared interpolant (i.e. instead of x linearly interpolated over [0..1], I use x², which places the samples closer to each other at the front of the ray). I just need to split the precomputation over a few frames now.

mdkdy commented 6 years ago

@gjaegy Can you provide a hint on how you sampled MS without the seam? I integrated transmittance and SS (Rayleigh and Mie) analytically and it works fine. And after the integration loop, to complete this with MS:

vec3 MS = multiple_scattering - transmittance*multiple_scattering_p;
scattering = SS_rayleigh + MS;

where SS_rayleigh and transmittance are analytical, and multiple_scattering and multiple_scattering_p are sampled as before from the scattering texture at two points, but the texture now contains only the MS accumulated in the MS precomputation loop. Yet the sampled MS still has the same seam as before.

On the other hand, I experimented with precomputation on CPU with double precision and it doesn't seem to provide better results.

mdkdy commented 1 year ago

When kMieAngstromBeta is set to a high value like 0.3 to simulate reduced visibility, there is a big difference in the scattering results above the horizon (rays not intersecting the ground) and below it (rays intersecting the ground).

It seems that the reason for this is the vast difference in scattering integration distance: above the horizon the distance is very long, while below it is much shorter, yet SAMPLE_COUNT remains the same, so the dx step varies vastly. I am not sure if this is the issue the author had in mind. Setting a constant dx step like 400.0 and computing SAMPLE_COUNT from it in ComputeSingleScattering and ComputeMultipleScattering seems to fix this issue. Here is the difference: in the lower picture SAMPLE_COUNT is fixed, in the upper it is dynamic.

mate-h commented 1 year ago

Hello @mdkdy, thank you for sharing this result, it looks stunning. Could you share some implementation details for the dynamic SAMPLE_COUNT? I would love to have a chat with you to share some more knowledge.

Here is my result with some Reinhard tonemapping and the Torus Knot geometry with THREE.JS and WebGL: [screenshot, 2022-09-02] And raymarching an SDF with some map data: [screenshot, 2022-09-02]

mdkdy commented 1 year ago

@mate-h that's not complicated, for example:

// Total integration distance for this ray.
Length dist = DistanceToNearestAtmosphereBoundary(atmosphere, r, mu, ray_r_mu_intersects_ground);
// Clamp the step length, and derive the sample count from the distance.
Length dx = min(dist, 400.0);
int SAMPLE_COUNT = dx > 0.0 ? int(ceil(dist / dx)) : 0;

The horizon seam still becomes visible at very high altitudes when flying up with a high density, though.