BarthPaleologue / volumetric-atmospheric-scattering

A simple implementation of volumetric atmospheric scattering using babylonjs.
https://barthpaleologue.github.io/volumetric-atmospheric-scattering/dist
Apache License 2.0

Multiple issues #6

Closed Yincognyto closed 1 year ago

Yincognyto commented 1 year ago

First, let me congratulate and at the same time thank you for your work - being relatively simple (compared to other scary shaders) and done in BabylonJS, it was pretty much the only example I could use to port to the similar ThreeJS. I hope that's OK; I'll make sure the proper credits and license (still not sure if more is needed) are present in the final work, when and if it is completed. For the record, my use case is including it in an Earth skin I made for the free Rainmeter software, where I can use web technology via a WebView2-capable plugin (though the code will be able to run in a local server webpage as well).

That being said, here are the issues I noticed with your shader:

1) This has already been mentioned elsewhere, but the random noise here, while creating a reasonable effect for thicker atmospheres, makes everything grainy / pixelated when the atmosphere is thinner (around 0.02 of Earth's radius, like in reality, i.e. an atmosphere radius of 51 to a planet radius of 50) - as usual, the squared artefacts are solved by increasing the sphere segment count in ThreeJS too, so I multiplied the noise value by 0.0, in effect removing it, and it's just fine: (image: Atmosphere - Grain)

2) Apparently, if you set the BabylonJS loop to run just once, simulating a single frame or a stopped animation, by commenting lines 117 and 123 from here, stuff gets messed up (more on that later on). I'm not an expert in BabylonJS by any means, so it may be a setting or something that I'm not aware of, but again, it doesn't happen in my nearly identical ThreeJS implementation: (image: Atmosphere - Frame)

3) The most important thing, for which I hope there is a reasonable solution or at least a hint on how to correct it, is that when choosing somewhat more extreme values for camera far and near (and, of course, adjusting the radii, camera position and so on accordingly), something similar to the image above, or downright an invisible or circularly truncated atmosphere, happens. Now I realize that BabylonJS probably sets these values automatically, but in case the user wants to modify them, or, like me, has to set them to realistic values, e.g. planetRadius = 6371, atmosphereRadius = planetRadius * 1.02, lookFrom = 42164, cameraFOV = 17, cameraNear = 0.000001, cameraFar = 720000 * planetRadius, corresponding to a real-sized Earth in km, geosynchronous orbit, the planet occupying the full view, a minimal near plane and a frustum able to fully include a back-faced celestial sphere, the unwanted surprise of not having a visible atmosphere happens. My guess is that this is related to either the depth or the world coordinate calculation and their usage in the other functions, but I've not been able to identify exactly where - somewhere along the line, non-unit values of the camera near (and consequently, different camera distances from the Earth) affect the result, and you can notice this even when zooming out far enough. These can be tested even without modifying the camera near, by using the following in the shader here, instead of this:

// compute the world position of a pixel from its uv coordinates and depth
vec3 worldFromUVDepth(vec2 pos, float posdepth)
{
  vec4 wpos = inverse(view) * inverse(projection) * (vec4(pos, posdepth, 1.0) * 2.0 - 1.0);
  return wpos.xyz / wpos.w;
}

The function, while it may not be exactly what is needed in this case, replicates the "stages" the atmosphere goes through, based on the camera distance and the near value. I feel the solution is within reach, some proportionality (or lack of it) to the (camera position - near) thing, but maybe you know better.

P.S. If you ever think of doing a ThreeJS version of this, the way to solve a logarithmic depth buffer (not sure if it exists in BabylonJS) is like this. Or, I can directly share a fiddle with my simple port to it, if you like. Cheers!

BarthPaleologue commented 1 year ago

Hi there! Thank you for the kind words, I'm glad this shader can be useful to you :) For the credits and license, I'm pretty sure there is no problem as long as my name is written somewhere haha.

It makes sense that increasing the geometry detail on the sphere solves the issue of the visible tiling, but it will cost you performance-wise. One faster solution to this problem is to calculate the distance through the atmosphere using a ray/sphere intersection function (I might update the shader to include it anyway) rather than the depth buffer of the camera. You would get a perfectly rounded sphere without artefacts at a lower cost. But if increasing the geometry is not a problem then you don't need to do anything else indeed ^^
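For reference, a minimal sketch of such a ray/sphere intersection (a standard quadratic solve; the names here are placeholders, not the shader's):

// returns true if the ray hits the sphere; t0/t1 are the near/far hit distances (rayDir assumed normalized)
bool raySphereIntersect(vec3 rayOrigin, vec3 rayDir, vec3 center, float radius, out float t0, out float t1) {
    vec3 oc = rayOrigin - center;
    float b = 2.0 * dot(rayDir, oc);
    float c = dot(oc, oc) - radius * radius;
    float d = b * b - 4.0 * c; // discriminant (a = 1 since rayDir is normalized)
    if (d < 0.0) return false; // the ray misses the sphere
    float s = sqrt(d);
    t0 = (-b - s) * 0.5;
    t1 = (-b + s) * 0.5;
    return t1 >= 0.0; // at least the far hit lies in front of the ray origin
}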

For the second point, I would say the problem in BabylonJS is that the depth buffer for the first frame is not initialized properly (I have made a mistake in my code, I will fix it) and therefore the shader does not account for the presence of the planet. It makes sense that this issue is not present in ThreeJS with a properly initialized buffer.

The last point is indeed the harder one. As far as I know, when setting cameraNear to a very small number, the projection matrix becomes imprecise (because of the formula: https://gamedev.stackexchange.com/questions/120338/what-does-a-perspective-projection-matrix-look-like-in-opengl) and it creates visual artefacts because of the precision issues. I will try to iron out a few things to see if I can make it work with the code you gave me. Still, I don't get why a user would want such a low cameraNear if you only want to display the planet on the whole screen from afar.
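To make the precision argument concrete (my summary of the linked answer, using GLSL's column-major P[column][row] indexing):

// depth-related entries of the standard OpenGL perspective projection matrix:
//   P[2][2] = -(far + near) / (far - near)
//   P[3][2] = -2.0 * far * near / (far - near)
// As near approaches 0, the hyperbolic depth mapping crams nearly the whole
// [0, 1] range into the region just in front of the near plane, starving
// everything distant of z-buffer precision.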

If you are using a logarithmic depth buffer (sadly there is no log z-buffer for postprocesses in BabylonJS as I write this), then cameraFar should pose no more issue beyond the problems with cameraNear.
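For reference, a sketch of the three.js-style forward mapping such a buffer stores; it is the inverse of the logarithmicToStandardDepth function posted further down in this thread:

// logarithmic depth as a function of the view-space w (three.js-style)
float logarithmicDepth(float w, float far) {
    return log2(1.0 + w) / log2(far + 1.0);
}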

Setting the FOV to 17 is quite small and it will magnify the imprecision issues unfortunately. But if you really need to set it to 17, then I think the problem boils down to cameraNear issues and the precision of your z-buffer.

I will look into it further this week and tell you what I find, I hope I can come up with a good solution :)

Thank you for the logarithmic depth buffer link, I might port it to ThreeJS after all :)

BarthPaleologue commented 1 year ago

Okay, I found why your updated worldFromUVDepth is slightly off. The z component is distorted, let me explain.

Using my formula, I compute the position of the pixel on the near plane and then I use this position in camera space, scaled by the depth, to know the maximum distance in the scene for occlusion. Note that the length of the position in camera space is not constant, as the frustum has a pyramidal shape.

(image: frustum diagram)

The consequence is that, using my formula, a plane in NDC will result in a plane in camera space. This is not preserved when unprojecting the depth directly, as the transformation is non-uniform and causes a distortion that explains the error.

Here is the formula corrected to account for the non-uniform scaling effect:

vec3 worldFromUVDepth(vec2 UV, float depth) {
    vec4 ndc = vec4(UV * 2.0 - 1.0, 0.0, 1.0); // pixel position in NDC, on the z = 0 plane
    vec4 posVS = inverseProjection * ndc; // unproject into view space
    posVS.xyz *= remap(depth, 0.0, 1.0, cameraNear, cameraFar); // scale by the view-space distance the depth encodes
    vec4 posWS = inverseView * vec4(posVS.xyz, 1.0); // view space to world space
    return posWS.xyz;
}

Notice how I introduce the depth only later in the calculation, in a similar way to my previous method. This formula is way cleaner I think and takes the best of both our worlds.
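(For completeness, the remap used above is not shown in the snippet; it is assumed to be the usual linear remapping:)

// linearly remap value from [low1, high1] to [low2, high2]
float remap(float value, float low1, float high1, float low2, float high2) {
    return low2 + (value - low1) * (high2 - low2) / (high1 - low1);
}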

Unfortunately I don't think it will make a difference regarding your issues with cameraNear, but you tell me :+1:

Reference: https://stackoverflow.com/questions/17751822/do-normalized-device-coordinates-with-the-same-z-value-lie-in-a-plane

Yincognyto commented 1 year ago

Thanks for answering!

One faster solution to this problem [...] I might update the shader to include it anyway [...] a perfectly rounded sphere

If that solution involves making the shader more complex than it is, or is incompatible with having terrain on the sphere, maybe it would be better to reconsider. Personally, I love that it's relatively easy to understand and it works with displaced terrain.

The last point is indeed the harder one [...] to see if I can make it work with the code you gave me [...]

Hm... I don't know. For the record, the code I posted is not necessarily the one to make it work with; it was meant more as a test function to better show the effects I was talking about (you correctly identified this above, before I had the chance to explain). I still use your code for the world position and it works fine, with the exception of setting a non-unit camera near value. Basically, it works as if the near value were always 1, which in practice is not always the case.

So, even though you know best, I doubt it's about precision, since it produces the same effect when setting the camera near to values greater than 1 (e.g. 2 and such), and precision shouldn't be an issue then as far as I can tell. I think it's more about the remapping part (not necessarily the function itself, which is correct) and the influence the camera-to-near distance has on the result. It acts as if the depth or world position change when changing the near-to-camera distance; they do, of course, but in the code you'd want the values you use to stay the same in order to produce the correct effect (that's why I mentioned proportionality, or the lack of it, earlier). How you do that is your choice, but obviously it should come naturally and not as a way to "cheat" the initial values.


As for ThreeJS, here is a starting point (only the 3 functions regarding the logarithmic part before main() are added). You can easily test what I mentioned earlier when it comes to the camera near values. Hopefully it will be of some use for solving the issue. I had a terrain generation function based on the same height map and some camera boundary / collision code as well, but for the sake of simplicity, I removed them from this version.

Yincognyto commented 1 year ago

Here is the formula corrected to account for the non-uniform scaling effect:

vec3 worldFromUVDepth(vec2 UV, float depth) {
    vec4 ndc = vec4(UV * 2.0 - 1.0, 0.0, 1.0);
    vec4 posVS = inverseProjection * ndc;
    posVS.xyz *= remap(depth, 0.0, 1.0, cameraNear, cameraFar);
    vec4 posWS = inverseView * vec4(posVS.xyz, 1.0);
    return posWS.xyz;
}

Yes, your original function (worldFromUV) was the way to go. This one, since it's more or less based on the function I posted simply to better show the unwanted effects, is not what we need. Sorry for the misunderstanding.

Yincognyto commented 1 year ago

Unfortunately I don't think it will make a difference regarding your issues with cameraNear, but you tell me 👍

Okay, it seems I got somewhat closer. If I change your original worldFromUV(vec2 pos) function:

vec3 worldFromUV(vec2 pos) {
    vec4 ndc = vec4(pos.xy * 2.0 - 1.0, -1.0, 1.0); // get ndc position; -1 because I want every point in the near camera plane
    vec4 posVS = inverse(projection) * ndc; // unproject the ndc coordinates: we are now in view space if I understand correctly
    vec4 posWS = inverse(view) * vec4((posVS.xyz / posVS.w), 1.0); // then we use inverse view to get to world space, division by w to get actual coordinates
    return posWS.xyz; // the coordinates in world space
}

to this (notice the progression in the first comment):

vec3 worldFromUV(vec2 pos)
{
  vec4 ndc = vec4(pos.xy * 2.0 - 1.0, - (cameraNear * 2.0 - 1.0), 1.0); // Near=1 Z=-1, Near=2 Z=-3, Near=3 Z=-5, Near=4 Z=-7, Near=5 Z=-9, etc.
  vec4 posVS = inverse(projection) * ndc; // unproject the ndc coordinates : we are now in view space if I understand correctly
  vec4 posWS = inverse(view) * vec4((posVS.xyz / posVS.w), 1.0); // use the inverse view to get to world space, division by w to get actual coordinates
  return posWS.xyz; // the coordinates in world space
}

it's noticeably better. The atmospheric outcome isn't identical for different camera near values, as can be seen below (maybe some other function has to be adjusted as well, or a better formula should be used?), but it's along the lines of what should be expected, with the changes much less obvious than before, when the atmosphere was missing almost entirely: (image: Camera Near VS WorldFromUV2) So, it's more or less about including the camera near value in certain calculations. The result will still change with depth (e.g. when zooming in or out), so depth should probably be part of such formulas too, in order for the result (the one from outside the atmosphere, as far as I can tell) not to change for different camera near / depth / zoom values.

  • Since we're at it, the fourth issue is blinking when zooming out far enough and spinning the earth with the mouse. It's similar to z-fighting, but it can't be that, since it's present even when I set logarithmic depth to true in the ThreeJS implementation (which should avoid the z-fighting). Not an overly serious issue, but I thought you should know about it.

P.S. Of course, this should NOT be committed to the code at this stage.

BarthPaleologue commented 1 year ago

Hello again, I had time to try a few things on my end! I tried to reproduce your issue with the values you gave me. I'm sorry I did not do it sooner.

Here is what I did, following your first message:

Using a 32-bit linear depth buffer, I had to remove the clouds because of z-fighting, leaving only the planet and the atmosphere. I got this result:

(image: screenshot_23-1-31_19-12)

So I think the issue might come from your ThreeJS implementation rather than from my shader (I'm using the updated version from yesterday; I changed a few minor things, but it could be it).

One thing I can think of is that my planet is at the center of my scene, which can reduce a few floating-point imprecisions; maybe try keeping your earth at the center.

Therefore I don't think we need to hack the unprojection formula, as setting a clip-space z coordinate < -1 would result in some of the frustum not being rendered properly.

I hope it helps, and if I missed anything, please tell me :+1:

Yincognyto commented 1 year ago

You're absolutely right, your adjusted shader almost solved the camera near issue (aka issue number 3) - very well done, and a ton of thank-yous! The "almost" is because - even though I'm not sure how they're connected to it - a few problems remain:

a) the edge between the land and the atmosphere now shows a band, when setting the terrain; this doesn't happen if there's no displaced terrain, and it didn't happen in the previous / original implementation; it's almost like the shader misses the said edge by some value and starts coloring the image while still "on the ground"; I know that you wrote in the code comment not to use the final if when dealing with landmasses, but commenting that just brings us back to the truncated atmosphere all over again (left: new shader with terrain; right: new shader without terrain): (image: Atmosphere Shader - Terrain Band)

b) for some reason, while you were right that it now works right off the bat (even in my ThreeJS implementation - no additional depth or world-from-UV functions are needed now, nor for the logarithmic depth buffer, so a big win), when testing with the realistic values you mentioned above, the color of the atmosphere is affected; after some debugging, it turns out that setting the planet radius to around 100 times less (say, 63.71 instead of 6371; near and far don't seem to matter here) and of course zooming in so it can be seen alleviates the issue; can you see if you can replicate this radius-proportional tinting with bigger radii in BabylonJS, because I couldn't tell 100% from your screenshot above (left: planet at 6371 radius; right: planet at 63.71 radius, zoomed in; same settings used otherwise): (image: Atmosphere Shader - Radius Tint)

Other than that, everything seems to work brilliantly; I don't mind some very discrete z-fighting in the top right image above, the important part is that the atmosphere is drawn entirely and color-accurately in the vast majority of cases now. Now I don't even have to worry whether the logarithmic depth buffer is on or off, or use the worldFromUV() variations in my ThreeJS implementation (the last formula had some truth in it, see this, this and this, since it was just reversing, via a minus, the basic NDC formula in the case of Z, to bring stuff into the near plane as in the comment). Those functions were needed for the earlier version of the shader, to prevent a truncated or invisible atmosphere, but now with your updates things are handled very well indeed. By the way, the planet was already at the origin in my code and I was already using {type: THREE.FloatType} (I'm guessing, a maximum-precision depth texture buffer in ThreeJS), so I had taken precautionary measures in that regard.

Big like from me on your work! I'm curious if you have any ideas for the last two issues that remain. Hopefully it won't be about the lack of precision (again)... :)

P.S. No need to apologize for anything, I was busy as well investigating what can be done about these things, in my own rather rudimentary way.

BarthPaleologue commented 1 year ago

I will investigate a) more deeply, can you share with me the parameters that produced the white band artefact?

As for b), I can help you, because it is an issue I ran into myself last year. Basically, the problem comes from the integration part of the shader (the for loops sampling the density of the atmosphere). As the atmosphere gets bigger, the number of sample points stays constant for performance reasons. At large scales, it breaks down and changes the shade of the atmosphere.

I'm currently working on a look-up table that would remove the need for the realtime integration and would solve the problem, but it's far from ready.

So I have a few temporary fixes for this.

  • Something that works even better I would say, but is more ugly from my point of view, is changing the densityAtPoint function to something like:

float densityAtPoint(vec3 samplePoint) {
    float heightAboveSurface = length(samplePoint - planetPosition) - planetRadius;

    float height01 = heightAboveSurface / (atmosphereRadius - planetRadius); // normalized height between 0 and 1

    // FIXME: this should not be a thing
    height01 = remap(height01, 0.0, 1.0, 0.4, 1.0);

    return densityModifier * exp(-height01 * falloffFactor); // density with exponential falloff
}

This is what I use for my own planets and it gives a convincing result, but be aware you will need to tweak it for every size change, so it might not fit your use case if you let your user decide the size of the planet, sadly.

Since you took care of the precision of the depth buffer, precision should no longer be considered an issue haha

I will update you when I get to the bottom of issue a)

Yincognyto commented 1 year ago

I will investigate a) more deeply, can you share with me the parameters that produced the white band artefact?

Apart from the size, distance, camera and some basic scale parameters I use in my implementation, I never change the other (atmospheric shader) parameters, really - they stay at their defaults (i.e. falloff = 15.00, sunint = 15.00, scatter = 1.00, densmod = 1.00, rwave = 700.00, gwave = 530.00, bwave = 440.00).

Unfortunately, I can't provide a BabylonJS example, because I'm obviously more familiar with ThreeJS, so I work and test there; but from my previous tests that led me to open this issue (or issues) here, as far as the shader is concerned, the results should be nearly identical (the depth thingy was the exception, because of the functions I had to add to make the previous shader version work in ThreeJS - which is no longer the case, thanks to your new shader version). That's why I wondered if you could replicate it in BabylonJS.

I've forked my previous no-terrain fiddle into a terrain-enabled one to help you better visualize and understand what happens - you can find it here (I still invert the matrices in the shader and use camPosition instead of cameraPosition because of a name conflict with a ThreeJS-provided default uniform, but other than that it's unchanged compared to GitHub). Just left-drag centrally from bottom to top to spin the "earth" with the "south pole" facing you, right-drag downwards to pan the "planet" properly, and then scroll the mouse wheel up to zoom in until you enter the atmosphere tangentially and see the horizon closer; the band thing should be obvious then. It's proportional to the atmosphere radius, and you can see that if you change the global scale variable in the fiddle to, say, 99. The terrain is otherwise just textbook vertex displacement, based on the same texture used as a height map.

As for b), I can help you, because it is an issue I ran into myself last year. Basically, the problem comes from the integration part of the shader (the for loops sampling the density of the atmosphere). As the atmosphere gets bigger, the number of sample points stays constant for performance reasons. At large scales, it breaks down and changes the shade of the atmosphere. [...]

  • Something that works even better I would say, but is more ugly from my point of view, is changing the densityAtPoint function to something like:
float densityAtPoint(vec3 samplePoint) {
    float heightAboveSurface = length(samplePoint - planetPosition) - planetRadius;

    float height01 = heightAboveSurface / (atmosphereRadius - planetRadius); // normalized height between 0 and 1

    // FIXME: this should not be a thing
    height01 = remap(height01, 0.0, 1.0, 0.4, 1.0);

    return densityModifier * exp(-height01 * falloffFactor); // density with exponential falloff
}

This is what I use for my own planets and it gives a convincing result, but be aware you will need to tweak it for every size change, so it might not fit your use case if you let your user decide the size of the planet, sadly.

I was hoping it wouldn't be an integration problem; personally, I want to avoid look-up tables if possible, since I value flexibility, but anyway - I was not aware that they could "fix" things, I only knew that they improved performance via static values. Technically, I don't let the user change the planet size in the "final product" (which is yet to be final, just like you mentioned, haha), but I do change it myself to make sure things work in every scenario (which is something I care about when I do something). Thanks for the alternatives, I will try adjusting the said function and keep you posted on how and if it works!

I will update you when I get to the bottom of issue a)

Alrighty then ;) Take your time to see things clearly, there's no rush. I've been attempting what you did for a while (and in the meantime I already had a pretty decent alternative based on juggling color gradients based on the normals, the sun and the view directions on an atmospheric mesh system), since a physically based approximation should increase realism dramatically; but until I found your work, all the other solutions were either too complex, hard to find and / or understand, made for other environments (like Sebastian Lague's videos for Unity), or involved multiple shaders for ground, sky, or space. Your shader hits just the right spot in these respects; once things are drawn properly for every scenario, I need to spend some time wrapping my head around everything it does (and why it does it), as that might help in finding solutions to make it even better or fix whatever issues remain.

Yincognyto commented 1 year ago

As for b), I can help you, because it is an issue I ran into myself last year. Basically, the problem comes from the integration part of the shader (the for loops sampling the density of the atmosphere). As the atmosphere gets bigger, the number of sample points stays constant for performance reasons. At large scales, it breaks down and changes the shade of the atmosphere. [...]

  • Something that works even better I would say, but is more ugly from my point of view, is changing the densityAtPoint function to something like:
float densityAtPoint(vec3 samplePoint) {
    float heightAboveSurface = length(samplePoint - planetPosition) - planetRadius;

    float height01 = heightAboveSurface / (atmosphereRadius - planetRadius); // normalized height between 0 and 1

    // FIXME: this should not be a thing
    height01 = remap(height01, 0.0, 1.0, 0.4, 1.0);

    return densityModifier * exp(-height01 * falloffFactor); // density with exponential falloff
}

[...] but be aware you will need to tweak it for every size change, so it might not fit your use case if you let your user decide the size of the planet, sadly.

Actually, it happens at smaller scales too, something I forgot to mention earlier. So, it doesn't matter that it's colored fine at, say, radius 55; it will become "tinted" both when decreasing and increasing the radius, obviously in opposite directions. Without pretending it's perfect, because I agree with you that such things are ugly (this is what I referred to when talking about "cheats" in one of my earlier replies, by the way), this is a reasonable approximation of what the result should be, without requiring manual changes to the code each time the radius is modified (of course, it can be refined if needed):

float densityAtPoint(vec3 densitySamplePoint)
{
    float referenceRadius = 55.0; // the planet radius at which the atmosphere looks as it should
    float heightAboveSurface = length(densitySamplePoint - planetPosition) - planetRadius; // actual height above surface
    float height01 = heightAboveSurface / (atmosphereRadius - planetRadius); // normalized height between 0 and 1
    height01 = remap(height01, 0.0, 1.0, log(planetRadius / referenceRadius) / (6.5 * log(10.0)), 1.0); // fix
    float localDensity = densityModifier * exp(-height01 * falloffFactor); // density with exponential falloff
    localDensity *= (1.0 - height01); // make it 0 at maximum height
    return localDensity;
}

I used 55 as the "reference radius" of the planet that allows the atmosphere to be colored properly, based on just my eye, so I could be wrong, but it can be set to other values as well, according to the user's preference (e.g. if it's passed as a uniform called referenceRadius from JS - which would be strange, of course, but then, it's certainly better than manually modifying the code each time the radius happens to be different). The lower limit of the remap function is basically:

log(planetRadius / referenceRadius) / 6.5

where log() is the base-10 logarithm (GLSL has no base-10 logarithm function, so I had to use the log(base, x) = ln(x) / ln(base) identity to replicate the outcome, hence the slightly more extended formula in the shader). It's not 100% accurate yet, since there are still some very subtle differences of, say, max 5% in the shading, which probably can be accounted for or controlled from somewhere else; but in essence, plotting 5 points of what seemed to me the right values of the remap's lower limit for 5 different values of the radius helped me simulate the graph of what looked to be close to the said logarithmic function. If you're wondering, the 6.5 should not be 2 * PI; I already tried that and it was a bit off.
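For reference, a minimal helper built on that identity:

// base-10 logarithm via change of base; GLSL provides only log() (natural) and log2()
float log10(float x) {
    return log(x) / log(10.0);
}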

P.S. I didn't try to see how the rest of the parameters passed as uniforms (e.g. density falloff and such) affect the result, but it would be logical for the 6.5 value to then be a bit different too. Anyway, let's call the b) issue partially solved (in a hacky way, obviously). OR, now that I think about it better, maybe the planet radius does in fact influence the color of an atmosphere like in the original / unhacked result, since, after all, both the light and the camera rays do pass through more gas on a larger planet - it just happens that the expected outcome suits a certain radius better (which one, I don't know, but it should be Earth's real one)... too bad we don't have another actual example besides Earth to know for sure.

BarthPaleologue commented 1 year ago

I just tested your log transform of height01 and I must say it works like a charm! (I even tried changing the 6.5 to 7 and it worked all the way up to a radius of 5000e3 haha) I will include it in the code and add a link to this issue as well. I don't think we will be able to solve issue b) better without the lookup texture. I don't think the look-up texture would make us lose a lot of flexibility; there is potential to apply the falloff factor after the integration (I need to work out the maths of this lol).

I don't think the unhacked result is very realistic. In our solar system the atmospheres do not behave in such a way, as far as we know (the biggest planet, Jupiter, is 11x the size of Earth).

I didn't have much time to investigate issue a), maybe this weekend. The fiddle you shared will help a lot! I will keep you updated.

BarthPaleologue commented 1 year ago

One thing I tried, though, was commenting out the if conditioning the perfect sphere intercept and setting camera near and camera far to your situation with a large earth (5000), and I could not see any truncated atmosphere (it changed basically nothing beyond the small cells in the atmosphere due to the sphere geometry). The white line we see is 100% the small gap between the perfect sphere and the displaced geometry; the rays stop early in the atmosphere, creating our issue.

I started checking your fiddle, and the depth is coherent with the computations of the shader. The screen-to-world transformation yields a point that is exactly as far as it should be given the depth. The rayDir is also correct.

If using the exact calculation gives such a different result, it must be that maximumDistance (computed using the depth) and the exact ray/sphere computation are off by a lot. So what I did is multiply the maximumDistance by a small amount (see this fiddle https://jsfiddle.net/0rvbs1nx/10/). And it kinda works... until you move and it breaks.

So I'm not sure, but there must be something different about how Three handles depth, maybe in a non-linear way (BabylonJS has a special option for non-linear depth values, which is not enabled by default nor in my simulation). It's most definitely coming from the depth buffer.

I don't know much about ThreeJS sadly, but I feel the solution must be really close now.

Yincognyto commented 1 year ago

I just tested your log transform of height01 and I must say it works like a charm! [...] I didn't have much time to investigate issue a), maybe this weekend. The fiddle you shared will help a lot! I will keep you updated.

Glad to be able to help out, as much as I can! Regarding the hack and the atmosphere color in relation to the size of the planet, Jupiter kind of supports that idea, since its color is made of shades of white, brown, yellow and red (which also appear when increasing the size of the atmosphere in the shader for a regular planet). Sure, Jupiter's atmosphere has a different chemical composition compared to Earth's, but if the former's color is also dependent on the size of the planet (due to the greater lengths the rays have to pass through), then it would explain the effect. Anyway, I'm not a physicist or astronomer, so I'm not 100% sure.

One thing I tried, though, was commenting out the if conditioning the perfect sphere intercept and setting camera near and camera far to your situation with a large earth (5000), and I could not see any truncated atmosphere (it changed basically nothing beyond the small cells in the atmosphere due to the sphere geometry). [...] So I'm not sure, but there must be something different about how Three handles depth, maybe in a non-linear way (BabylonJS has a special option for non-linear depth values, which is not enabled by default nor in my simulation). It's most definitely coming from the depth buffer.

I don't know much about ThreeJS sadly, but I feel the solution must be really close now.

Yeah, now you understand why I had to use the functions I used in the previous version of the shader - precisely to linearize and manipulate the result so that it matches BabylonJS' and yields the same outcome. I don't want to bother you with ThreeJS' quirks if the outcome is accurate in BabylonJS regardless of the camera's near and far, the planet's radius, or whether there is terrain on the planet or not (which is, and should be, the whole point of this topic), so I won't push in that direction. Unfortunately, just like you with ThreeJS, I'm not enough of an expert in BabylonJS (or the inner workings of your shader yet, for that matter) to be able to investigate the results in both environments properly and come up with a solution that works the same way in both.

That being said, on the practical side, this is what we know when it comes to BabylonJS and ThreeJS and the issues:

So, I share your opinion that it's close to a solution. Regarding the depth, the world position and how they influence the outcome, it has to do with how linearizing the depth affects the computations, and with parts of the code in the new version of the shader. Basically, some parts work with the original depth taken from the texture, others need the linear depth. For example, assuming the packing shader chunk is imported at the top of the shader via #include <packing>, this is how depth is linearized in ThreeJS (these are the functions I used in the old version of the shader; the new one doesn't need them, bar the terrain):

float logarithmicToStandardDepth(float logDepth, float near, float far)
{
  return pow(2.0, logDepth * log2(far + 1.0)) - 1.0;
}
float standardToPerspectiveDepth(float stdDepth, float near, float far)
{
  return far / (far - near) + far * near / (near - far) / stdDepth;
}
// get the depth at pixel coordinates from the depth texture (https://github.com/mrdoob/three.js/issues/23072)
float linearDepth(sampler2D deptharray, vec2 coordinates, float near, float far)
{
  float depth = texture2D(deptharray, coordinates).x;
  #if defined(USE_LOGDEPTHBUF) && defined(USE_LOGDEPTHBUF_EXT)
    depth = standardToPerspectiveDepth(logarithmicToStandardDepth(depth, near, far), near, far);
  #endif
  return viewZToOrthographicDepth(perspectiveDepthToViewZ(depth, near, far), near, far);
}

Now, something like this happens when defining depth at the beginning of the main() part of the shader:

  // Works with big radius and extreme camera near and far, doesn't work with terrain, but truncates if the DO NOT USE part is commented
  float depth = texture2D(depthSampler, vUV).r; // the depth corresponding to the pixel in the depth map
  // Doesn't work with big radius and extreme camera near and far, but works with terrain if the DO NOT USE part is commented
  float depth = linearDepth(depthSampler, vUV, cameraNear, cameraFar); // the depth corresponding to the pixel in the depth map

Switching between these two ways and big / small radius illustrates the effects. I'll keep looking for a solution.

BarthPaleologue commented 1 year ago

I know the issue with the planet radius was also a thing in the previous version, the changes did not impact it.

The functions you use to unpack the depth are quite weird to me, but that's ThreeJS, so maybe it makes sense. If I understand correctly, the last function returns an orthographic depth, and I'm not sure that is what we want. In BabylonJS I only deal with the perspective depth, so it might be part of the issue.

The standardToPerspectiveDepth one is also weird to me: if it were a remap from the range [0,1] to [cameraNear, cameraFar], it would be more like cameraNear + depth01 * (cameraFar - cameraNear).

Yincognyto commented 1 year ago

I know the issue with the planet radius was also a thing in the previous version, the changes did not impact it.

Ah, I see - thanks for letting me know.

The functions you use to unpack the depth are quite weird to me, but that's ThreeJS, so maybe it makes sense. If I understand correctly, the last function returns an orthographic depth, and I'm not sure that is what we want. In BabylonJS I only deal with the perspective depth, so it might be part of the issue.

The standardToPerspectiveDepth one is also weird to me: if it were a remap from the range [0,1] to [cameraNear, cameraFar], it would be more like cameraNear + depth01 * (cameraFar - cameraNear).

Yes, they are indeed. I agree with what you said, and so far it has been confirmed in practice as well: I tried using those after I started yesterday to follow Sebastian Lague's video step by step, and they didn't work the way I expected. Basically, to get what you named the maximum distance, I had to use -perspectiveDepthToViewZ(texture2D(depthSampler, vUV).x, cameraNear, cameraFar) in ThreeJS, which is more or less the equivalent of length(pixelWorldPosition - cameraPosition) or distance(pixelWorldPosition, cameraPosition) in GLSL. Obviously, one of the latter two is the way to go, since they work irrespective of the library one uses.
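For reference, a minimal sketch of that substitution (perspectiveDepthToViewZ comes from three.js's packing chunk; the uniform names are the ones used in the earlier snippets):

// view-space distance along the camera axis, recovered from the perspective depth texture
float maximumDistance = -perspectiveDepthToViewZ(texture2D(depthSampler, vUV).x, cameraNear, cameraFar);
// note: this matches length(pixelWorldPosition - cameraPosition) exactly only on the view axis;
// off-axis the two differ by a perspective (cosine) factor, hence "more or less the equivalent"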

After a good part of today during which JSFiddle was down where I live, I'll continue trying to follow Lague's code from his video tomorrow and see if it offers some clues on how to solve some of the issues that still exist in my ThreeJS implementation (even though the video is not without its mistakes either). I will keep you posted on what I find, but if your BabylonJS implementation is now working as it should in all cases, then you don't need to worry about these things too much. Who knows, maybe I'll understand why and if the radius issue happens, or discover some things that might help your project too.

BarthPaleologue commented 1 year ago

I'm sure you will find a solution :+1: I will keep this issue open until then. The explanation of why it doesn't work will be really interesting for sure. Thank you for your help again ;)

Yincognyto commented 1 year ago

I'm sure you will find a solution 👍 I will keep this issue open until then. The explanation of why it doesn't work will be really interesting for sure. Thank you for your help again ;)

Just a small heads-up, since I didn't reply to you right away. I understood Lague's video quite quickly once I started to follow every step he did (when JSFiddle came back online the next day), and understood the principles behind the whole thing easily - I guess I was put off by the "scary formulas" and notations before, when I read through some other papers or looked through some other code.

A brief summary of what I found (the practical solutions to come in the following days, once I set them all up properly):

That's all for now, I'll get back to you with some code later on. Thanks for keeping the issue open for me even though you didn't have to, and for all your help, which made me better understand the rest while following Lague's tutorial. Hopefully my next reply will conclude things in a satisfactory manner so you can close this, as would be normal. ;)

BarthPaleologue commented 1 year ago

I'm glad you fixed it! I don't understand why ThreeJS needs the orthographic depth though, I wouldn't have guessed haha.

Regarding the direction to the sun, I use the approximation that the sun is so far away that we can consider its rays parallel. It allows for a small optimization now and potentially a huge optimization down the line with a LUT. Technically, there is some error in this reasoning, but it is so small in practice that you won't notice it unless you put your sun very close to the planet.
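As a minimal sketch of this approximation (the uniform names here are assumptions, not necessarily the shader's):

// parallel-ray approximation: one sun direction for the whole atmosphere
vec3 sunDir = normalize(sunPosition - planetPosition);
// instead of the exact per-sample direction:
// vec3 sunDir = normalize(sunPosition - samplePoint);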

I guess normalizing the optical depth might indeed do the trick for the color. I have worked out the mathematical relation between the length of the ray and the radius of the atmosphere; I will get back to you when I find where my draft went. I think it was linear though.

You are welcome, I'm glad you could make things work in a satisfactory way. I'm sure the solution will help plenty in the future :+1:

Yincognyto commented 1 year ago

I'm glad you fixed it! I don't understand why ThreeJS needs the orthographic depth though, I wouldn't have guessed haha.

I don't know for sure, but maybe it has to do with the fact that a post-processing shader is orthographic in nature, being a planar layer applied over the underlying perspective view; that each depth point represents an orthogonal (i.e. straight-line, not accounting for perspective) representation of the original perspective depth; or that you don't seem to perform the perspective division anymore in the new version of the worldFromUV() function like in the old one, hence needing the orthographic version of it. I'm not an expert here though, so these are just guesses - the important thing is that orthographic depth suits the present code.

Regarding depth, here is the code I used for testing depths. It has a bunch of different versions of each function (with a "version marker" A, B, C, and so on at the end of their names) and their URL sources, so it's a bit extensive, but it's easy to test, just by deleting the version marker from the desired function and adding the corresponding marker back to the one being skipped. You can clearly see there that if you use rawDepth (your shader's method) instead of ortDepth (my fixed method) at line 225, it won't account for the terrain on the planet, meaning that the used depth value is off; and if you use linDepth (my unfixed method) and change the 7 in the far variable to 8, it won't draw the atmospheric layer around the planet, meaning that the unfixed formula dealing with the logarithmic depth buffer was off for extreme camera values.

Regarding the direction to the sun, I use the approximation that the sun is so far away that we can consider its rays parallel. It allows for a small optimization now and potentially a huge optimization down the line with a LUT. Technically, there is some error in this reasoning, but it is so small in practice that you won't notice it unless you put your sun very close to the planet.

I understand your reasoning and it's logical, though I don't know about an optimization right now. What I meant - at least partially - is that you use the planet center to compute the sun direction, instead of the initial sample point (aka the ray origin, which is already calculated, so I don't see where the said optimization is at this stage, albeit it will surely happen later on, as you said). For me, the effect was obvious: brighter colors for the atmospheric "aura" at the planet margin, even when the light falls on the planet from the observer's position, i.e. the margin is between day and night, so less light enters the atmosphere there. At first, I was concerned about why my shader version was producing paler colors there, since obviously yours looked better in that regard, but then I found out the reason and realized it was more realistic this way. Spinning the planet so the light position changed produced the brighter colors I wanted to replicate, of course, but only when they should have happened according to the moment of day and light refraction. That being said, you're right that computing the angle (and everything else, for that matter) at each of those 10 points along the ray doesn't seem to affect the result, but I was not referring only to that earlier, as I mentioned above.

I guess normalizing the optical depth might do the trick indeed for the color. I have worked out the mathematical relation between the length of the ray and the radius of the atmosphere, I will get back to you when I find where my draft went. I think it was linear though.

Actually, I think I have a better solution (in a way), which I strangely didn't think of earlier: dividing the scattering strength by the planet radius that you take as a reference (50 in this case, apparently). This produces the same result whatever the radius and doesn't involve using hacks or normalizations in the shader. The only drawback, as far as I can tell, would be that the radius and the scattering strength would then be linked, and modifying the former would change the latter as well, which doesn't quite suit your project entirely. Even so, making the whole thing (whether it's done from the scattering strength setting outside the shader, or through normalization of either the scattering strength or the optical depth in the shader) optional - just in case the behavior is realistic when it comes to actual astronomy - would be a good choice. Then, the user would be able to use the normalization when they want to change the overall scale of things in the scene (like going from meters to km as a measurement unit and such), or not use it if by any chance the behavior is natural.
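A hedged sketch of that idea, assuming Lague-style coefficients of the form (400 / wavelength)^4 * strength, as discussed further down in this thread (referenceRadius and the other names are placeholders):

// scale-invariant scattering: divide the strength by the radius ratio
vec3 wavelengths = vec3(redWaveLength, greenWaveLength, blueWaveLength);
vec3 scatteringCoeffs = pow(vec3(400.0) / wavelengths, vec3(4.0))
                      * (scatteringStrength / (planetRadius / referenceRadius));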

You are welcome, I'm glad you could make things work in a satisfactory way. I'm sure the solution will help plenty in the future 👍

Yes, indeed - both in the present and in the future. 👍 You can check the basic shader prototype here if you want (sorry for the variable naming; sometimes I look too much for "symmetry" and order, and those equal signs one under another captivated me in those moments, even though I had a hard time finding a 3-letter word for a solid astronomical body, so I just used "geo", haha) - it's reasonably simple and correct from a physical perspective, I think. I guess it can work in BabylonJS as well, as long as there is some "switch" to use rawDepth instead of ortDepth at line 141 - this and the 4 functions after remap() are pretty much the difference between a ThreeJS and a BabylonJS usage.

If you want to, you can close this issue at any time, since what was to be corrected for your project has already happened - the rest are just minor considerations on what would be better, and they are neither urgent nor critical. I'm glad my feedback helped your implementation too, not just mine. I'll certainly make sure that proper credit and everything is present in my eventual work (when and if I release it), and it has been a pleasure to have such an open, pleasant and productive discussion with you - it has been a nice learning experience! Thanks for everything! 👍

BarthPaleologue commented 1 year ago

Needing the orthographic projection in ThreeJS is quite an important takeaway indeed. I did not realize that not performing the perspective division meant that I was also doing it without knowing it haha.

Regarding the optimization, it is quite small for now, as we don't need to recompute the direction toward the sun every time. We could also use the initial sample point without any issue indeed. As the rays are mostly parallel, I don't think we would notice much difference.

I will try the division of the scattering strength by the radius, it sounds promising. If it looks good I might make it the default option in the shader. Anyone can change it after all.

Your shader prototype looks already really usable, you can make a pull request to the repo if you want to add it side by side with the BabylonJS implementation ;)

I think I will close this issue now, as everything works as intended! It has been a pleasure to make both of our works better in such a nice way :)

Thank you, I wish you all the best :+1:

Yincognyto commented 1 year ago

I will try the division of the scattering strength by the radius, it sounds promising. If it looks good I might make it the default option in the shader. Anyone can change it after all.

Yep; it's interesting that your implementation needs 1063 for the better scale invariance and mine works with the probably more logical 400. It must be a consequence of the slight differences between them, because otherwise it's the same operation.

Your shader prototype looks already really usable, you can make a pull request to the repo if you want to add it side by side with the BabylonJS implementation ;)

Thanks for the nice and generous offer, but in essence it's more or less the same idea, just written a bit differently. I shared my prototype here so you can freely use it or any parts of it in case you want to make a ThreeJS compatible version, so no need for a PR on that, just use it as you see fit, if that's the case. ;)


In the meantime, here is how you draw the atmosphere even if the background is transparent, as can be the case in my project, where the black space void with stars can be toggled on or off. The code below assumes originalColor is passed as a vec4 along with its alpha in scatter(), so the final alpha is the max between the alpha of the original color and the RGB of the light when it comes to premultiplied alpha - this will work seamlessly whether the background is transparent or not, so it can be applied with minimal changes even to your shader, as can be seen in my movable "skin" on top of one of my wallpapers below:

  float lightAlpha = max(light.r, max(light.g, light.b)); // the strongest scattered-light channel drives the opacity
  return vec4(originalColor.rgb * (1.0 - light) + light, max(originalColor.a, lightAlpha));

(image: Alpha)

P.S. I finally realized the purpose of your last if: working around the perfect sphere tested for intersections vs the sphere geometry made of polygons, haha! Nice method; now I understand why it might conflict with having terrain on the planet. It seems that even though intersections with various regular shapes (like ellipsoids here, which might be of use someday) are easy to compute, intersections with a polyhedron are not that easy - fortunately, having a larger number of "segments" for the sphere reduces this to very thin discrepancies at the horizon when the camera is very close to the planet.

BarthPaleologue commented 1 year ago

Hello again ;)

The 1063 is actually a side effect of getting rid of the reference radius (it was 50 and I made it 1 by multiplying the 400 by the fourth root of 50, making it 1063: 400 × 50^(1/4) ≈ 1063.6).

Thanks for the nice and generous offer, but in essence it's more or less the same idea, just written a bit differently. I shared my prototype here so you can freely use it or any parts of it in case you want to make a ThreeJS compatible version, so no need for a PR on that, just use it as you see fit, if that's the case. ;)

Alright, I will probably add it at some point in the future; thank you again for your very complete prototype ^^

In the meantime, here is how you draw the atmosphere even if the background is transparent, as can be the case in my project, where the black space void with stars can be toggled on or off. The code below assumes originalColor is passed as a vec4 along with its alpha in scatter(), so the final alpha is the max between the alpha of the original color and the RGB of the light when it comes to premultiplied alpha - this will work seamlessly whether the background is transparent or not, so it can be applied with minimal changes even to your shader, as can be seen in my movable "skin" on top of one of my wallpapers below:

This is a nice addition, I will add it to the shader. It's always nice to support more use cases at a very low cost :+1:

I finally realized the purpose of your last if: working around the perfect sphere tested for intersections vs the sphere geometry made of polygons, haha! Nice method; now I understand why it might conflict with having terrain on the planet. It seems that even though intersections with various regular shapes (like ellipsoids here, which might be of use someday) are easy to compute, intersections with a polyhedron are not that easy - fortunately, having a larger number of "segments" for the sphere reduces this to very thin discrepancies at the horizon when the camera is very close to the planet.

Indeed, arbitrary geometry intersection is not feasible in the same way as with the sphere or other simple convex shapes haha. Or maybe you would have to raymarch the terrain in the shader as well, which would be very complicated. Increasing the resolution is definitely the way to go, and if you don't have other geometry in your scene, the performance should be alright. Something that also works is using a quadtree to subdivide the sphere when you move closer to the ground, so you don't tank your performance when it is not needed. example

Yincognyto commented 1 year ago

Yeah, I should have seen the logic behind the 1063 value, my bad. Making the reference radius 1 should indeed be the way to go, as it's more natural; yet it's curious that a process based on actual physics formulas doesn't produce the "good" result at a radius precisely that of Earth (whether in km or m) and instead needs a more or less arbitrary reference like the 50 value (normalized to 1 or not). Maybe those formulas contain a simplification made to suit the values we often use as programmers for our scenes, I don't know. It's not wrong or inconvenient, obviously, it's just a bit... strange.

Raymarching the terrain seems not only complicated but quite intensive. I'm aware of folks who subdivide stuff to manage performance in such cases, but somehow it didn't seem necessary for my use case, despite the warning. I wonder if using a dynamic camera far value instead wouldn't produce the same result in a simpler way, since things past camera far shouldn't be drawn anyway. In any case, I'm not there yet, so I'll take each challenge when - and if - it comes. Currently, I was thinking how - hypothetically - a group of multiple atmospheric planets would be displayed. I mean, those for-s in the GPU shader (which I otherwise like) already yield a performance hit; adding another for to handle the intersections with other atmosphere spheres (and all the light computing that comes with them) would probably be overkill...

EDIT: On second thought, I guess the light computation would stay the same, since it's still the same viewport, with the same number of pixels / fragments in it. And besides the for, solving a quadratic equation shouldn't weigh that much, even for the GPU.

BarthPaleologue commented 1 year ago

The 400 and the 1063 are not based on physics, so it is normal that we don't see realistic results when plugging in the values of the Earth naturally. They are a trick to bring the wavelength settings into a usable range for the user. In nature, the wavelength is on the order of 1e-9 m, because visible light has a wavelength between 400nm and 800nm. There are more accurate formulas on the wiki page, but they are more expensive to compute while not adding much visually.

I wonder if using a dynamic camera far value instead wouldn't produce the same result in a simpler way, since things past camera far shouldn't be drawn anyway.

This could work, but there will be many edge cases and you will probably still compute more geometry than needed. It might not be a problem depending on the scale of your project.

Currently, I was thinking how - hypothetically - a group of multiple atmospheric planets would be displayed. I mean, those for-s in the GPU shader (which I otherwise like) already yield a performance hit; adding another for to handle the intersections with other atmosphere spheres (and all the light computing that comes with them) would probably be overkill...

I am also struggling with this issue in my own project. One thing that I believe could be done is to disable the postprocess altogether when the planet is not in the frustum, so no ray/sphere intersection would be needed, just a simple AABB-to-frustum test. The other option is making the shader faster with a LUT (this again xD), but as far as I managed to push it, the LUT produces a decrease in visual quality. I think the first solution should already be an improvement.

Yincognyto commented 1 year ago

The 400 and the 1063 are not based on physics, so it is normal that we don't see realistic results when plugging in the values of the Earth naturally. They are a trick to bring the wavelength settings into a usable range for the user.

You're right, somehow I missed that. Out of curiosity, I tried using 1 instead of 400 or 1063 in the shader, and realistic values for the radius and the wavelengths with the meter as the unit, and it definitely worked, albeit with extreme scattering strength values (e.g. radius at 6.371E6 and strength at 15.0E-32, or, via inverse proportionality, radius at 6.371E0 and strength at 15.0E-26).

I am also struggling with this issue with my own project. One thing that i believe could be done is to disable the postprocess altogether when the planet is not in the frustrum so no ray/sphere intersection would be needed, just a simple AABB to frutrum test. The other option is making the shader faster with a LUT (this again xD), but as far as I managed to push it, the LUT produces a decrease in visual quality. I think the first solution should already be an improvement.

Disabling postprocessing shouldn't be a problem in my project - I already have everything able to be toggled on or off, switched to a different variant, etc. - but I'd like to keep postprocessing enabled when it's supposed to be enabled, even if there isn't an atmospheric planet in the frustum (e.g. maybe I need some other potential postprocessing effects to be displayed).

Passing the relevant uniforms (i.e. position, radiuses, other specific properties) as arrays to the shader, based on some simplified in-frustum test in JS like you described, could possibly help here: the size of the array would be the number of atmospheric planets in the frustum and the values would be their properties, so if no such planet is in the frustum, the `for (int i = 1; i < NumOfAtmPlanetsInFrustum; i++) {...};` would not run at all, since the upper limit of the `for` would be less than 1 anyway. That being said, this will not improve performance in any way if you happen to have multiple planets with an atmosphere in the frustum at once. Think of displaying both of the aligned Earth and Jupiter in the shader - fortunately, space is so big that such occurrences are very rare, unless you zoom out far enough to include more such planets in the frustum.
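The in-frustum test itself could be as simple as this sketch in ThreeJS (names are hypothetical, and it uses a bounding sphere instead of an AABB, since the atmosphere is spherical anyway):

```js
import * as THREE from "three";

// Hypothetical handles; adapt to the actual scene setup.
const planetPosition = new THREE.Vector3(0, 0, 0);
const atmosphereRadius = 6371 * 1.02;
const boundingSphere = new THREE.Sphere(planetPosition, atmosphereRadius);

const frustum = new THREE.Frustum();
const viewProjection = new THREE.Matrix4();

// Run once per frame, before rendering: true when the atmosphere can be on screen.
function atmosphereInFrustum(camera) {
  camera.updateMatrixWorld();
  viewProjection.multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse);
  frustum.setFromProjectionMatrix(viewProjection);
  return frustum.intersectsSphere(boundingSphere);
}
```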


Another topic I was thinking about is how to "transform" the frames of a playing video into volumetric representations. Right now I use a customized NASA grayscale video of 25 days of "weather", playing in a loop on a spherical mesh, to display the clouds for the planet, keeping the video in sync with the rest of the movements in the scene (e.g. Earth rotation, whether the scene is animated or paused, etc.; below, the Earth is rotated 360 degrees per frame to make the cloud movement easy to notice).

https://user-images.githubusercontent.com/10037398/222977329-353b863e-dcee-4a96-a477-ada7784fc2bf.mp4

It works great and is realistic, since everything is based on accurate time transformations (e.g. one day of "weather" in the video translates to exactly one day of Earth clouds in the scene), but it would be even nicer if it could be integrated as a volumetric representation in the atmospheric / postprocessing shader. :)

BarthPaleologue commented 1 year ago

> You're right, somehow I missed that. Out of curiosity, I tried using 1 instead of 400 or 1063 in the shader, together with realistic values for the radius and the wavelengths with the meter as the unit, and it definitely worked, albeit with extreme scattering strength values (e.g. radius at 6.371E6 and strength at 15.0E-32, or, via inverse proportionality, radius at 6.371E0 and strength at 15.0E-26).

It is reassuring that the model holds true when tested against realistic values ^^

> Disabling postprocessing shouldn't be a problem in my project - I already have everything able to be toggled on or off, switched to a different variant, etc. - but I'd like to keep postprocessing enabled whenever it's supposed to be enabled, even if there isn't an atmospheric planet in the frustum (e.g. maybe I need some other potential postprocessing effects to be displayed).

Maybe you can disable only the atmosphere postprocess - or maybe you made everything into one shader for optimization?

> Passing the relevant uniforms (i.e. position, radiuses, other specific properties) as arrays to the shader, based on some simplified in-frustum test in JS like you described, could possibly help here: the size of the array would be the number of atmospheric planets in the frustum and the values would be their properties, so if no such planet is in the frustum, the `for (int i = 1; i < NumOfAtmPlanetsInFrustum; i++) {...};` would not run at all, since the upper limit of the `for` would be less than 1 anyway. That being said, this will not improve performance in any way if you happen to have multiple planets with an atmosphere in the frustum at once. Think of displaying both of the aligned Earth and Jupiter in the shader - fortunately, space is so big that such occurrences are very rare, unless you zoom out far enough to include more such planets in the frustum.

I think the loop should start at 0, but you are right that you will probably not have multiple atmospheric planets in the frustum at the same time, so the optimization might not be very noticeable.
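Something along these lines could work as a hypothetical setup (all names are made up; WebGL's GLSL cannot size arrays or loop bounds at runtime, so a compile-time maximum plus an early `break` is the usual workaround):

```js
import * as THREE from "three";

const MAX_PLANETS = 4; // compile-time upper bound

// JS side: uniforms refreshed each frame from the in-frustum test.
const uniforms = {
  nbAtmPlanetsInFrustum: { value: 0 },
  planetPositions: { value: Array.from({ length: MAX_PLANETS }, () => new THREE.Vector3()) },
  atmosphereRadii: { value: new Float32Array(MAX_PLANETS) },
};

// Shader side, as a template literal (ThreeJS-style):
const fragmentChunk = /* glsl */ `
  #define MAX_PLANETS 4
  uniform int nbAtmPlanetsInFrustum;
  uniform vec3 planetPositions[MAX_PLANETS];
  uniform float atmosphereRadii[MAX_PLANETS];

  // ...inside main(): accumulate scattering per visible planet.
  for (int i = 0; i < MAX_PLANETS; i++) {
    if (i >= nbAtmPlanetsInFrustum) break; // never runs when no planet is in the frustum
    // ray/sphere intersection and light computation for planet i go here
  }
`;
```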

> Another topic I was thinking about is how to "transform" the frames of a playing video into volumetric representations. Right now I use a customized NASA grayscale video of 25 days of "weather", playing in a loop on a spherical mesh, to display the clouds for the planet, keeping the video in sync with the rest of the movements in the scene (e.g. Earth rotation, whether the scene is animated or paused, etc.; below, the Earth is rotated 360 degrees per frame to make the cloud movement easy to notice).

I must say the motion of the clouds is much better in this video indeed haha. If you want to display the clouds on the screen, then you need a video which only shows the clouds - does NASA provide this? Because I'm not sure how you could extract the clouds from the underlying Earth, or maybe I'm not understanding correctly. Is the NASA video a projection of the sphere of clouds that you can use as a texture? If so, that's awesome :) Do you have a link for this? I searched the NASA website and I couldn't find anything like what you mention.

I guess it could be added to the shader, but I think it would work better if it were separate. You would run the cloud shader before the atmosphere shader and obtain the same result, with the advantage of it being readable, and at a very low performance cost I think.

Basically, the cloud shader would simply intersect the view ray with a sphere whose radius is a little greater than the actual planet's, and then map the cloud data onto that sphere, I guess.
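As a rough illustration of that idea (not code from this repo; the uniform and function names are made up), the shader side could look like this, using a standard quadratic ray/sphere intersection and an equirectangular mapping:

```js
const cloudChunk = /* glsl */ `
  uniform sampler2D cloudMap;      // grayscale cloud-cover video frame
  uniform vec3 planetCenter;
  uniform float cloudSphereRadius; // slightly greater than the planet radius

  // Quadratic ray/sphere intersection; returns the nearest hit distance, or -1.0 on a miss.
  float intersectSphere(vec3 ro, vec3 rd, vec3 center, float radius) {
    vec3 oc = ro - center;
    float b = dot(oc, rd);
    float c = dot(oc, oc) - radius * radius;
    float h = b * b - c;
    if (h < 0.0) return -1.0;
    return -b - sqrt(h);
  }

  // Returns cloud opacity along a (normalized) view ray.
  float sampleClouds(vec3 rayOrigin, vec3 rayDir) {
    float t = intersectSphere(rayOrigin, rayDir, planetCenter, cloudSphereRadius);
    if (t < 0.0) return 0.0;
    vec3 p = normalize(rayOrigin + t * rayDir - planetCenter);
    // equirectangular mapping of the sphere point to texture coordinates
    vec2 uv = vec2(atan(p.z, p.x) / 6.2831853 + 0.5, asin(p.y) / 3.1415927 + 0.5);
    return texture2D(cloudMap, uv).r;
  }
`;
```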

Yincognyto commented 1 year ago

> Maybe you can disable only the atmosphere postprocess - or maybe you made everything into one shader for optimization?

Well, right now I only have the atmosphere shader as a "shader pass" in the postprocessing "composer" (as per ThreeJS's terminology), so disabling postprocessing automatically means disabling the volumetric atmosphere, but yeah, in terms of future development, I'd prefer to include everything that is postprocessing / volumetric in one shader for optimization. Frankly, I would include everything in the scene in one shader if I could, and avoid geometries and such altogether, to make it one draw call globally, but of course that's just wishful thinking, since even if it were possible it would complicate things as much as it would optimize them. :)

> I must say the motion of the clouds is much better in this video indeed haha. If you want to display the clouds on the screen, then you need a video which only shows the clouds - does NASA provide this? Because I'm not sure how you could extract the clouds from the underlying Earth, or maybe I'm not understanding correctly. Is the NASA video a projection of the sphere of clouds that you can use as a texture? If so, that's awesome :) Do you have a link for this? I searched the NASA website and I couldn't find anything like what you mention.

Yeah, I went through a lot of the questions you are wondering about right now, haha! I knew precisely what I was after, but it took a long time to find what I wanted, and I found it in a place I would never have thought to look. I found it out of sheer luck and some proper googling keywords, after previously looking where it "should" have been, to no avail.

Now, some images or composite videos are not hard to find, see here, here, or here - even on YouTube, like here (they go by the name "weather" or "cloud fraction"). As you can imagine, none of these is entirely what is needed: first because the images taken by satellites have blanks inherent to space exploration and their orbit around Earth, and second because the composite videos are actually a blend of the Earth map and the weather / cloud fraction, not the cloud fraction alone. Finally, after much searching in the "right" places, I found what I wanted where I never imagined it to be, see here (notice the title of the page and how the terminology now calls the thing "cloud cover").

So, it's a 2D video projection of the clouds that can be used as a (video) texture. It can serve as a transparent "alpha map" placed on a sphere geometry slightly larger than Earth, since black will be made transparent and white opaque, with the variations in between.
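In ThreeJS terms, that setup would be roughly the following sketch (`video`, `scene` and `planetRadius` are assumed to exist):

```js
import * as THREE from "three";

// The cloud-cover video as an alphaMap: black -> transparent, white -> opaque.
const cloudTexture = new THREE.VideoTexture(video);
const cloudSphere = new THREE.Mesh(
  new THREE.SphereGeometry(planetRadius * 1.005, 128, 128), // slightly above the surface
  new THREE.MeshLambertMaterial({
    color: 0xffffff,
    alphaMap: cloudTexture,
    transparent: true,
    depthWrite: false, // avoids sorting artefacts against the planet surface
  })
);
scene.add(cloudSphere);
```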

The thing is that, for my needs, the adventure didn't stop there. I wanted to play the video in a loop based on time, so:

- I had to use a tool like FantaMorph or similar to transition / replace the extracted frame images towards the end of the video so that they match some of the frames at the beginning, for a smooth end-to-start transition when the playback loops.
- Then, since I wanted to be able to play both forwards and backwards at will, and JS in browsers has some limitations on the speed rate and direction of playing, I had to simulate playing by seeking in the video according to its FPS, currentTime, duration and the correlation between the video time and the desired time in the Earth representation (a sketch of this is below).
- Then, to do that smoothly, I had to re-encode the video so that each frame is a keyframe (ffmpeg can do this, for example) and load it completely into the page via an XMLHttpRequest in JS before I could use it freely (I already preload resources in the page and the resulting video is just 20 MB, so this was integrated there).
- Then, I found out that I had to watch for a specific event (i.e. video.oncanplay) before I could begin (simulating) playing, otherwise the rest of the scene went ahead of the video because the latter wasn't ready yet.
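The seek-based part boils down to something like this minimal sketch (the constants are illustrative, not my actual values):

```js
// Simulated playback by seeking: works forwards and backwards, at any rate.
// Assumes the video was re-encoded so that every frame is a keyframe (e.g. ffmpeg -g 1).
const VIDEO_FPS = 24;       // illustrative; must match the actual encode
const HOURS_PER_FRAME = 1;  // 1 video frame = 1 hour of weather

function syncCloudVideo(video, simulationHours) {
  const t = (simulationHours / HOURS_PER_FRAME / VIDEO_FPS) % video.duration;
  video.currentTime = t < 0 ? t + video.duration : t; // wrap negative times for backward play
}
```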

So yeah, it was a bit more than downloading the video and projecting it on a sphere for me, but the result is well worth the effort. The granularity isn't that great at just 1 frame per hour (i.e. 600 frames per 25 days), so for slower Earth rotation speeds the clouds will not transition smoothly at real speed, but the result is still better than NASA's own here. Plus, if you don't care to be as realistic and precise as I do, you can forget about the additional work and just play the video at the JS / browser-allowed rates - it will result in a faster cloud speed compared to what it "should" be, but the playback will be perfectly smooth (I have this option as an alternative in my project, where I can switch between "play" and "seek" modes when configuring stuff):

https://user-images.githubusercontent.com/10037398/224512786-650a6543-ba9a-4dfb-95a7-215dfe3a1514.mp4

BarthPaleologue commented 1 year ago

> Now, some images or composite videos are not hard to find, see here, here, or here - even on YouTube, like here (they go by the name "weather" or "cloud fraction"). As you can imagine, none of these is entirely what is needed: first because the images taken by satellites have blanks inherent to space exploration and their orbit around Earth, and second because the composite videos are actually a blend of the Earth map and the weather / cloud fraction, not the cloud fraction alone. Finally, after much searching in the "right" places, I found what I wanted where I never imagined it to be, see here (notice the title of the page and how the terminology now calls the thing "cloud cover").

I never knew such a thing existed before - I'm really impressed. You are just one loop away from cheap and accurate clouds ^^

> Then, to do that smoothly, I had to re-encode the video so that each frame is a keyframe (ffmpeg can do this, for example) and load it completely into the page via an XMLHttpRequest in JS before I could use it freely (I already preload resources in the page and the resulting video is just 20 MB, so this was integrated there).

You really went all out on this haha that's awesome!

> but the result is still better than NASA's own here.

The side-by-side comparison sure is not flattering for NASA, oops xD

Then, I must say the loop is quite strange, but that's unavoidable of course, given the atmospheric chaos. Maybe it is possible to make a perfect loop by simulating cloud masses using noise to render a video. But I think our safest bet right now is to wait for AI video generation to catch up, and we might be able to create such videos without much effort (who knows ^^)

Yincognyto commented 1 year ago

> You really went all out on this haha that's awesome!

That's just the clouds chapter - I also have vegetation / ice caps cycling with the seasons, the planet wobbling around its axis according to the equation of time, formulas adjusted for long-term Milankovitch cycles, etc. The first two can actually be seen in the fast-forward video preview I posted earlier. :)

> Then, I must say the loop is quite strange, but that's unavoidable of course, given the atmospheric chaos. Maybe it is possible to make a perfect loop by simulating cloud masses using noise to render a video. But I think our safest bet right now is to wait for AI video generation to catch up, and we might be able to create such videos without much effort (who knows ^^)

Well, weather is a process without a (perceived) beginning or end, where the position, intensity and movement of clouds are a consequence of partly deterministic and partly random events in the recent past. A true simulation or a perfect loop would have to deal with an enormous amount of data (most of it unknown to us even in the 21st century) to predict future evolution, so, like you said, a video loop of roughly one month of weather is a feasible and acceptable compromise. It would be even nicer if it were "converted" to a volumetric representation, something like this or this.

Ideally, a whole year of cloud movement should have been used, but that would have resulted in a much larger video (and effort), corresponding to 365 x 24 = 8760 frames, or 292 seconds at 30 FPS, so roughly 5 minutes of video... with all of its frames being keyframes (which would increase the size accordingly). That would weigh too much on performance, and it would require processing a lot of images to remove the blanks I mentioned earlier, so...


Going back to the atmosphere shader, another thing I'm trying to find out now is the value of the parameters needed for a more opaque image of the sky when on the ground looking upwards (so as to hide the stars mesh behind it when it's daylight), yet reasonably fading at higher altitudes (so as to not turn it into a hard edge around the planet when in space), in the case of the realistic 0.02 atmosphere-to-planet-radius ratio with a planet radius of 6371, like for Earth.

Before, the default "15" values, along with a scattering strength equal to 50 / radius, looked correct from outer space, but it turns out that, in order for the sky color when looking up from the ground not to be too faint or dark, a falloff factor of 10, a sun intensity of 15 and a scattering strength of 150 / radius seem more appropriate (at least in my implementation). This gives a lighter and slightly more "greenish" look from outer space, but it makes the sky color closer to reality from the ground. It doesn't "fade away" the stars mesh behind it as I intended, though - maybe I should just make the stars less bright instead.
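For reference, those tuned values as a hypothetical settings object:

```js
const planetRadius = 6371; // km, real-sized Earth
const atmosphereSettings = {
  atmosphereRadius: planetRadius * 1.02,
  falloffFactor: 10,                       // default was 15
  sunIntensity: 15,
  scatteringStrength: 150 / planetRadius,  // was 50 / planetRadius before retuning
};
```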