Closed 10110111 closed 2 years ago
I have not published my timelapses, but could dig them out and place them somewhere for you within a few days.
OK, I had forgotten how resource-hungry this still is. Then it should be a configurable option that is run only on capable computers. (Only dedicated NVidia/AMD with enough VRAM? Or would a Core i5-4xxx be enough? CUDA required? Or any minimum OpenGL version limitation?)
This twilight with earth shadow is great! I hope you still can report luminance etc. Maybe even adapt to ground reflectivity (added to landscape.ini files). It's OK if "precomputation to new conditions" takes up to a minute or so. Would it be useful to cache results (a few GB of storage are easily found...)? Or even add a whole tab on "Atmospheric conditions" with options to save/delete/recreate/...?
Then it should be a configurable option that is run only on capable computers. (Only dedicated NVidia/AMD with enough VRAM? Or would a Core i5-4xxx be enough? CUDA required? Or any minimum OpenGL version limitation?)
At least a Haswell-based Intel GPU easily copes with it (taking some time, of course, but it works). I suppose we shouldn't try to guess whether it'll work. Maybe better to warn the user that it may make the GPU hang if the hardware/drivers are not good enough, and add a mandatory checkbox like "I understand and still want to run the computation" to enable the "Proceed" button.
I hope you still can report luminance etc.
This would be good, but... the whole rendering is in a fragment shader. If we try taking the results from the render surface, we'll get them after the `xyYToRGB` stage. I'm not sure how to extract the needed values in another way than by rendering yet another surface (maybe of a smaller size) with the whole upper hemisphere, avoiding the eye adaptation simulation, in XYZ space, and then averaging it to get mean luminance.
It'd actually be good if you could tell me exactly what data Stellarium needs (average luminance of the sky, or illuminance of the surface at the point under camera, or something else. Also, what if the camera is at high altitude?), and which modules make use of it — so that I could 1) implement it, 2) verify correctness.
Would it be useful to cache results (a few GB of storage are easily found...)? Or even add a whole tab on "Atmospheric conditions" with options to save/delete/recreate/...?
Given that the parameters are actually analog (i.e. not simply a few small integers/booleans), it seems a cache would not be too useful. But some presets would be good to have, as would the ability to save custom ones.
A few technical parameters would also be needed, I suppose, like the number of scattering orders to take into account: what if a user chooses settings that correspond to an overcast sky? ;) 4 orders would be too few in that case.
And for presets: as the precomputed data take quite a bit of space, and they are binary, is it a good idea to store them in the Stellarium repository?
Actually, it appears that the illuminance of a horizontal surface is trivial to obtain without a single GPU operation: just take four values from the illuminance texture and interpolate between them according to the current altitude and Sun zenith angle. (See the PAS docs.)
Sky luminance, OTOH, does require drawing, since it's different in different directions, and we don't want to do per-pixel operations on the CPU.
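The interpolation idea above can be sketched in a few lines. This is an illustration only: the real PAS texture layout and parametrization differ, and the table here is a hypothetical 2×2 slice indexed by camera altitude and Sun zenith angle.

```cpp
#include <array>
#include <cassert>

double lerp(double a, double b, double t) { return a + (b - a) * t; }

// illum[i][j]: i indexes the two altitude samples, j the two
// Sun-zenith-angle samples bracketing the current conditions.
double groundIlluminance(const std::array<std::array<double, 2>, 2>& illum,
                         double altFrac, double szaFrac)
{
    // Interpolate along the Sun-zenith-angle axis at both altitudes...
    const double low  = lerp(illum[0][0], illum[0][1], szaFrac);
    const double high = lerp(illum[1][0], illum[1][1], szaFrac);
    // ...then along the altitude axis.
    return lerp(low, high, altFrac);
}
```

The point is exactly the one made in the comment: four texture fetches and two interpolation weights, no per-pixel GPU work.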
At least a Haswell-based Intel GPU easily copes with it (taking some time, of course, but it works). I suppose we shouldn't try to guess whether it'll work. Maybe better to warn the user that it may make the GPU hang if the hardware/drivers are not good enough, and add a mandatory checkbox like "I understand and still want to run the computation" to enable the "Proceed" button.
Yes, that's fine.
I hope you still can report luminance etc.
This would be good, but... the whole rendering is in a fragment shader. If we try taking the results from the render surface, we'll get them after the `xyYToRGB` stage. I'm not sure how to extract the needed values in another way than by rendering yet another surface (maybe of a smaller size) with the whole upper hemisphere, avoiding the eye adaptation simulation, in XYZ space, and then averaging it to get mean luminance.
An ideal case would be to have a small Y texture to sample when needed, so I'd even be after a localized value. I know that getting textures or data back is regarded as a bottleneck. But some interesting applications I once brainstormed would include a "virtual photometer" (or, indeed, even a virtual XYZ color meter, but luminance alone would be fine).
It'd actually be good if you could tell me exactly what data Stellarium needs (average luminance of the sky, or illuminance of the surface at the point under camera, or something else. Also, what if the camera is at high altitude?), and which modules make use of it — so that I could 1) implement it, 2) verify correctness.
Currently there is the average luminance, which is IIRC used for tonemapping ("image brightness"), but I should look into the code again to see what else we currently have. The Preetham model allows easy (CPU) computation of the luminance for each point on the hemisphere, which made me think of that virtual photometer (to be done...) that just reports values at the mouse location. Of course, a physically correct (or at least physics-based) simulation could also include the adverse effects of light pollution etc.; if that could be done in a quantitative fashion, it would open up new fields of real-world application for Stellarium in scientific, educational or environmental simulations.
Would it be useful to cache results (a few GB of storage are easily found...)? Or even add a whole tab on "Atmospheric conditions" with options to save/delete/recreate/...?
Given that the parameters are actually analog (i.e. not simply a few small integers/booleans), it seems a cache would not be too useful. But some presets would be good to have, as would the ability to save custom ones.
I had thought of saving out the textures, to be simply reloaded instead of being recomputed.
A few technical parameters would also be needed, I suppose, like the number of scattering orders to take into account: what if a user chooses settings that correspond to an overcast sky? ;) 4 orders would be too few in that case.
I assume "overcast skies" are of no great interest to most of our users. Or do you also simulate clouds? Thin haze could be interesting, as could the effects of higher aerosol content in sea environments, dusty deserts, or dry mountain scenes where ozone becomes more important. Years ago I read papers that discussed double scattering as a significant step forward. Probably make it configurable as 1..5, and most users will settle at 2?
And for presets: as the precomputed data take quite a bit of space, and they are binary, is it a good idea to store them in the Stellarium repository?
Not sure what you mean? I had assumed those data are created by the simulation? Therefore my local caching idea. Would you need an online repository, or would you need to pack a few megabytes of data to the installer? Or should we provide it as optional download?
Thinking again about the original questions on PNG vs BMP (#623): Is the "minute of GPU work" for a single frame (Date&Time), or a precomputation for a whole "atmosphere state" with any solar/lunar altitude?
Currently there is the average luminance
Average, i.e. simply uniformly weighted over the upper hemisphere? Or maybe also including the ground? And in any case, shouldn't it be proportional to the illuminance of the ground (for which things are trivial, per my previous comment)?
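For reference, the proportionality asked about here holds exactly only for a sky of uniform luminance L: integrating over the upper hemisphere with the cosine foreshortening factor gives the horizontal illuminance

```latex
E \;=\; \int_{2\pi} L\,\cos\theta \,\mathrm{d}\omega
  \;=\; L \int_0^{2\pi}\!\!\int_0^{\pi/2} \cos\theta\,\sin\theta \,\mathrm{d}\theta \,\mathrm{d}\varphi
  \;=\; \pi L .
```

For a real, non-uniform sky the cosine weighting makes a plain (unweighted) average luminance and the ground illuminance differ, which is why the two quantities need to be distinguished.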
Or do you also simulate clouds?
Of course no, not in general at least :) To simulate non-spherically-symmetric atmosphere we'd need much higher-dimensional textures (or employ additional techniques to render clouds). But if the user happens to choose a vertical distribution of Mie scatterers such that it'd correspond to a very uniform cloud layer (regardless of its thickness), I think PAS framework can cope with it (given enough scattering orders).
Years ago I have read papers that discussed double scattering as significant step forward. Probably make it configurable as 1..5, and most users will settle at 2?
PAS uses 4 by default. The 4th order is still quite visible if you check the difference between the 3-order and 4-order final images, and with a thicker atmosphere more may be mandatory, so I assume 1..5 is too limiting. I don't think there's any reason to restrict this much; maybe 100, or even 1000, as the upper limit. It should be on the user to figure out whether this is worth waiting for :)
I had assumed those data are created by the simulation?
Yes, and users on mobile devices (even notebooks/netbooks) won't want to drain their battery to generate the data (and some devices may even be incapable of generating them, e.g. due to small VRAM). There have to be several presets to make Stellarium usable with nice graphics by default. At least one should exist, two if we also want (and can make) a preset for solar eclipse (which isn't simply reduced illumination as in the current model, see this video; I still have to check whether it'd be doable with PAS, though).
Thinking again about the original questions on PNG vs BMP
PNG vs BMP makes a difference in the time to save a screenshot (i.e. compress frame to store on disk), not to render anything on the screen, so I'm not sure how the following point is related to this.
Is the "minute of GPU work" for a single frame (Date&Time), or a precomputation for a whole "atmosphere state" with any solar/lunar altitude?
The tens of seconds of GPU work are the precomputation of the whole texture, with which we can support all camera altitudes, solar elevations and directions of view. But do note that it's a minute on a not-too-old GPU. On weaker GPUs this can take considerably more time (and mobile devices usually do have weaker GPUs, which adds to battery drain). And note also that GPU work which has the potential to lock up your desktop is not a good idea as a mandatory stage of the first Stellarium launch.
solar/lunar altitude
Note though, that only one illuminant is assumed by the PAS renderer. We could supposedly do a multi-pass rendering to include all the point(ish) illuminants and accumulate the results to add e.g. scattered moonlight (and venuslight? ;D ).
Average: I'd have to look in the code again, but I think it's just the luminance of the visible vertices in the tessellated sky dome divided by their number. Therefore you usually get different tonemapping in skybox textures, and a recent override allows you to set this explicitly (to make skyboxes for game engines, stitched panoramas, etc.). Not sure about vertices covered by the ground? They may just have the mirrored luminance.
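The averaging described here can be sketched as follows. This is a minimal illustration, not Stellarium's actual code; vertices below the horizon are simply skipped here rather than mirrored:

```cpp
#include <cassert>
#include <vector>

// One vertex of the tessellated sky dome (illustrative structure).
struct SkyVertex { double luminance; bool aboveHorizon; };

// Mean luminance over the visible sky-dome vertices: sum of their
// luminances divided by their number, as described in the comment above.
double meanSkyLuminance(const std::vector<SkyVertex>& dome)
{
    double sum = 0.0;
    int count = 0;
    for (const auto& v : dome)
        if (v.aboveHorizon) { sum += v.luminance; ++count; }
    return count > 0 ? sum / count : 0.0;
}
```

Note that such an unweighted vertex average is not the same thing as the cosine-weighted ground illuminance discussed earlier in the thread.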
No problem with 1000 scattering runs if system allows :-)
Eclipse shadow: wonderful! I have own footage from 2017 and photos from earlier events. Yes the circular twilight is amazing.
About mobile devices and weaker systems: I think we should keep the current sky as default, but have your PAS implementation as switchable optional feature for those able to run it and wanting to see the difference. Or, what would be (approximately) the extra file package demand for one default PAS sky?
I am afraid we need Sun & Moon both active for conditions like "twilight with Moon". Venus would be nice, but only without the Moon. I have been distracted by a low Venus in excellent skies which then set with a "blue flash", but is there noticeable atmospheric brightening by Venus?
Or, what would be (approximately) the extra file package demand for one default PAS sky?
Currently, ~129 MiB of data seems to result in tolerable twilight details and dynamics.
Is there noticeable atmospheric brightening by Venus?
I don't think so, but I don't have any first-hand experience with this: in my place there's a lot of light pollution.
Hmm, 129 MiB is quite heavy to pack in, but we could host it as an optional download (like a star catalog, or for manual configuration like DE431). However, @alex-w has more say on this.
OK, now I'm trying to do the brightness calculations using the new model. There's a difficulty though: in the current `Atmosphere`, it's calculated using Schaefer's model (implemented in class `Skybright`), which seems to take airglow into account (that's what `bNightTerm` is responsible for, as I understood). And the new model is oblivious to such phenomena: it can only take the Sun (and the Moon, as a second render pass) into account. No starlight (which seems to be taken as a constant 0.1 mcd/m² in the current `Atmosphere`), no zodiacal light (which seems to be ignored currently), no airglow — only daylight, twilight & moonlight.
So I wonder: how could I extract only the airglow and other night-time phenomena? Can I simply supply the Sun & Moon in nadir to `Skybright` to get sensible results? Or would it fail to take something relevant into account?
The branches `atmosphere` and `atmosphere-wip` look broken O_o
Ah yes, Y is from the Schaefer model. Zodiacal light is extraterrestrial, so Atmosphere should not give it (it is additive and extincted at low (angular) altitudes). Schaefer's generic starlight background should probably be included, though. (Or else take the 150 brightest stars and scatter? But I think the benefit is not worth the effort.) I don't know what to do about airglow other than simply taking Schaefer's values. It's an additive term not modelled by the usual scatter-based skylight models, so running the scatter model for Sun and Moon in nadir would not find it. It is caused by physical processes (recombination etc.) in the higher atmosphere (see Wikipedia).
I think Atmosphere should then even give the 0.1 mcd/m² "star background" value when the atmosphere is switched off. However, I would like to know the source of this value. (Schaefer? Must look again.)
I should check out the branch in the next days, but am quite busy with other stuff to dive into this myself more deeply, although it is very interesting and promising to finally have such a model in Stellarium.
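The additive composition discussed above could be sketched like this. Everything here is a placeholder illustration of the idea (scattered sun/moon light from the scattering model, plus airglow and the 0.1 mcd/m² star background attenuated by extinction); it ignores e.g. the van Rhijn brightening of airglow towards the horizon, and it is not Stellarium's actual formula:

```cpp
#include <cassert>
#include <cmath>

// Placeholder sketch: compose night-sky luminance (cd/m^2) from the
// scattering-based part (Sun/Moon) plus additive components that a
// scatter-based skylight model cannot produce. The 0.1e-3 cd/m^2 star
// background is the value quoted in the discussion; the extinction model
// (magnitudes per airmass) is an illustrative assumption.
double nightSkyLuminance(double scatteredSunMoon, double airglow,
                         double airmass, double extinctionMagPerAirmass)
{
    const double starBackground = 0.1e-3; // cd/m^2, "star background"
    // Convert extinction in magnitudes to a linear transmission factor.
    const double transmission =
        std::pow(10.0, -0.4 * extinctionMagPerAirmass * airmass);
    return scatteredSunMoon + (airglow + starBackground) * transmission;
}
```

The design point is only that airglow and starlight enter additively after the scattering model, which is why supplying the Sun and Moon in nadir to the scattering model can never reproduce them.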
@alex-w what exactly does "broken" mean? If you fail to `git pull` them, it's normal: I frequently do force-pushes there (that's WIP, after all). Just do `git fetch` followed by `git reset --hard origin/atmosphere`. If you fail to actually run Stellarium from them, I'd like to know the details (symptoms, console output, ...).
@gzotti
so running the scatter model for sun and moon in nadir would not find it. It is caused by physical processes (recombination etc) in the higher atmosphere (see Wikipedia).
No, I mean supply the Sun & Moon in nadir to `Skybright::setSunMoon()`, not to the scattering model. Schaefer's model isn't physically based, so the actual physical cause of airglow shouldn't affect it.
@alex-w
BTW, to test the actual WIP atmosphere instead of looking at Preetham's model, be sure to turn it on by adding `atmosphere_model = bruneton` to the `[landscape]` section of `config.ini`. To revert to Preetham's, delete this line or replace `bruneton` with `preetham`.
Hmm... when I started following this project I explored the (unfortunately underdocumented) code and found both the Preetham and Schaefer models, which assured me it was the right project to improve. However, I postponed improving it to a later date and concentrated on astronomical models first. If you have access to Schaefer's paper (1993?) you will find his descriptions. It is a simple analytic model that includes a greatly simplified solar activity term (11-year period; it should be tied to the sunspot cycle at some point) and a seasonal term (with the terrestrial latitude of the observer), which I think is not yet computed from Stellarium's planet positions. I had plans to do something in the gz_AtmosphereTweaks branch (which includes an exploratory GUI for the Preetham constants), but had no more time. If the various components (MilkyWay, ZodiacalLight, Atmosphere, the landscape's light pollution texture) can deliver sky brightness, it would be welcome in some research projects, and I think the atmosphere simulation is the most ambitious of these. (Solving localized light pollution near cities with e.g. Garstang's model may be helpful at a later date.)
BTW my attempts at twilight time-lapses are https://1drv.ms/v/s!AuJ_d8wlPcFpiUvPPwMWAC98lSEv https://1drv.ms/v/s!AuJ_d8wlPcFpiUyXd4erpuD0Zx3a https://1drv.ms/v/s!AuJ_d8wlPcFpiUgnEgQRRe95d3Q1 https://1drv.ms/v/s!AuJ_d8wlPcFpiUnJNtquuDrf15yR https://1drv.ms/v/s!AuJ_d8wlPcFpiUrGLVwnfH6I8zYs
I was never able to completely eliminate the flicker, though. The location was in the Namibian highlands at around 1700 m a.s.l. The brightest strip of twilight was terribly bright and a big challenge to photograph without overexposure; OTOH this made the dark sky too dark in wide-angle shots. The eye still experiences a deeper view.
What is very apparent is the notable shift in color temperature. Night shots look best at 4200K, and in nautical twilight (I would have to find more details) there is a rapid increase towards the "normal" 5600K (or even 6500K?) daylight sky. Photos look really bad if I keep that constant.
The timelapses are nice, even though flickery. At least they give some info on whether the very sharp border of the twilight is actually realistic. I was in doubt when I saw it in the PAS output. Compare e.g. the following screenshot with your second video in the middle of the 6th second (luminance is still taken from `Skybright`, not from PAS, so the visibility of stars differs):
What made me wonder is that the star-filled sky appears so suddenly. I didn't expect this at all. Neither Stellarium nor other timelapses I saw gave me such an impression (although I hadn't seen any timelapses of twilight until now).
What is very apparent is the notable shift in color temperature. Night shots look best at 4200K, and in nautical twilight (I would have to find more details) there is a rapid increase towards the "normal" 5600K (or even 6500K?) daylight sky. Photos look really bad if I keep that constant.
But these timelapses haven't been white-balance-adjusted as time progresses, have they? It'd be quite interesting to see how the colors change with constant white balance settings.
Well, twilight in the Tropics is unbelievably fast... I processed the first video only today, and yes, it was temperature-adjusted, and I think the others were as well (I did them in 2014, though...). I can re-create them with 4200/5200/5600/6500 K and will send them later. Your work is finally encouraging me to play with these images again. I should find e.g. the times of sunrise or sunset, the appearance of stars, etc. I find the sequence from Milky Way to shadow to sun reaching the landscape very good. Note that the sharp earth shadow is best visible in a clear sky; this is a wonderful highland site. It is far less visible in humid conditions.
@10110111 I've checked branch with conflicts, it's OK now.
I just built it. Phenomenal view, congratulations! The twilight colours are well represented, as is the change in overall hue. This is soo much better! Tiny bugfix for Windows/MSVC: you must `#include <array>` in AtmosphereBruneton.cpp. And please include the .hpp files in src/CMakeLists.txt.
For reference, the rising Venus video (vertical) is 2014-05-27, and milky way twilight is 2014-05-28.
Tiny bugfix for Windows/MSVC: you must #include in AtmosphereBruneton.cpp
Include what?
@10110111 Please see https://github.com/10110111/stellarium/pull/1/files
Sorry, MD markup rendering swallowed this. < array > .
Here are the timelapses with fixed color temperatures: 4200: https://1drv.ms/v/s!AuJ_d8wlPcFpiU2uaFWGP6_YXcAo 5200: https://1drv.ms/v/s!AuJ_d8wlPcFpiU_0tUBPKZr9ayry 5600: https://1drv.ms/v/s!AuJ_d8wlPcFpiVCra-9cdhPjN-d6 6500: https://1drv.ms/v/s!AuJ_d8wlPcFpiU5V0E3i9yECg004
Brightness changes so fast that it is a real challenge not to overexpose, and I should further reduce flicker by exposure correction. But you should get the idea. Interestingly, the new model seems to balance that already.
The new model is very colorful and represents the sky at that excellent site very well. What about other atmospheric conditions: a bit more humidity, dust or so? Which parameters would there be to tweak?
On my site there are some more twilight photos from the Libyan desert: https://homepage.univie.ac.at/Georg.Zotti/ Click on Astro: Travelling, Photos: WAA Libya 2006.
Interestingly, the new model seems to balance that already
You're watching a hybrid of PAS for colors and per-pixel luminance, Schaefer's model (class `Skybright`) for average luminance, and `StelToneReproducer`, in the form of an adapted version of `xyYToRGB.glsl`, for tone mapping. I'm currently working on taking the brightness from the scene instead of `Skybright`; I don't yet know what I'll get :)
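For readers unfamiliar with that last stage: `xyYToRGB` converts chromaticity plus luminance (xyY) into displayable RGB. Here is a standalone sketch of just the colorimetric core (standard CIE xyY→XYZ conversion and the standard XYZ→linear-sRGB matrix); the real `xyYToRGB.glsl` additionally performs the tone-mapping and eye-adaptation steps discussed in this thread:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// xyY -> linear sRGB (colorimetric core only, no tone mapping).
std::array<double, 3> xyYToLinearSRGB(double x, double y, double Y)
{
    // xyY -> XYZ
    const double X = x * Y / y;
    const double Z = (1.0 - x - y) * Y / y;
    // XYZ -> linear sRGB, standard D65 matrix
    return {  3.2406 * X - 1.5372 * Y - 0.4986 * Z,
             -0.9689 * X + 1.8758 * Y + 0.0415 * Z,
              0.0557 * X - 0.2040 * Y + 1.0570 * Z };
}
```

Feeding in the D65 white point chromaticity (x ≈ 0.3127, y ≈ 0.3290) at Y = 1 yields approximately equal R, G and B, as expected for the sRGB white.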
What about other atmosphere conditions, a bit more humidity, dust or so?
The parameters in the PAS demo are not too user-friendly: Mie & Rayleigh scattering cross-sections, densities of Rayleigh and Mie scatterers as functions of altitude, the ozone density profile. All the user-friendly parameters like temperature, humidity, dust etc. will have to be added after the model itself is fully integrated (and maybe some controls for these low-level parameters added). Moreover, it seems actually nontrivial to make it user-friendly: temperature on the ground may not give any clue as to temperatures at higher levels (it may not even change monotonically, even in the troposphere); humidity may not give us the aerosol density distribution, nor the drop size distribution. Dust may have various absorption spectra and, similarly to aerosols, different distributions of particle sizes and concentration. This is a separate (and seemingly quite large) topic for research.
Indeed. There are freaky conditions like volcanic dust that let a blue sun through. We surely cannot model every possibility, and temperature inversions should also cause mirages etc. However, maybe it is possible to specify a few "typical" sets: "clear mountain air", "clear lowland air" (after the rain of a cold front has cleared the atmosphere of dust), "seaside" (clean, but salty aerosols at low altitudes), "desert" (dusty), "murky lowland air" (hot & humid summer), "city" (smog)? Not sure how variable ozone can be? It is an important factor, for sure. Or how much variable temperature gradients influence the result? And when playing with such model conditions, of course that would invite modelling approximate atmospheres of Mars and Titan... :-) I don't know the details of this model yet, so maybe I am over-expecting. A dropdown list of a few prepared datasets would be fine, I think (such localized condition keys could be specified in landscape.ini files), and maybe a section in the User Guide on how to develop one's own data sets (for real experts who know what Mie scattering means).
The reason for the temperature and pressure settings in the current GUI is that the refraction formula includes them, and this also gives only average or model atmosphere conditions: no inversions, no Novaya Zemlya effect... These are simple analytic formulae, typically developed before regular meteorological ballooning started, when people only had their outdoor thermometer and wall barometer; they avoid detailed layered or cell-wise refraction computations. I had intended to link our extinction coefficient to turbidity in Preetham's model, but as you surely know it breaks down for a very clear sky, so I postponed my atmospheric improvement attempts to some later day.
Brightness changes so fast that it is a real challenge to not overexpose, and I should further reduce flicker by exposure correction
Looks like the flicker is uncorrectable: there's a periodic hue change which seems unrelated to the actual colors. See this normalized version of your 6500K video: https://youtu.be/T-SjMZPvi7w — notice the reddish thing moving up in second 2 and again in second 3. I don't think it's really physical; it must be some nonlinearity in the sensor (or maybe the post-processing went wrong?).
I've noticed something similar with my camera when I tried 20 s and 30 s exposures: the results differed in hue, even with the "Neutral" profile in RawTherapee. Still dunno how to correct this, or even how to determine what the color actually is (in the absolute colorimetric sense). I can't even measure the color with my relatively inexpensive spectrometer due to the very low light levels.
I can provide you with the RAW (CR2) images if you like, also for that morning twilight. But I also had the impression that something went wrong when I applied exposure correction, and that this reddened the images a bit. I should work with contrast, exposure, color temperature and likely an (empirical) color shift, and then some deflicker filter in VirtualDub (or is there a better free tool?). It is difficult to get that right, and the human visual system really adapts in this short hour or so from scotopic to mesopic to photopic vision. Scotopic vision is a bit more blue-sensitive, therefore night views look better when processed at 4200K. There have been papers in Applied Optics (I think) about these things. Must dig them out.
On the new sky model: I noticed what may be related to the steppiness you mentioned on Bruneton's site, namely that the twilight area increases step-wise. Can this be remedied by higher tessellation or by adding more cells to the computation, and at what cost?
I will try to compile this on my 10-year-old notebook with a Core Duo and GeForce 9800 on Ubuntu. I hope the following can be done: we support OpenGL ES 2 systems (ARM SBCs like the Raspberry Pi 3, Odroid C1 or so), and ANGLE on Windows basically provides OpenGL ES 2 level support. On these systems the new model will likely not run (OpenGL 3 required), but it should compile, just discover the missing OpenGL features at runtime, and fall back to the Preetham model.
It is still cloning the git... will report later.
Hm, I have a problem here: log.txt. The GeForce 9800M has OpenGL 3.3, but something in the shader does not compile. Or do we need OpenGL 4 here? It also seems the file textures/atmosphere/mie_scattering.dat is missing. The same warning is also in the logfile on Windows, though. This would explain the "pristine mountain air" simulation :-)
I can provide you with the RAW (CR2) images if you like, also for that morning twilight.
Yeah, RAW would be the best variant.
On the new sky model: I noticed what may be related to the steppiness you mentioned at Bruneton's site, that the twilight area increases step-wise. Can this be remedied by higher tessellation or adding more cells in the computation, and at what cost?
What you've noticed is another issue, and this one is indeed fixable — that's why the texture set needs to be at least 128 MiB, as I mentioned above, to be tolerable. The twilight-width steppiness stems from the very steep dependence of luminance on zenith angle, plus the "moving" of this front towards the horizon as the Sun's elevation changes, which can't be approximated well by linear interpolation. I've tried changing the parametrization to sample the Sun's elevation more densely in the -15°÷0° region, but this still doesn't give satisfactory results with small texture sizes.
What was discussed on Bruneton's issue tracker is a different issue, which doesn't go away even at crazy resolutions (2048 texels in μₛ vs the default 32) where the twilight steps are not noticeable at all.
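The denser-sampling idea can be illustrated with a simple warped parametrization. This is NOT Bruneton's actual parametrization, just a toy mapping that clusters texels around elevation 0°, where twilight luminance changes fastest:

```cpp
#include <cassert>
#include <cmath>

// Toy warp: map Sun elevation in [-90, 90] degrees to a texture
// coordinate in [0, 1]. A square-root warp makes the texel spacing in
// elevation shrink towards 0 near the horizon, i.e. texels cluster there.
double elevationToTexCoord(double elevDeg)
{
    const double s = elevDeg < 0 ? -1.0 : 1.0;
    return 0.5 * (1.0 + s * std::sqrt(std::fabs(elevDeg) / 90.0));
}

// Exact inverse of the warp above, used when sampling the texture.
double texCoordToElevation(double u)
{
    const double t = 2.0 * u - 1.0;
    return (t < 0 ? -1.0 : 1.0) * 90.0 * t * t;
}
```

With a uniform grid in the warped coordinate, neighbouring texels near the horizon are only a fraction of a degree apart, while texels near the zenith are several degrees apart, which is exactly the trade-off described above.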
You can download this ~128MiB texture set from here. Just put the `*.dat` files into `STELLARIUM_PREFIX/share/stellarium/textures/atmosphere`, replacing the files already present there.
we support OpenGL ES2 systems (ARM SBC like Raspberry Pi3, Odroid C1 or so), and ANGLE on Windows basically provides OpenGL ES2 level support on Windows.
Current code in the repo assumes the minimal system requirements listed on the Stellarium web site — namely, OpenGL 3.0. This at least means `glTexImage3D`, which is currently obtained via `QOpenGLContext::getProcAddress`. Not sure if this will work on ES 2/ANGLE, since ES 2 doesn't support this command. If it doesn't, you'll get an assertion failure. I can change it to throw an exception, so that `LandscapeMgr` can catch it and create Preetham's model instead.
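The detect-and-fall-back behaviour described here might look roughly like the following sketch. The resolver is abstracted behind a `std::function` so the snippet is self-contained and testable without an OpenGL context; in Stellarium the resolver would be `QOpenGLContext::getProcAddress`, and the fallback decision would live in `LandscapeMgr` (both names from the discussion above, the rest is hypothetical):

```cpp
#include <cassert>
#include <functional>
#include <stdexcept>
#include <string>

// Abstracted GL entry-point resolver (stand-in for getProcAddress).
using ProcResolver = std::function<void*(const std::string&)>;

// Resolve a required GL function; throw instead of asserting so the
// caller can recover gracefully.
void* requireProc(const ProcResolver& resolve, const std::string& name)
{
    void* p = resolve(name);
    if (!p)
        throw std::runtime_error("Missing required GL function: " + name);
    return p;
}

// Pick the atmosphere model: Bruneton's if the required GL 3.0 entry
// point is available, Preetham's otherwise.
std::string chooseAtmosphereModel(const ProcResolver& resolve)
{
    try
    {
        requireProc(resolve, "glTexImage3D");
        return "bruneton";
    }
    catch (const std::runtime_error&)
    {
        return "preetham"; // graceful fallback instead of assertion failure
    }
}
```

The point of throwing rather than asserting is exactly the one made above: an exception can cross the module boundary and be caught, while a failed assertion simply kills the program.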
Geforce M9800 has OpenGL3.3, but something in the shader does not compile. Or do we need OpenGL4 here?
No, 3.0 should be sufficient. The error message looks strange, though: it looks like the compiler refuses to accept `tan(constant_expression)` or `cos(constant_expression)` as a constant expression. To get past this, try removing the `const` before `vec2 sun_size`.
It also seems a file textures/atmosphere/mie_scattering.dat is missing.
Well, it's sort of missing. The texture set can have the Mie scattering data squeezed into the alpha channel of the Rayleigh texture, which lets you use roughly half the memory, but at the expense of quality: with a low Sun the sky comes out somewhat yellow where it should be redder.
As for constant expressions, here's what the GLSL 1.30.10 spec says:
"A constant expression is one of <...> a built-in function call whose arguments are all constant expressions, with the exception of the texture lookup functions, the noise functions, and `ftransform`. The built-in functions `dFdx`, `dFdy`, and `fwidth` must return 0 when evaluated inside an initializer with an argument that is a constant expression."
So I suppose you've come across a driver bug. Is your driver the latest one?
I did `apt-get update` and `dist-upgrade` to get Ubuntu 18.04.2 LTS. The NVidia drivers are version 340, see glxinfo: glxinfo.txt. I remember dimly that the 340 drivers are the last for the GeForce 9800, and they were also updated; the number is now 340.107.
Yes, the `const vec2 sun_size` was the problem. So, this now works well on a 10-year-old (then "gaming class", OK) notebook with OpenGL 3.3!
A short test on my AMD A4 netbook (2014, OpenGL 4.5; a barely bearable travel companion that nevertheless guides my scope): it works, but has a strange noise pattern below the horizon. This is the "don't care" area, sure. It just looks different from the other systems. The logfile says nothing useful.
Works, but has a strange noise pattern below the horizon
Hmm, that's strange. Looks like you're using the latest version of my branch, right? It should show the reflected view of the sky in the ground directions. (Previously it rendered black ground with aerial perspective.) It's unexpected that you get some precision problems here. What's your altitude setting?
OK, I see the problem on an Intel Haswell GPU when I set altitude to 0.
The view of the new evening sky is incredible! I hope that in the future some presets for the atmospheres of other planets will be available too.
Yes, I am also very much enjoying this development. Two days ago I compared the twilight view out of my window. The earth shadow was OK (dirtier in my sky), but I think the Stellarium screen was too bright. Tonemapping? It looks clear now; I will try to observe again tonight.
I indeed had 0 m altitude on the netbook, and the atmosphere branch from yesterday evening. A positive altitude makes it go away immediately. Cloning now on my Intel i5-4570, will report later.
Hmm, not sure about negative altitude. There are some areas (e.g. Dead Sea) which are below sea level. The border line apparent for locations with positive altitudes is hardly noticeable here.
Two days ago I compared twilight view out of my window. Earth shadow was OK (dirtier in my sky), but I think Stellarium screen was too bright. Tonemapping?
Might be tonemapping, but it may also be inadequate parameters of the model. In my location, e.g., it always seems to me that the daytime sky in Stellarium is too blue compared to the more cyanish color of the actual sky. But then, it's very humid here, and I can see very thin clouds passing in the sky at night when I'm doing timelapse photos (light pollution lights them up), even though to the naked eye the sky seems clear. So e.g. aerosols may be changing the hue of the sky in my case.
Hmm, not sure about negative altitude
Well, this model assumes a perfectly spherical Earth and atmosphere (this removes the latitude and longitude degrees of freedom, thus significantly reducing texture sizes). I'm not even sure what to supply there, if not the altitude passed from `LandscapeMgr`, to get a more or less correct atmosphere. One possible way is to clamp the altitude to some non-negative value.
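The clamping idea is trivial to sketch (the helper name here is hypothetical, not code from the branch):

```cpp
#include <algorithm>

// Hypothetical helper: clamp the altitude coming from LandscapeMgr before
// feeding it to the model, since the model assumes a spherical Earth and a
// non-negative camera altitude.
double clampAltitudeForModel(double altitudeMeters)
{
    return std::max(altitudeMeters, 0.0);
}
```

This would make below-sea-level locations like the Dead Sea render as if at sea level, which seems acceptable given the model's spherical-Earth assumption.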
Here is a logfile from a poor pure-ANGLE system: a VAIO notebook with dynamic hybrid graphics. In principle it has a GeForce 330, but Sony never provided a new driver after 2010, so we are limited to OpenGL 2.1 and, due to bugs, to ANGLE (GL ES 2.0). Clearly this fails. In the GUI which we will need (I guess Alex and I can create this), we must test whether the new model should be made available based on some GPU queries, but the attempt to run the new model should also be caught gracefully (e.g. if the user configures Bruneton in an OpenGL run, then runs with ANGLE). log.txt
I've added more graceful handling of failures to initialize Bruneton's model, with a fallback to Preetham's. Please check that it works instead of crashing.
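The fallback pattern, as a minimal self-contained sketch (the class names are illustrative stand-ins, not the actual Stellarium classes; here the expensive model's constructor is made to throw, to stand in for a precomputation failure):

```cpp
#include <memory>
#include <stdexcept>
#include <string>

// Illustrative stand-ins for the two sky models; not the real classes.
struct SkyModel
{
    virtual ~SkyModel() = default;
    virtual std::string name() const = 0;
};

struct BrunetonModel : SkyModel
{
    BrunetonModel()
    {
        // Stand-in for a real failure, e.g. a shader compilation error or
        // running out of VRAM during precomputation.
        throw std::runtime_error("failed to precompute scattering textures");
    }
    std::string name() const override { return "Bruneton"; }
};

struct PreethamModel : SkyModel
{
    std::string name() const override { return "Preetham"; }
};

// Try the expensive model first; degrade gracefully instead of crashing.
std::unique_ptr<SkyModel> createSkyModel()
{
    try
    {
        return std::make_unique<BrunetonModel>();
    }
    catch (const std::exception&)
    {
        return std::make_unique<PreethamModel>();
    }
}
```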
Yes, that works, thanks.
I just wanted to run two programs synchronized via the RemoteSync plugin to see the difference, and had to fix #642 first. This may be helpful for comparisons: you need two separate user directories, run one instance with the --user-dir command-line option, and configure the models differently.
@10110111 Here are the RAW files of the zodiacal light/Venus/twilight scene. I can provide the other RAWs (~4GB) after you confirm download of this. https://1drv.ms/u/s!AuJ_d8wlPcFpiglPkSZss1vwExPm The EXIF data should contain most of the relevant information, but the manual 14mm lens did not provide e.g. f/stop information, so vignetting may change unexpectedly where there should only be a change in exposure time. However, this should not influence color balance. I fear now that color balance may differ between ISO settings. I have started calibrating/balancing them in Canon DPP, so color temperature differs between images.
I can provide the other RAWs (~4GB) after you confirm download of this.
I've downloaded this pack, thanks.
I have started calibrating/balancing them in Canon DPP, so color temperature differs between images.
Does this procedure alter the embedded JPEGs in the CR2s? I thought RAW processing programs never write into the RAW files...
Does this procedure alter the embedded JPEGs in the CR2s? I thought RAW processing programs never write into the RAW files...
Good question. The preview thumbnail in DPP changes, but I think e.g. IrfanView shows the "original" preview without applying the new settings. As I understand it, DPP adds a "processing recipe" into the CR2, which is just a TIFF variant with proprietary TIFF tags. Indeed, other processing packages add an XMP sidecar file instead. Given that the original data are not changed, both ways of processing are OK. Further details about CR2 and DPP should be available online; I would have to dig into it myself.
Here is the other: https://1drv.ms/u/s!AuJ_d8wlPcFpigryg8OaV3D6zCdA
Note that these were taken with a Canon 60Da, the astrophoto camera with higher red sensitivity. Color balance is a bit off for some materials, but in general it is still quite OK. Again, there is no f/stop data, but ISO and exposure time should be available in the EXIF, and color temperature, contrast, and exposure correction may have been changed in DPP.
Hmm, the transition from `IMG_1900` to `IMG_1901` is quite abrupt. I suppose only the aperture changed in this case, but the colors appear quite different. I've even tried taking the raw data, converting them to RGB and normalizing to leave only chromaticity data in the image, and it indeed changes abruptly.
This makes it hard to understand what colors there should actually be: it may be that the previous photos in the series are off, or that the subsequent ones are, or both...
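The normalization step mentioned above can be sketched like this (the function name is mine, not code from any tool used here): dividing each channel by the RGB sum discards overall brightness and leaves only the color ratios, so two exposures of the same scene should agree even if exposure time differs.

```cpp
#include <array>

// Divide each channel by the RGB sum so that only chromaticity
// (color ratios) remains; brightness information is discarded.
std::array<double, 3> toChromaticity(double r, double g, double b)
{
    const double sum = r + g + b;
    if (sum <= 0.0)
        return {0.0, 0.0, 0.0};
    return {r / sum, g / sum, b / sum};
}
```

By construction, scaling all three inputs by the same factor (e.g. a pure exposure change) leaves the result unchanged, which is why an abrupt change in the normalized images points at a real color shift.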
If I change, for all images, the "Image Style" to "standard" or "neutral", set the color temperature to e.g. 5200 K, and set contrast to low in DPP, the image colors become quite similar or continuous. But it indeed seems I changed the f/stop between those two, as brightness is higher in the earlier photo. The auto white balance made a huge jump between 1884/1885, so a manual color temperature is important. The custom image style "Flaat" was part of Magic Lantern, but I don't know what it really does in addition to lowering contrast, so better to switch away from it.
I think dcraw should be able to get the really raw sensor data (even pre-bayer) if all else fails.
I think dcraw should be able to get the really raw sensor data (even pre-bayer) if all else fails.
I used LibRaw, but that's the same level. The raw data themselves are inconsistent between exposures. I've actually asked a question on Photo.StackExchange about a similar problem; if it gets answered, we might get some hints about the reason for this issue, or workarounds for it.
Moving discussion started in #623 here.
Are they published anywhere? Would be interesting to compare the model with real-world photos.
This has an issue. Namely, the precomputation stage is quite resource-hungry: on my nVidia GeForce GTX 750 Ti it takes several tens of seconds to precompute the necessary textures, yielding a 128 MiB main texture when I set up the constants for a decent resolution (otherwise deep twilight is blocky in azimuth and jumpy in time, see this animation; if you don't notice the jumpiness, try watching it at a lower speed).
Moreover, until I added a couple of `glFinish` calls in my fork of PAS (branch "stellarium"), precomputation made my X11 session unusable (frozen mouse cursor etc.), and sometimes locked up the X server indefinitely. Also, sometimes the results were garbage (e.g. a missing red channel; I suppose this is due to some nvidia driver bugs, but it might be races in the implementation). And since a decent-resolution main texture is at least 128 MiB, the precomputation stage takes multiple times that amount of VRAM, so this looks unfriendly to mobile devices.
So, I'm not sure if we really want the user to have an option to change parameters from within Stellarium GUI.
Well, if you insist, OK. Currently my WIP `atmosphere` branch of Stellarium simply hacks into class `Atmosphere`. This might well be possible, but for each new planet we need its own profile of atmospheric density, scattering and absorption cross-sections, Mie scattering phase functions (based on a sensible particle size distribution), etc. With these data we can indeed try to precompute the corresponding textures (maybe increasing the number of scattering orders).
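A hedged sketch of what such a per-planet profile might look like (the struct and field names are mine, and the numbers are Earth-like placeholders commonly quoted in the atmospheric-scattering literature, not vetted data from the branch):

```cpp
#include <array>

// Hypothetical shape of a per-planet atmosphere profile for precomputation.
struct AtmosphereProfile
{
    double rayleighScaleHeight_km;                  // molecular density scale height
    std::array<double, 3> rayleighScattering_perKm; // per-wavelength coefficients (R,G,B)
    double mieScaleHeight_km;                       // aerosol density scale height
    double mieScattering_perKm;
    double miePhaseG;                               // phase-function asymmetry parameter
    int scatteringOrders;                           // more orders for denser atmospheres
};

// Earth-like placeholder values; a Mars or Titan profile would differ in
// every field and would need its own particle size distribution.
constexpr AtmosphereProfile earthLike()
{
    return {8.0, {5.8e-3, 13.5e-3, 33.1e-3}, 1.2, 3.996e-3, 0.8, 4};
}
```

The key point is that everything the precomputation consumes would have to be gathered per planet before a "Mars preset" could be more than a color tweak.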