RJVogel opened this issue 2 years ago (status: Open)
You should also post it at the Forum. Do you have any other reason to suspect the LSM versus other physics? Can you map the difference at some time? Is it confined to the surface and/or soil?
On Fri, Jul 8, 2022 at 12:30 PM Julian Vogel wrote:
After switching from WRF 4.3.3 to 4.4, the simulation output changed significantly: large positive biases in 2 m temperature (T2) relative to weather station measurements appear. The case is a real case in Germany, compared against 20 DWD weather stations. I used NoahMP as the LSM, so I assume it could be related to the latest changes there, although I am not aware that any of these changes are specifically important to my setup. I have previously run this case with many different WRF versions, grids and parameterizations, and the results were always more or less consistent, until I switched to 4.4.
The code compiled fine in all cases using the same libraries and the same gfortran compiler.
The metrics, averaged over 12 rural weather stations and compared to measurements, look like this in WRF 4.3.1 and 4.3.3 (the expected behaviour):
WRF 4.3.1 / 4.3.3 | MBE | RMSE |
--------------------------------
T2 / K | -0.069 | 1.476 |
Q2 / g/kg | -0.530 | 1.132 |
WS10 / m/s | 0.441 | 1.118 |
In WRF 4.4, the metrics show a large positive bias in T2:
WRF 4.4 | MBE | RMSE |
--------------------------------
T2 / K | 0.940 | 2.004 |
Q2 / g/kg | -0.692 | 1.270 |
WS10 / m/s | 0.574 | 1.189 |
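For reference, the MBE and RMSE above are computed from paired model/observation samples at the station locations; a minimal sketch of the calculation (the example values are placeholders, not data from this case):

```python
import numpy as np

def mbe(model, obs):
    """Mean bias error: positive means the model is warmer/moister/faster than observed."""
    model, obs = np.asarray(model, dtype=float), np.asarray(obs, dtype=float)
    return np.nanmean(model - obs)

def rmse(model, obs):
    """Root-mean-square error of the paired model/observation samples."""
    model, obs = np.asarray(model, dtype=float), np.asarray(obs, dtype=float)
    return np.sqrt(np.nanmean((model - obs) ** 2))

# Placeholder example: hourly T2 (K) at one station vs. the station record.
t2_model = [293.1, 294.8, 296.2, 297.0]
t2_obs = [292.4, 294.1, 295.3, 296.1]
print(f"MBE = {mbe(t2_model, t2_obs):.3f} K, RMSE = {rmse(t2_model, t2_obs):.3f} K")
```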
A difference is also seen when comparing the WRF output to sounding data from 3 stations. On visual inspection, the simulation output looks plausible in both 4.3.1 and 4.4, but the two versions are clearly different.
Attached is my namelist.input: https://github.com/wrf-model/WRF/files/9074254/namelist.input.txt
I hope this is the right place and form to post this issue.
Thanks for the quick reply. I will also post it in the forum.
I first suspected the urban physics, because my first test with WRF 4.4 used the BEP model and because there have been changes in the urban physics in the 4.3 bugfix releases and also in 4.4. But then I tested another case without an urban model and found a similar T2 bias. Both cases used different PBL and surface layer models; microphysics, cumulus, radiation and LSM were the same. I suspected NoahMP only because of the recent changes there and because I could not identify changes in the other physics options I was using.
I am going to run some more tests next week and I will also try to visualize the differences in the results.
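For mapping the difference at a given time (as asked above), a minimal sketch could look like this, assuming wrfout files from both runs on the same grid; the file names and time index are placeholders:

```python
import numpy as np
import matplotlib.pyplot as plt
from netCDF4 import Dataset

# Hypothetical file names for the v4.3.3 and v4.4 runs (same domain and grid).
old = Dataset("wrfout_d03_v433.nc")
new = Dataset("wrfout_d03_v44.nc")

t = 24  # output time index to inspect (placeholder)
dt2 = new.variables["T2"][t] - old.variables["T2"][t]  # 2 m temperature difference (K)
print("T2 difference: min %.2f K, max %.2f K, mean %.2f K"
      % (dt2.min(), dt2.max(), dt2.mean()))

# Quick map of the difference on the model grid
plt.pcolormesh(old.variables["XLONG"][t], old.variables["XLAT"][t], dt2,
               cmap="RdBu_r", vmin=-3, vmax=3, shading="auto")
plt.colorbar(label="T2 (v4.4 minus v4.3.3) [K]")
plt.savefig("t2_diff.png", dpi=150)
```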
Future discussion could initially be addressed to @.***, as people subscribed to this repo don't want to see long discussion threads here.
There are no significant code changes directly related to temperature. Most code changes from v4.3 to v4.4 in NoahMP are bug fixes and improvements in snowpack processes and tile drainage/irrigation schemes. One potential factor is that we added the canopy heat storage to the vegetation temperature calculations, but I am not sure whether this is the cause of your warm bias. Does your case include snow points (cold season)?
OK, so it is not a NoahMP issue. It looks like some updates in the PBL or radiation/cloud schemes? Have you looked at the downward shortwave radiation?
I deleted my comments because I am not able to replicate it with a fresh compile of v4.3, although I am very concerned as to how a binary compiled a couple of years ago with an almost identical stack can have such wildly different results from one compiled today... just a few Intel Compiler updates, OS updates, MPI changes...
@cenlinhe My case is in the summer season (June), with few clouds and no rain or snow.
@Plantain I already tested both ghg_input=1 and ghg_input=0, and the results were similar for me, although I don't know whether the changes in #1625 might have had other effects. I have also observed fewer clouds in my 4.4 runs.
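One way to check the cloud/radiation suspicion is to compare the downward shortwave radiation (the SWDOWN field in wrfout) at a station point between the two runs; a rough sketch, with hypothetical file names and placeholder station coordinates:

```python
import numpy as np
from netCDF4 import Dataset

def nearest_ij(nc, lat, lon):
    """Indices of the grid point closest to a station (simple squared-distance search)."""
    xlat = nc.variables["XLAT"][0]
    xlon = nc.variables["XLONG"][0]
    return np.unravel_index(np.argmin((xlat - lat) ** 2 + (xlon - lon) ** 2), xlat.shape)

old = Dataset("wrfout_d03_v433.nc")  # hypothetical file names
new = Dataset("wrfout_d03_v44.nc")

j, i = nearest_ij(old, 52.52, 13.40)  # placeholder coordinates near Berlin
sw_old = old.variables["SWDOWN"][:, j, i]
sw_new = new.variables["SWDOWN"][:, j, i]
print("Mean SWDOWN v4.3.3: %.1f W/m2, v4.4: %.1f W/m2, difference: %+.1f W/m2"
      % (sw_old.mean(), sw_new.mean(), (sw_new - sw_old).mean()))
```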
I tested the Noah LSM, and the differences between WRF 4.3.3 and WRF 4.4 are much smaller; I don't observe a general warm bias in T2. However, the outputs are still not the same, and in particular the soil temperatures show a small warm bias of 0.2 K to 0.3 K (though in this case it brings the 4.4 results closer to the measurements).
Here is again the comparison of my outputs to some measurement stations using Noah LSM:
WRF 4.3.3 | MBE | RMSE |
--------------------------------
T2 / K | -0.155 | 1.795 |
Q2 / g/kg | 0.000 | 1.029 |
WS10 / m/s | -0.260 | 1.269 |
TS005 / K | -1.629 | 3.426 |
TS020 / K | -0.201 | 2.450 |

WRF 4.4 | MBE | RMSE |
--------------------------------
T2 / K | -0.161 | 1.790 |
Q2 / g/kg | -0.002 | 1.064 |
WS10 / m/s | -0.088 | 1.248 |
TS005 / K | -1.319 | 3.194 |
TS020 / K | 0.071 | 2.466 |
In my case, a significant warm bias is seen only for NoahMP and not for Noah.
I did an offline Noah-MP simulation using the same atmospheric forcing and initial conditions but different versions (v4.3 and v4.4) over the continental US. In general, because of adding the canopy heat storage in v4.4, Noah-MP v4.4 showed a warm bias (~0.1K) in winter over Forest regions. In summer, the small warm bias (~0.1K) is mainly over grasslands with some slight warm bias (<0.05K) over Deciduous Broadleaf Forest and Mixed Forest.
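For the coupled WRF case, an analogous breakdown of the T2 difference by land-use category could be done along the lines of the sketch below, assuming LU_INDEX is available in the wrfout files; the file names are placeholders:

```python
import numpy as np
from netCDF4 import Dataset

old = Dataset("wrfout_d03_v433.nc")  # hypothetical file names
new = Dataset("wrfout_d03_v44.nc")

# Time-mean T2 difference and the (static) dominant land-use index
dt2 = (new.variables["T2"][:] - old.variables["T2"][:]).mean(axis=0)
lu = old.variables["LU_INDEX"][0].astype(int)

for cat in np.unique(lu):
    mask = lu == cat
    print("LU category %2d: mean dT2 = %+.2f K over %d grid points"
          % (cat, dt2[mask].mean(), mask.sum()))
```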
We may still do a full test of those versions here with the namelist discussed. We are just checking our personnel resources.
@RJVogel remind us again of the period and forecast length for your verification.
@dudhia my case was a 13-day period from 21 June 2010 until 4 July 2010 around Berlin, Germany. The namelist is attached in my initial post. I could also provide output files or additional config files, if needed.
Our test over the US shows a slight increase of 0.4 K in the minimum temperature, and no change in the maximum. Do you know whether your difference comes from the max or the min?
Our test shows temperatures several degrees warmer in local forest regions, with less cooling on clear nights. This accounts for most of the mean change in T2.
Okay, that's interesting.
The difference in my simulation was the average calculated at the locations of 18 weather stations in urban and rural areas. I could also directly compare the two outputs of 4.3.3 and 4.4 over the whole domain for my test case and calculate the min and max of the temperature difference.
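To separate the contributions of daily maxima and minima, one rough approach is sketched below, assuming hourly output covering full days; the file names are placeholders:

```python
import numpy as np
from netCDF4 import Dataset

def daily_extremes(path, hours_per_day=24):
    """Domain-mean daily T2 max and min from hourly wrfout output (full days only)."""
    t2 = np.array(Dataset(path).variables["T2"][:])  # (time, south_north, west_east)
    ndays = t2.shape[0] // hours_per_day
    t2 = t2[:ndays * hours_per_day].reshape(ndays, hours_per_day, *t2.shape[1:])
    return t2.max(axis=1).mean(axis=(1, 2)), t2.min(axis=1).mean(axis=(1, 2))

tmax_old, tmin_old = daily_extremes("wrfout_d03_v433.nc")  # hypothetical file names
tmax_new, tmin_new = daily_extremes("wrfout_d03_v44.nc")
print("Mean change in daily Tmax: %+.2f K" % (tmax_new - tmax_old).mean())
print("Mean change in daily Tmin: %+.2f K" % (tmin_new - tmin_old).mean())
```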