Closed bss116 closed 4 years ago
Added iwallmoist = 2 in 201. Test results are the same: 201 continues to work locally and breaks on HPC.
I made a few small edits to the example sims docs, mainly just adding a bit of detail to the thermal BCs, scalar sources etc.
@bss116 is the problem with 201 running locally something we have already tried to solve?
I have also noticed a potential problem in 502: for these driven simulations you need to ensure there is sufficient space between the last block in the streamwise direction and the end of the domain to avoid problems with the convective outflow BCs. I didn't notice this as a problem when running this case, so it seems okay, but it may be good to increase the buffer layer (pad in namoptions) for good practice.
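As a rough sanity check on that buffer, something like the following could be used before running. This is a hypothetical sketch, not actual uDALES pre-processing code; the function name, block layout, and domain size are all illustrative assumptions.

```python
# Hypothetical sketch: check the streamwise gap between the
# furthest-downstream block and the outlet, which should stay
# comfortably large for the convective outflow BCs in a driven run.

def outlet_buffer(domain_length_x, blocks):
    """Return the streamwise gap between the furthest-downstream
    block face and the end of the domain.

    blocks: list of (x_min, x_max) streamwise block extents in metres.
    """
    last_face = max(x_max for _, x_max in blocks)
    return domain_length_x - last_face

# Example: a 256 m domain with blocks ending at x = 96 m and x = 160 m.
gap = outlet_buffer(256.0, [(32.0, 96.0), (128.0, 160.0)])
print(gap)  # 96.0 m of buffer before the outlet
```

In practice one would compare this gap against the pad setting in namoptions and increase pad if the gap looks too tight.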
Running on my Mac works fine, the problem is on HPC (#82).
Yeah, I'm fine with that; we will only need to re-run the pre-processing for it, right? You should also mention this somewhere, e.g. in the examples document, or probably even better in the pre-processing docs.
I will redo preprocessing with larger buffer and add a line to the driver simulation details I put in the docs.
@tomgrylls can you please add details to the driver parameters in 501 (example docs)? I just realised that I should have probably increased tdriverstart and added tdriverdump when running 501 for 1000 seconds, right? Will have to do this again for the driver files in 502.
I saw there's lots of info on the driver parameters already in the other documentation, sorry. I'll just add a couple of sentences to the example docs. I was thinking of re-running 501 on the cluster with runtime 1000 s, tdriverstart = 950, dtdriver = 1 and driverstore = 51, what do you think about that? or should we keep the input files minimal and just use driverstore = 11?
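For reference, the proposed settings would look roughly like this in 501's namoptions. The namelist name and exact parameter set here are written from memory, so treat this as a sketch and double-check it against the pre-processing docs:

```fortran
&DRIVER
idriver      = 1     ! precursor simulation: write driver planes (assumed flag)
tdriverstart = 950.  ! start writing 50 s before the 1000 s runtime ends
dtdriver     = 1.    ! write a y-z plane every second
driverstore  = 51    ! 50 s of planes plus the one at tdriverstart
/
```

With these numbers, planes are written at t = 950, 951, ..., 1000 s, which is where driverstore = 51 comes from.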
are there any restrictions on dtmax in &RUN of the driven simulation 502? Does it need to be <= dtdriver of the precursor simulation 501?
I think longer driver settings can be a good idea as you posted above. The driver files consist of y-z planes so their size shouldn't be too much of an issue at this scale of simulation.
I think it should be fine even if it is larger than dtdriver as the code will interpolate between the nearest two timesteps in the driver files. Ideally dt = dtmax = dtdriver in both simulations to avoid the need for interpolation but this is also not necessary.
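The interpolation itself is just linear blending in time between the two nearest stored planes. A minimal sketch of the idea (not the actual uDALES implementation; plane values are plain floats here for brevity, where in reality they would be y-z arrays of velocity and scalar fields):

```python
# Sketch of linear time interpolation between two stored driver planes,
# used when the simulation time falls between two driver timestamps.

def interpolate_driver(t, t0, plane0, t1, plane1):
    """Linearly interpolate a driver plane to simulation time t,
    with t0 <= t <= t1 the nearest stored driver times."""
    if t1 == t0:
        return plane0
    w = (t - t0) / (t1 - t0)
    return (1.0 - w) * plane0 + w * plane1

# Example: driver planes stored at t = 950 s and 951 s (dtdriver = 1 s).
print(interpolate_driver(950.25, 950.0, 2.0, 951.0, 4.0))  # 2.5
```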
for 50 seconds it is 1.8 MB per file, so roughly 12 MB in total. Should we go with that, or increase even further to 100 s?
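The quoted sizes are consistent with a back-of-the-envelope estimate. Assuming (hypothetically, since the actual 501 grid isn't stated here) a 64 x 64 y-z plane of double-precision values and 51 stored time levels:

```python
# Rough driver-file size estimate. The plane dimensions are an
# illustrative assumption, not the actual 501 grid.
jtot, ktot = 64, 64    # assumed y-z plane dimensions
bytes_per_value = 8    # double precision
driverstore = 51       # stored time levels

size_mb = jtot * ktot * bytes_per_value * driverstore / 1e6
print(round(size_mb, 2))  # 1.67 MB per field, close to the 1.8 MB quoted
```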
Okay, no need to add any restrictions on that then. It will still interpolate if the actual time step is below dtmax due to CFL, right? Doesn't sound like there is an easy way around interpolation.
I think either is fine. Real simulations will need to run for thousands of seconds, so both of these just showcase how to do it in the example.
Yes, the timestep being below dtmax is also fine. The way to avoid interpolation is to set dtmax in both simulations, and dtdriver in 501, to the same value below the minimum dt that we get in 501. The timestep will then be constant and a driver plane will be written every timestep. This is easier to do when you already know what the timestep in the driver simulation is - which we do.
For example, if the minimum timestep in 501 is 0.55 seconds, then we run it now with dtmax = 0.5 and dtdriver = 0.5, and then use these driver files for 502 with dtmax = 0.5 again.
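Picking that fixed value could be done along these lines. This is just a sketch of the rule of thumb described above; the rounding granularity is an assumption, not anything prescribed by uDALES:

```python
# Sketch: choose a fixed dtmax = dtdriver just below the minimum
# timestep observed in the precursor run, so the timestep stays
# constant and a driver plane is written every step.
import math

def pick_fixed_dt(min_observed_dt, decimals=1):
    """Round the observed minimum timestep down to `decimals` places."""
    scale = 10 ** decimals
    return math.floor(min_observed_dt * scale) / scale

# Example from the discussion: a minimum dt of 0.55 s in 501
# suggests dtmax = dtdriver = 0.5 in both 501 and 502.
print(pick_fixed_dt(0.55))  # 0.5
```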
Good point. I'll add this as a remark to the example simulations.
This is now going into the details, but could we also use a larger dtmax in the driver simulation and only restrict dtdriver? Would that mean it uses the larger dt before tdriverstart and goes to smaller timesteps later, e.g. in 501 with dtmax = 2, dtdriver = 0.5? I guess if not, we could get the same result by running a warmstart simulation for the driver...
I have now run 502 with driver inputs for 50 s at 0.5 s time steps. The dt in 501 after 1000 s actually went down to 0.28, and in 502 the dt is only 0.17, so we still need to interpolate the driver input. After 50 s runtime, the turbulence hasn't quite reached the end of the domain yet. Do we care, or is it alright like that for the example case?
I am happy for this to stay as it is - with driver fields interpolated and inflow not reaching the edge of the domain. But equally we could just set dtmax = 0.1 for both simulations and run for 100 s! I am happy either way
I will leave it as it is for now, but we can change it anytime later -- see https://github.com/uDALES/u-dales/pull/84#issuecomment-631319096.
Add example simulation setups in the examples folder with the namoptions reduced to a minimal version needed for the specific case. These cases should include:

- neutral stability case with blocks -- which forcing? one case for each forcing?
- non-neutral case with temperature -- different forcings?
- scalar release case
- full energy balance

please expand the list!