uDALES / u-dales

uDALES: large-eddy-simulation software for urban flow, dispersion and microclimate modelling
https://udales.github.io/u-dales
GNU General Public License v3.0

Add example simulations #39

Closed bss116 closed 4 years ago

bss116 commented 4 years ago

Add example simulation setups in the examples folder with the namoptions reduced to a minimal version needed for the specific case. These cases should include:

please expand the list!

bss116 commented 4 years ago

I will update the example simulation with new namoptions for flow rate forcing and check the neutral simulations. @samoliverowens or @ivosuter can you take care of a full EB example? Further examples can be added later on.

bss116 commented 4 years ago

I opened up a new branch for the example simulations: https://github.com/uDALES/u-dales/tree/bss/example-simulations. Here are more details on the different options that could be included in the examples. I would suggest lumping a few of these together:

  1. building layouts:

    • canyon
    • staggered/aligned
    • complex layout: either realistic geometries or from urban landscape generator
  2. zgrid:

    • equidistant
    • stretched
  3. lateral boundary conditions:

    • periodic
    • inflow/outflow
  4. forcing (options for u, v, u+v):

    • pressure gradient
    • flow rate
    • free stream velocity
  5. scalars:

    • none
    • point source
    • line source
  6. temperature and buoyancy:

    • none (neutral simulation)
    • various options for temperature and buoyancy forcings (I'm not familiar with these...)
    • full energy balance
samoliverowens commented 4 years ago

I've just uploaded a set of example simulations differing by geometry and whether the energy balance is on. The rest of the parameters don't correspond to any particular setup yet, and are meant to be changed.

bss116 commented 4 years ago

Great! With the cleaned-up namoptions it will be easier to set up the different options nicely. There are only a few parameters we need to set for every simulation; the others then depend on the setup. Here is a minimal list, as an example for a neutral simulation with volume flow-rate forcing:

  &RUN
  iexpnr     = 503
  runtime    = 14
  trestart   = 13
  ladaptive  = .true.
  randu      = 0.01
  /

  &OUTPUT
  lxytdump   = .true.
  tsample    = 2.0
  tstatsdump = 10
  /

  &DOMAIN
  imax  = 64
  jtot  = 64
  kmax  = 64
  xsize = 128
  ysize = 128
  /

  &BC
  wtsurf = 0.0
  wqsurf = 0.0
  thls   = 288.0
  z0     = 0.01
  z0h    = 6.7e-05
  /

(these are needed for the bottom subroutine!)

  &WALLS
  nblocks  = 41
  nfcts    = 57
  iwallmom = 3
  /

  &PHYSICS
  ps         = 101500.0
  igrw_damp  = 0
  uflowrate  = 1.5
  vflowrate  = 1.5
  luvolflowr = .true.
  lvvolflowr = .true.
  /

  &DYNAMICS
  ipoiss = 0
  /

  &NAMSUBGRID
  lvreman = .true.
  /

bss116 commented 4 years ago

@samoliverowens I moved your documentation on the boundary conditions to this branch, because I think it is a great start for guidelines on how to set up simulation parameters. I also added the neutral simulation described above as example setup 503, and a file namoptions.xxx that has all switches set to their defaults, sorted as in the current namoptions makeover (#64). Once everyone is happy with the new categorisation and the PR is approved, we can use this as a basis for setting up the examples.

bss116 commented 4 years ago

@samoliverowens I plotted the layouts and it seems that there is an error in example 300: there is one floor facet that overlaps with the last block (41 56 57 64 1 16): 41 56 41 64 0 0 34 0 0 0 0

bss116 commented 4 years ago

My suggestion for example setups:

  1. the most basic. lcube (100), periodic lateral BC, a constant pressure gradient in x
  2. lcanyons (200) with volume flow forcing in u and v, some scalars
  3. lstaggered with small cubes (similar setup to 150), some temperature (and a larger domain if needed for that)
  4. lblocksfile (500), a stretched z grid, inflow/outflow (can we always do this, or do we need a driver simulation + driver layout?)
  5. llidar (900), energy balance

What do you think about that? We can of course always add more features to it, this is just a start to get the geometry and basic switches set up.

samoliverowens commented 4 years ago

@samoliverowens I plotted the layouts and it seems that there is an error in example 300: there is one floor facet that overlaps with the last block (41 56 57 64 1 16): 41 56 41 64 0 0 34 0 0 0 0

I agree that's not what we want, though I just checked and it matches what the old pre-processing routines produce, so it looks like a problem with how createfloors works for this case. @ivosuter or @tomgrylls may be able to shed some light on it.

samoliverowens commented 4 years ago

My suggestion for example setups:

  1. the most basic. lcube (100), periodic lateral BC, a constant pressure gradient in x
  2. lcanyons (200) with volume flow forcing in u and v, some scalars
  3. lstaggered with small cubes (similar setup to 150), some temperature (and a larger domain if needed for that)
  4. lblocksfile (500), a stretched z grid, inflow/outflow (can we always do this, or do we need a driver simulation + driver layout?)
  5. llidar (900), energy balance

What do you think about that? We can of course always add more features to it, this is just a start to get the geometry and basic switches set up.

That sounds good, and with 5. we should have temperature & buoyancy on too.

tomgrylls commented 4 years ago

I see the benefit of having a limited number of simulations of increasing complexity, as opposed to a comprehensive set that systematically shows the user all options (this would result in a lot of simulations, as there are a lot of variables - see below). I think to do it like this we need to create some kind of table/documentation for these simulations where it is very clear to the user what is changing across them. The main variables being:

Other variables being:

The above set of 5 simulations can provide good coverage of these main variables. I would suggest we do not number them like this if the numbering does not have some systematic meaning. We could come up with some simple system (e.g. 0xx - neutral, 1xx - non-neutral, 2xx - energy balance, 5xx - driver sims, 8xx - trees etc.). To add some specificity, here are my recommendations for the above:

  1. 001 - Flat simulation, neutral, periodic lateral BCs, const. pressure gradient in x.
  2. 002 - lcube, neutral, periodic lateral BC, const. pressure gradient in x.
  3. 101 - lcanyons, non-neutral: isothermal BC from all IBM and top, periodic lateral BCs for all, volume flow rate forcing in x, passive scalar line sources.
  4. 102 - lstaggered, non-neutral: constant flux thermal BC from roads, roofs and top, periodic lateral BCs for momentum and temp + inflow-outflow for passive scalar, volume flow rate forcing in x and y, two scalar point sources.
  5. 501 and 502 a) 501 - the driver simulation based off 101 with no passive scalars. b) 502 - lblocksfile non-neutral: isothermal IBM, driven simulation (inflow-outflow all), forcing: driver sim.
  6. 201 - llidar, non-neutral: energy balance, periodic BCs, coriolis forcing, nudging, stretched z-grid.

Vary domain sizes and fielddump/ statsdump outputs across all of these.

  1. 801 - tree simulation I have set up
  2. 901 - purifier simulation I have set up.

This does not cover all the variables discussed, and ideally I think these examples should be set up systematically by varying the code functionalities around one simple base case. But that would require a large number of example simulations that may also be messy, so I think something like this sounds good.

Thoughts on this? I should be able to set all these up pretty quickly? Would we want to remove what is currently in bss/example-simulations and only have the above?

dmey commented 4 years ago

Thoughts on this? I should be able to set all these up pretty quickly? Would we want to remove what is currently in bss/example-simulations and only have the above?

I think this would be really good. 👍

@tomgrylls Are all the above a superset of the current tests? If this is the case, we could simply run tests based on these name lists, thus removing duplication and making testing easier to maintain. Any thoughts?

tomgrylls commented 4 years ago

The tests would be more thorough based on the above - but this would require running quite a few more separate simulations each time in testing? We could base the tests off a limited number of these, e.g. 002, 501/502 and 201.

It depends slightly on the size of the domains/ resolutions that we set in the above. If these are relatively small for the chosen example simulations then all that would be necessary is to use the same input files but with runtime and tfielddump reduced accordingly. If the domain size has to change then so will the .inp. files in tests. I think this is a good idea, but unless we change domain sizes it would mean that the example simulations are pretty low quality, with coarse resolutions and relatively small domains.
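For instance, a test could reuse an example's input files unchanged and only shrink the run length and output interval, roughly like this (a sketch; whether tfielddump actually sits in &OUTPUT is an assumption on my part):

  &RUN
  runtime    = 5.
  /

  &OUTPUT
  tfielddump = 5.
  /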

tomgrylls commented 4 years ago

@bss116 @samoliverowens I will make a start on updating the existing example simulation namoptions files in bss/example-simulations to work with the namoptions restructuring. As I do this, I will try to produce the above set of simulations. Let me know of any comments/other things you would like to be done.

bss116 commented 4 years ago

I think all of the above sounds really good; yes, feel free to remove the current example simulations, they were just initial ideas. I wouldn't worry about the naming of the simulations too much, I'd simply call them 001 -- 006. We will write a separate document (maybe part of the setting-up-simulations docs?) where we give an overview of the setup and the required parameters/switches.

Personally I wouldn't include a flat simulation in the examples, as this is not a common use case for uDALES (though of course we keep the functionality, as it may be used as an additional control case).

For the domain size, I suggest keeping at least one of them relatively small (maybe the neutral one with cubes) so that it can be used as a quick example/debug simulation.

dmey commented 4 years ago

It depends slightly on the size of the domains/ resolutions that we set in the above

What if we keep the size unchanged and instead run for a very short time? Would that be enough to capture some meaningful data that we can use to benchmark one uDALES version against another? If that is the case, then we can simply patch the namelist programmatically before running uDALES. We can do that for the domain size too of course, but if I remember correctly there is a bit more to it than just changing a couple of vars in the namelist!?

tomgrylls commented 4 years ago

@dmey how many cores do we run on the CI? We are never going to have a fully developed simulation as a test unless we use warmstart files (which could be a good idea to test the model in the state we are interested in - continuous, fully turbulent). If we are not going to do this then, seeing as we are unlikely to let it fully develop, we may as well run simulations for small periods, e.g. 5 seconds. Even with these short run times we will still want fairly small numbers of cells, ~1000000 or less, as computation time rises rapidly, especially if we are limited to a few cores.

I have made a start on the above set of example simulations. I have made 001, 002, 101, 102, 201. I still need to 1) make 501, 502, 2) adjust domain sizes, run times and outputs across these and 3) run tests of all these simulations. There may be mistakes in what I have done so far so I would wait until Tuesday before reviewing them as I will likely make changes.

For 201 I have added the image file for the LIDAR as I used this in preprocessing but the file is 9.6 MB. Do we want this in the experiment folder or elsewhere? Or at all?

dmey commented 4 years ago

@dmey how many cores do we run on the CI? We are never going to have a fully developed simulation as a test unless we use warmstart files (which could be a good idea to test the model in the state we are interested in - continuous, fully turbulent). If we are not going to do this then, seeing as we are unlikely to let it fully develop, we may as well run simulations for small periods, e.g. 5 seconds.

2, but just to clarify in case this is not clear: the current tests (integration and regression tests) are not designed to tell us if the model is sound, but simply to check whether we introduce a change that impacts the simulation results from one version/branch to another. If you introduce a change in a new commit, we check against master that that new commit has not changed the results -- obviously, given the large number of switches in the name list, one would need to run the model for all permutations and check all output quantities from uDALES to be sure. Here we take a more pragmatic approach and only run the model for a set of name lists and check the outputs we deem important. I think it's fine to run short simulations if we are looking for equality between two uDALES versions (assuming that the switches for the extra parametrizations are active for that short time period). I do not think it is feasible in the current setup to investigate what effect changes will have after n integration time steps.

Even with these short run times we will still want fairly small numbers of cells, ~1000000 or less, as computation time rises rapidly, especially if we are limited to a few cores.

Is this something we can easily change in the name list? I.e. n_blocks? Or will we need to run some pre-processing? If it's straightforward then I can just apply a patch to the name list from the tests...

tomgrylls commented 4 years ago

@dmey yes I understand what the checks are there for. My concern is that with coarse resolution, small domains and small run times the flow does not even get close to reaching the fully developed, continuously turbulent state that the LES is intended to model. The flow generally remains laminar, with a small boundary layer developing in the lowest few cells (2 or 3 cells, depending on the initial perturbation/randomisation). The tests are therefore not indicative of the typical advection and subgrid-scale diffusion that we want to model. However, I am sure that any change to these will still result in a change in this initial development, so perhaps it does not matter anyway. If we are not concerned by this then we can run the tests for 5 s as I mentioned above. If we did feel this was a problem we could have the tests as a warmstart, where they read in files from a developed simulation and then model a 5 s period of a more realistic case.

The example simulations are all now running with a 64x64x64 domain size. I think this should be increased if we want these to be representative of typical simulations. It depends on exactly what we want these examples to do, @bss116? I can easily increase the size of the domain and the run time of these. The run time is not an issue for using them as tests, as it only requires values in namoptions to be changed. However, changing the domain size means that different blocks.inp. files are needed, which means the preprocessing must be rerun.

The latest commit on bss/example-simulations has got it to the point where all the example simulations run and where you can run the preprocessing (from so-tg/preproc) on all of these to produce these set-ups. The final adjustments to make are 1) run times, 2) outputs, 3) domain size/ block configurations. After this we may want to discuss documentation and producing some example postprocessing of them etc.

bss116 commented 4 years ago

Yeah, so I would say definitely increase the size where suitable; in my opinion the example cases should be realistic simulations. We should do a mix of simulations that can easily be run on a local machine (e.g. 64x64x64 for 001 and 002), and for the non-neutral ones we should have a domain size that is justifiable, but keep it as small as possible. However, if we do this, then we can/should probably not have the same simulations for examples and test cases. But for the tests the geometry is probably not that important anyway. Could we even just use a single test geometry and test it with different namoption parameters...?

The namoptions of the examples still list a few unnecessary parameters (where they do not differ from the defaults); as discussed, I think we should only put in the non-default switches to make it clear what has to change to set them up. I'm happy to go over them and do this. For the sake of variety, shall we maybe use smaller cubes in 102 (blockwidth and canyonwidth 8 instead of 16)? Is the geometry of 502 coming from somewhere specific or is it just an example setup? If it's just an example, I would increase the building density; we could use, for example, one of my heterogeneous layouts.

tomgrylls commented 4 years ago

Yes, it would be good to get rid of unnecessary namoptions. All of those in the &INPS section are necessary for preprocessing at the moment, I think.

Agreed smaller cubes in 102 - sorry I had a note to refine the blocks across all of them but forgot to do this. blockwidth = 8 is good.
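In the &INPS section that would amount to something like the following (a sketch; only lstaggered, blockwidth and canyonwidth are taken from this thread, the rest of the section stays as generated by the preprocessing):

  &INPS
  lstaggered  = .true.
  blockwidth  = 8
  canyonwidth = 8
  /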

I randomly made the geometry of 502 by writing the build.502 file manually. Happy for something else to be used.

On a similar note, 201 uses LIDAR data, but as the domain is quite small it only has a couple of buildings.

bss116 commented 4 years ago

Yes would be good to get rid of unnecessary namoptions. All in the &INPS section at the moment are necessary for preprocessing I think.

I made a start on this for the neutral simulations 001, 002. Please have a quick look that I didn't delete anything important. I also changed to ipoiss = 0 and iwallmom = 3. Or should we keep = 2 as long as this is not thoroughly tested? If = 3, do we then still need the Tfacinit.inp at all?

Agreed smaller cubes in 102 - sorry I had a note to refine the blocks across all of them but forgot to do this. blockwidth = 8 is good.

Okay, will change this along with simplifying the namoptions.

I randomly made the geometry of 502 by writing the build.502 file manually. Happy for something else to be used.

On a similar note, 201 uses LIDAR data, but as the domain is quite small it only has a couple of buildings.

Yeah, the 201 domain should probably increase as well. What would you recommend as domain sizes? When we know the size, I can make us a new random geometry for 502.

bss116 commented 4 years ago

@tomgrylls do we set BCbotT = 2 in 101, or does it stay at the new default BCbotT = 1?

tomgrylls commented 4 years ago

I made a start on this for the neutral simulations 001, 002. Please have a quick look that I didn't delete anything important. I also changed to ipoiss = 0 and iwallmom = 3. Or should we keep = 2 as long as this is not thoroughly tested? If = 3, do we then still need the Tfacinit.inp at all?

Yes forgot about what we did to set up POISS_FFT that is a good change.

There is a switch in modstartup that actually changes iwallmom and BCbotm to = 3 if the simulation is neutral (ltempeq = .false.) so this doesn't matter with the current code. Perhaps it is good practice to set it to 3 - or we forget about this option (= 3) in namoptions altogether as it is automatically dealt with in the code.
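(For reference, the behaviour described is roughly equivalent to the following - a sketch of the idea, not the literal modstartup code:)

  ! sketch of the described neutral-run behaviour, not the actual modstartup code
  if (.not. ltempeq) then
     iwallmom = 3   ! temperature-independent wall function for momentum
     BCbotm   = 3
  end if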

Yes, from what I can see by having a quick look at the code, we do not need Tfacinit if iwallmom = 3, i.e. if ltempeq = .false. Perhaps we should edit the preprocessing to not write this file under these conditions - #60.

Yeah, the 201 domain should probably increase as well. What would you recommend as domain sizes? When we know the size, I can make us a new random geometry for 502.

I spoke with Maarten about domain sizes and he seemed to think all the example simulations being pretty small was a good idea. "No larger than 128x128x128." So we could scale these up to that - or just leave as 64^3 and refine the blocks?

@tomgrylls do we set BCbotT = 2 in 101, or does it stay at the new default BCbotT = 1?

BCbotT will not be used as we have blocks at the lowest level. Perhaps we could actually edit 001 to use this 'bottom' functionality? All the BCbotx options only have an effect if we do not have blocks all over level kb.

bss116 commented 4 years ago

I made a start on this for the neutral simulations 001, 002. Please have a quick look that I didn't delete anything important. I also changed to ipoiss = 0 and iwallmom = 3. Or should we keep = 2 as long as this is not thoroughly tested? If = 3, do we then still need the Tfacinit.inp at all?

Yes forgot about what we did to set up POISS_FFT that is a good change.

When we are confident enough that it works fine, we should also set it as default. There was some configuration where FFT wouldn't work, what was that again? We should add that information to the namoptions-overview doc.

There is a switch in modstartup that actually changes iwallmom and BCbotm to = 3 if the simulation is neutral (ltempeq = .false.) so this doesn't matter with the current code. Perhaps it is good practice to set it to 3 - or we forget about this option (= 3) in namoptions altogether as it is automatically dealt with in the code.

Hm, that's a good question. What we definitely want to avoid is using temperature in the wall functions when that is not intended, simply because of some leftover variables lying around in the namoptions... I suppose ltempeq = false is good enough as a check, as this would easily be spotted if set wrong. So I'd say we can remove iwallmom = 3 from the options (in the documentation) and let the code deal with it automatically.

Yes, from what I can see by having a quick look at the code, we do not need Tfacinit if iwallmom = 3, i.e. if ltempeq = .false. Perhaps we should edit the preprocessing to not write this file under these conditions - #60.

Yes let's do that and I will remove the files from the example directories.

Yeah, the 201 domain should probably increase as well. What would you recommend as domain sizes? When we know the size, I can make us a new random geometry for 502.

I spoke with Maarten about domain sizes and he seemed to think all the example simulations being pretty small was a good idea. "No larger than 128x128x128." So we could scale these up to that - or just leave as 64^3 and refine the blocks?

I changed the block width and height to 8 in 201, so for this it is probably okay. For the more realistic cases (LIDAR and random blocks) I think we could go for 128x128x128.

@tomgrylls do we set BCbotT = 2 in 101, or does it stay at the new default BCbotT = 1?

BCbotT will not be used as we have blocks at the lowest level. Perhaps we could actually edit 001 to use this 'bottom' functionality? All the BCbotx options only have an effect if we do not have blocks all over level kb.

Okay. Yes I think using the subroutine bottom in 001 is a good idea. This makes this example the most similar to the original DALES code.

bss116 commented 4 years ago

I have now updated the namoptions of the example files, removing the parameters that do not differ from default and I also tested the simulations. Most are working fine, but I have issues with 001 and 502. Unfortunately, due to the bug in the Mac compiler (#46), my backtrace still doesn't work properly and I cannot see exactly where the error is. @tomgrylls can you quickly check these two? If they crash then I did remove something essential and we should revert, otherwise I'll need to investigate further what's going on...

PS: removing the Tfacinit.inp files does not work even for neutral simulations, because the code still wants to read them in. We should change that at some point, but for now I leave them in the neutral examples setup.
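One possible way to change that later would be to guard the read on ltempeq, roughly as below (a sketch only; the unit and expnr-suffix names are illustrative, not the actual code):

  ! sketch: only read facet initial temperatures when the temperature
  ! equation is active, so neutral runs would not need a Tfacinit.inp
  if (ltempeq) then
     open (ifinput, file='Tfacinit.inp.'//cexpnr)
     ! ... read facet temperatures ...
     close (ifinput)
  end if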

tomgrylls commented 4 years ago

There was some configuration where FFT wouldn't work, what was that again?

I seem to remember FFT will not work for inflow-outflow momentum BCs and for non-equidistant x grids? In modstartup we have the following checks (it is not immediately clear from this about inflow-outflow... I can double check this and change the namoptions doc):

  if ((.not. inequi) .and. (ipoiss == POISS_CYC) .and. (.not. linoutflow)) then
     write(*, *) "WARNING: consider using FFT poisson solver for better performance!"
  end if

  if ((ipoiss == POISS_FFT) .and. (inequi)) then
     write(*, *) "ERROR: POISS_FFT requires equidistant grid. Aborting..."
     call MPI_FINALIZE(mpierr)
     stop 1
  end if

So I'd say we can remove iwallmom = 3 from the options (in the documentation) and let the code deal with it automatically.

Agreed.

For the more realistic cases (LIDAR and random blocks) I think we could go for 128x128x128.

Yes I think using the subroutine bottom in 001 is a good idea. This makes this example the most similar to the original DALES code.

Perhaps we should edit the preprocessing to not write this file under these conditions - #60.

Yes let's do that and I will remove the files from the example directories.

All on my to-do list.

I have now updated the namoptions of the example files, removing the parameters that do not differ from default and I also tested the simulations. Most are working fine, but I have issues with 001 and 502. Unfortunately, due to the bug in the Mac compiler (#46), my backtrace still doesn't work properly and I cannot see exactly where the error is. @tomgrylls can you quickly check these two? If they crash then I did remove something essential and we should revert, otherwise I'll need to investigate further what's going on...

PS: removing the Tfacinit.inp files does not work even for neutral simulations, because the code still wants to read them in. We should change that at some point, but for now I leave them in the neutral examples setup.

Okay I will check 001 and 502 too. Should be straightforward to not read Tfacinit I will look into this. I will try to do the above by the end of the day tomorrow.

tomgrylls commented 4 years ago

I am consistently getting the same error when doing the tests on 001 and 502 in debug mode. Line 192 in modpois.f90

dzhl(0:kmax+1) = dzh(kb-kh:ke+kh)

tries to access dzh(-1) which does not exist. Looking at where dzhl is used, there is a relatively straightforward fix to this. But am I doing something wrong as this problem was not arising on the POISS_FFT PR? Also I am getting this error for all example simulations now that ipoiss = 0 so this may not even be the same problem @bss116 had above?

bss116 commented 4 years ago

Some updates summarised:

  1. iwallmom = 3 removed from options, this is done automatically by the code
  2. 102 updated with smaller blocks, but this resulted in bug #79
  3. bug caused by iPOISS = 0 fixed in PR #76
  4. found a bug with lflat that requires building temperature to be set even for neutral simulations #77
  5. suggestions for removing the Tfacinit.inp file for neutral simulations #78
  6. added new building file for 502

TODOs:

  1. fix #79
  2. re-run pre-processing for 201, 501, 502 with increased resolution and new building file for 502 (buildings.502)
  3. implement #78 and remove facets.inp, walltypes.inp from 001, Tfacinit.inp from 001, 002
  4. change 001 to use bottom subroutine instead of floor covered with large block
  5. documentation for example simulations
  6. change 102 to be a warmstart

and at some point we should fix #77.

tomgrylls commented 4 years ago

I have debugged a range of miscellaneous issues found when running the set of example simulations on the HPC in debug mode. I will make a PR for these changes soon. I have also swapped the block configurations of 502 and 201, as the energy balance preproc becomes very slow with complex, non-grid-conforming geometries.

To add to the above TO-DOs I think we should make one of the example simulations a warmstart - this is something commonly done with the code and I am confident when I do this we will find one or two similar bugs in the warmstart section of modstartup.

bss116 commented 4 years ago

To add to the above TO-DOs I think we should make one of the example simulations a warmstart - this is something commonly done with the code and I am confident when I do this we will find one or two similar bugs in the warmstart section of modstartup.

Good idea! It should be one of the small ones; already for these, the two init restart files are about 15 MB each. Should we just change 101 or 102 to be a warmstart, or do you want to set up a new one?

tomgrylls commented 4 years ago

I think either 101 or 102 would be fine! If you do this then keep in mind that for the scalar fields we have inits... files, and to read these we have to set lreadscal = .true. If file sizes are a problem we could always just warmstart the other fields and not the scalar fields, but perhaps it is best to do both so we have it as an example.
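For reference, the warmstart switches would look roughly like this (a sketch; apart from lreadscal the names follow the usual DALES-style &RUN options, and the restart file name is only a placeholder):

  &RUN
  lwarmstart = .true.
  startfile  = 'initd...102'   ! placeholder for the actual restart file name
  lreadscal  = .true.          ! also read the inits... scalar fields
  /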

bss116 commented 4 years ago

Yes, I can do this. The scalars are not a problem, the inits are only 1 MB each.

tomgrylls commented 4 years ago

102 is now a warmstart: fully developed, with init files at 1000 s.

bss116 commented 4 years ago

The horizontal domain of 501, 502 is now increased to 128 x 128, and the driver files of 502 are from a 1000 s run. I ran into issues when also trying to increase the vertical domain size and when trying to run 502 with iPOISS = 0; I will document these in a separate issue.

tomgrylls commented 4 years ago

I am pretty sure non-periodic lateral BCs are not supported by the FFT Poisson solver, so this is not a surprise. I have tried to find my notes on this but am struggling to get a definitive answer. But Tomas introduced the cyclic reduction scheme in order to run inflow-outflow simulations, which suggests this is the case.

bss116 commented 4 years ago

Ok, I see. It breaks somewhere in modthermodynamics (I will give you the exact line in a little bit). If it's the case that we do not support this setup, then we just need to make sure to include this in the startup checks, so it breaks with a meaningful error message.
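If that is confirmed, a check along the lines of the existing modstartup ones would do (a sketch):

  if ((ipoiss == POISS_FFT) .and. (linoutflow)) then
     write(*, *) "ERROR: POISS_FFT does not support inflow-outflow BCs. Aborting..."
     call MPI_FINALIZE(mpierr)
     stop 1
  end if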

tomgrylls commented 4 years ago

I was meaning to add exactly this check but wanted to first confirm somewhere in writing that this is the case.

bss116 commented 4 years ago

Okay, that's a good idea. Just for the record, 502 with iPOISS == 0 breaks in modthermodynamics line 406: presf(k) = presf(k-1)**rdocp - grav*(pref0**rdocp)*dzh(k)/(cp*thvh(k)). This is a similar case to #77, just with index k instead of k-1 for dzh, thvh.
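(In symbols, with rdocp = $R_d/c_p$, that line computes the hydrostatic increment in Exner-like $p^{R_d/c_p}$ space from the half-level grid spacing and virtual potential temperature, roughly

$$\mathrm{presf}(k) = \mathrm{presf}(k-1)^{R_d/c_p} - \frac{g\, p_{\mathrm{ref0}}^{R_d/c_p}\, \Delta z_h(k)}{c_p\, \theta_{v,h}(k)},$$

so it relies on dzh and thvh being defined at level k.)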

bss116 commented 4 years ago

I have now completed test runs of the example simulations. Results:

I suggest the following: we move on to documenting what the example cases represent and the corresponding parameters/switches, and merge this branch as soon as this is done. Before release 0.1.0 we fix the bugs documented in #78, #79 and whatever is going on on the cluster, and update the example input files accordingly. Do you agree, @tomgrylls?

tomgrylls commented 4 years ago

Yes agreed. Perhaps we should also just adjust the outputs of each simulation. For example, the simulation with infinite canyons should output ytdump, while others are better suited to tdump and/ or xytdump.

I am a bit confused about the 3D-temperature output fields, where some x or y slices have a uniform temperature distribution with different temperatures inside the blocks, and other slices show turbulent fields. But I am guessing this is alright...?

I think I know what you mean. If this is referring to what I think, then for all scalar fields (temp, moisture, passive scalars) we overwrite the values at the edge of the blocks to be equal to the adjacent cells to minimise the subgrid diffusion that may occur in the IBM. This is particularly important at startups of simulations. In that case this is to be expected. If not can you post a couple of screenshots?

bss116 commented 4 years ago

Yes agreed. Perhaps we should also just adjust the outputs of each simulation. For example, the simulation with infinite canyons should output ytdump, while others are better suited to tdump and/ or xytdump.

Yes sorry, forgot about that. Also runtimes could be adjusted. Maybe 50 or 100 s?

I am a bit confused about the 3D-temperature output fields, where some x or y slices have a uniform temperature distribution with different temperatures inside the blocks, and other slices show turbulent fields. But I am guessing this is alright...?

I think I know what you mean. If this is referring to what I think, then for all scalar fields (temp, moisture, passive scalars) we overwrite the values at the edge of the blocks to be equal to the adjacent cells to minimise the subgrid diffusion that may occur in the IBM. This is particularly important at startups of simulations. In that case this is to be expected. If not can you post a couple of screenshots?

I'm not quite sure what I am seeing here. Attached are scalar and temperature fields for 101. [images: 101-thl, 101-scalar]

tomgrylls commented 4 years ago

I think the problem may be to do with the non-uniform colour scales in these plots - the values inside the buildings are changing the colour scales. I can double check if you send the .nc file. The middle plot does look slightly weird, but I think it is a result of what I posted above - again, I can have a look.

bss116 commented 4 years ago

Ah yes, of course! You are right, with the colorbar capped between 285 and 288 it looks alright. But what about the temperature of 0 K inside the blocks...? [images: 101-thl-54, 101-thl-cap]

tomgrylls commented 4 years ago

This is because the bldT default is 0. That's one of the reasons I think just changing this default, as discussed in #77, is not such a bad idea. This plotting issue will also be resolved in the future by #57; ideally we then set the value in the blocks to the dummy variable and they are then read as NaN.

bss116 commented 4 years ago

Ok, I see. Then regardless of whether we change the default of bldT, we should move it out of the energybalance section and into WALLS. I was not aware that this is being used outside the energy balance. We could even give it a default value of -1, to make sure this value is set by the user?

tomgrylls commented 4 years ago

Its value does not matter (if lEB = .false.) - there is just a line at the initialisation stage of modibm that sets the 'internal' temperatures to bldT. I think this line was intended to avoid the SGS diffusion through the walls I discussed above. But regardless, we now do this in a better way by updating the temperatures at the edge of the buildings at every time step. I am pretty confident we can take this line out, but equally I think it has no effect. There is a small chance it will in the first iteration of the first time step, but then a different value to bldT would be preferable anyway.

bss116 commented 4 years ago

Alright. Let's leave the decision to #77 then, I'll note that we also discussed this here.

I'll get going with the documentation in the next few days, will keep you posted!

bss116 commented 4 years ago

@tomgrylls quick question: is it on purpose that in 201 the temperature is determined by wall functions (iwalltemp = 2), but not the moisture?

bss116 commented 4 years ago

I have now added documentation for the example cases. Please have a look and feel free to make any changes to form and content.

tomgrylls commented 4 years ago

@tomgrylls quick question: is it on purpose that in 201 the temperature is determined by wall functions (iwalltemp = 2), but not the moisture?

Hmm, no, I think moisture should use wall functions too. I think that is a mistake, but I have never run any simulations using the energy balance. I guess we should really have checks for the energy balance that both of these are set to 2?
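Something like the following startup check could enforce that (a sketch; iwallmoist is my assumed name for the moisture switch):

  ! sketch: with the energy balance on, require wall functions for both
  ! temperature and moisture (iwallmoist is an assumed parameter name)
  if (lEB .and. ((iwalltemp /= 2) .or. (iwallmoist /= 2))) then
     write(*, *) "ERROR: lEB requires iwalltemp = 2 and iwallmoist = 2. Aborting..."
     call MPI_FINALIZE(mpierr)
     stop 1
  end if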