Let's keep the MPI calls needed by SCaRC in scrc.o for the moment. I just committed
changes to the makefile so that mpis.f90 or mpip.f90 will be compiled before all the
other routines. This should allow scrc.f90 to USE MPI. Let me know if it does not.
Original issue reported on code.google.com by mcgratta
on 2010-06-21 12:19:49
Do a quick check on one of your cases -- print out I_MIN, etc, and report the values
here. It appears that they extend into the diagonal ghost cells.
Original issue reported on code.google.com by mcgratta
on 2010-06-21 13:11:22
The following lines are most probably an issue for Glenn (?). Just for your information:
I work on a Macintosh system (2.66 GHz Quad-Core Intel Xeon, 8 GB 1066 MHz DDR3, Mac
OS X Version 10.6.3, operating system still set to 32-bit) with the GNU compiler
gcc-4.3.3. In order to get ScaRC compiled within the official repository, I added
a few files/directories to FDS_Compilation. For the moment, I made all these changes
only in my personal copy, because I would like to make sure that everything works correctly
before I check them in officially. It would be great if you could examine this first.
- I added a directory mpi_gnu_osx_32 to FDS_Compilation with a corresponding make_fds.csh,
please see attachment.
- Furthermore, I added a new script set_gnufort_osx.csh to Scripts, see attachment.
Most probably, this cannot be used generally because it contains my personal path
settings.
- Finally, I added the following lines to the makefile:
mpi_gnu_osx_32 : FFLAGS = -O2 -ffree-line-length-256
mpi_gnu_osx_32 : CFLAGS = -O2 -D pp_OSX
mpi_gnu_osx_32 : FCOMPL = mpif90
mpi_gnu_osx_32 : CCOMPL = mpicc
mpi_gnu_osx_32 : obj = fds5_mpi_gnu_osx_32
mpi_gnu_osx_32 : setup $(obj_mpi)
	$(FCOMPL) $(FFLAGS) -o $(obj) $(obj_mpi)
With these settings, the compilation of the original code with the 32-bit GNU toolchain
on a Mac now starts.
But I get some error messages for the dump.f90 routine, caused by format statements
that the GNU compiler treats more strictly than the Intel compiler. The error messages
(originally in German) say that a non-negative width is required in the format
statement.
../../FDS_Source/dump.f90:1589.107:
CT ',1,1 !terrain slice assumes one mesh and puts level set data on terrain
1
Error: Nonnegative width required in format string at (1)
../../FDS_Source/dump.f90:1602.27:
WRITE(LU_SMV,'(1X I)') 1
1
Error: Nonnegative width required in format string at (1)
../../FDS_Source/dump.f90:1604.28:
WRITE(LU_SMV,'(1X I)') 12
1
Error: Nonnegative width required in format string at (1)
../../FDS_Source/dump.f90:1605.42:
WRITE(LU_SMV,'(1X A,3I)') 'R=',G%RGB(1)
1
Error: Nonnegative width required in format string at (1)
../../FDS_Source/dump.f90:1606.42:
WRITE(LU_SMV,'(1X A,3I)') 'G=',G%RGB(2)
1
Error: Nonnegative width required in format string at (1)
../../FDS_Source/dump.f90:1607.42:
WRITE(LU_SMV,'(1X A,3I)') 'B=',G%RGB(3)
1
Error: Nonnegative width required in format string at (1)
make: *** [dump.o] Error 1
I changed the corresponding lines in dump.f90, please see the attachment and have a look
at lines 1589, 1602-1607. Do you agree with those changes? If you would like me to make
changes like this (not related to ScaRC) on my own, please let me know.
With the new files above, the original fds5_mpi_gnu_osx_32 version now compiles
without any problems.
To include ScaRC, I added some more dependencies to the makefile, see attachment (search
for 'scrc' ...). Now, it seems to be possible to compile ScaRC within the official
code. I'll play around with that now.
Thanks a lot in advance for your efforts
Susan
Original issue reported on code.google.com by kiliansusan
on 2010-06-22 08:49:40
Yes, commit the changes to dump.f90. I believe the output format should be written like
this:
WRITE(LU_SMV,'(1X,A,I3)') 'B=',G%RGB(3)
Use commas instead of spaces in the format statement because it is easier to read.
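For the plain integer writes in the error output above (dump.f90 lines 1602 and 1604), the
analogous fix would be, for example (the exact widths are only an illustration, any width
large enough for the value will do):
WRITE(LU_SMV,'(1X,I2)') 1
WRITE(LU_SMV,'(1X,I2)') 12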
Just remember to update the repository before you commit the changes, just in case
someone else has made changes recently.
Original issue reported on code.google.com by mcgratta
on 2010-06-22 12:12:07
Okay, I'll commit the changes to dump (and do an update before ...).
Original issue reported on code.google.com by kiliansusan
on 2010-06-22 12:36:37
Before I officially commit the changes related to ScaRC, first some general information:
It should be possible to call ScaRC by setting SCARC_METHOD=1 on the &PRES line. By
default, SCARC_METHOD is set to 0, which automatically calls the original FFT method.
SCARC_METHOD=1 refers to the conjugate gradient variant of ScaRC, ScaRC-CG, which was
the basis of all my earlier test cases. This method is not suited for complex geometries
and doesn't converge very fast, as you already know, but it was a good way to verify
the basic correctness of the concept. (Starting with the costly implementation of multigrid
before it was clear that the concept works at all seemed too risky to
me.) SCARC_METHOD=2 will call the final multigrid variant, ScaRC-MG, which will most
probably be much more robust and efficient than ScaRC-CG. This variant isn't enabled
at the moment; I am working hard on it ... At a later time we can also choose another
mechanism to select FFT or ScaRC if you like ... probably something like SOLVER='FFT'
or SOLVER='SCARC' on the &PRES line, it's up to you.
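Schematically, the intended usage on the &PRES line is simply one of the following (only a
sketch of the input syntax described above, nothing else needs to be set by the user):
&PRES SCARC_METHOD=1 /    (ScaRC-CG, the conjugate gradient variant)
&PRES SCARC_METHOD=2 /    (ScaRC-MG, the multigrid variant, not enabled yet)
Omitting SCARC_METHOD (or setting it to 0) gives the original FFT method.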
In order to integrate ScaRC into the code I had to add a few lines to the existing
routines. All of them can be found by searching for the string 'SCARC':
* main.f90 :
- basic initialization of ScaRC
- at the moment there are two more ScaRC-related lines: the routine MATCH_VELOCITY
shouldn't be necessary in case of ScaRC, because if everything works properly the
consistency is guaranteed automatically. Therefore it is currently only called for
SCARC_METHOD=0 (sketched schematically after this list). Again, we can change that in
whatever way you prefer. At the moment, however, this IF-statement has a negative side
effect: Randy told me that some variables related to the FDS6 option are set in
MATCH_VELOCITY, so the ScaRC variant cannot use those values because MATCH_VELOCITY is
never called. For the moment this might be okay, but we should find a better solution
that guarantees MATCH_VELOCITY isn't called for ScaRC while those variables are still
available.
* read.f90 :
- reading of the ScaRC-related variables; most of them are only relevant internally.
For the user only one setting is important, namely the above-mentioned SCARC_METHOD.
Later there will be a few more important variables with respect to the multigrid
smoother.
* divg.f90:
- During your visit to Germany, I discussed with Randy at length the possibility
of having the same right-hand side (RHS) for the pressure solver no matter whether a single-
or multi-mesh geometry is used. In order to ensure that the same RHS is computed, Randy
told me to perform the last statements in DIVERGENCE_PART_2 only in case of the original
FFT method. Therefore, I introduced an IF-statement which skips these lines
if SCARC_METHOD>0.
* pres.f90:
- This is the basic routine for the call of ScaRC.
- Besides, there is a new subroutine for the pointwise setting of the boundary
values on the ghost cells (in contrast to the facewise setting which is originally
used after the call of the FFT-solver).
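As a purely schematic sketch of the guards mentioned above (argument lists shown only
schematically, surrounding loops omitted, names as used in this discussion):
! main.f90: velocity matching only for the original FFT method
IF (SCARC_METHOD==0) CALL MATCH_VELOCITY(NM)
! divg.f90, end of DIVERGENCE_PART_2: closing statements only for the FFT method
IF (SCARC_METHOD==0) THEN
   ! ... original closing statements of DIVERGENCE_PART_2 ...
ENDIF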
Randy: This is no longer the ScaRC version which we used during your visit to
Germany. I upgraded to the new version of ScaRC which is already based on the new grid
hierarchy structure. In this version, the conjugate gradient method is simply regarded
as a special case of the multigrid method, using only the finest grid level (and not
the whole grid hierarchy). I'll redo all the tests which we did together, now based
on the new structure, and will upgrade to the multigrid as soon as possible. I'll tell
you when it is ready for tests.
If you agree I'll commit those changes soon.
Bye,
Susan
Glenn: Would you like to stay on CC, or is that not necessary?
Original issue reported on code.google.com by kiliansusan
on 2010-06-22 12:46:26
Sorry, I forgot one routine where I made a very small change:
* cons.f90:
- I just introduced a new parameter INTERNAL=3 in addition to the existing
'DIRICHLET=1,NEUMANN=2' values for the pressure boundary conditions, because I have
to distinguish external and internal boundaries. If you prefer another name, please
let me know ...
Original issue reported on code.google.com by kiliansusan
on 2010-06-22 12:53:06
Susan,
If you don't use MATCH_VELOCITY then your FVX, etc. will not be the same for each mesh
at the mesh interface. I agree, in the end the matching is not necessary. And if
you are stable without it, then fine. But you should be careful about throwing it
away. It may speed your convergence. I think it makes sense to start each iteration
(even for ScaRC) with a consistent FVX. However, what you do not need to do is to
include the divergence correction, D_CORR. My recollection was that this is what was
causing problems.
R
Original issue reported on code.google.com by randy.mcdermott
on 2010-06-22 12:56:18
Randy, I'll check the consistency of the FVX, etc. very carefully and will keep that point
in mind. Indeed, ScaRC should work without MATCH_VELOCITY. So, if you like, I'll leave
the calls of MATCH_VELOCITY unchanged for the moment and only play around with that
in my personal copy ...
Original issue reported on code.google.com by kiliansusan
on 2010-06-22 13:04:18
ScaRC may work without MATCH_VELOCITY. But it is impossible that FVX will be consistent
on each mesh (for a given iteration) if you skip MATCH_VELOCITY. Because if you skip
it and the FVX match, then your pressure problem is already solved.
Original issue reported on code.google.com by randy.mcdermott
on 2010-06-22 13:08:56
Okay, then I misunderstood this point. I'll leave that unchanged and recheck the related
procedure for ScaRC ...
Original issue reported on code.google.com by kiliansusan
on 2010-06-22 13:29:08
I just committed dump.f90 with the reworked format statements. It would be a relief
for me if you could check that everything compiles properly.
Concerning the commit of the ScaRC-related routines, I'll wait until I get your okay
on the new makefile structure.
Kevin: I'll send you the I_MIN, ... values as soon as I have checked the upgraded
ScaRC version ...
Original issue reported on code.google.com by kiliansusan
on 2010-06-22 13:47:55
It compiles fine. We're going to release FDS 5.5.1 tomorrow. I suggest we wait until
we release to add scrc.f90, just in case something happens. I would rather have a few
weeks where we can all compile and work with the new code before releasing it.
Original issue reported on code.google.com by mcgratta
on 2010-06-22 15:15:24
I totally agree! Because I will take part in two different meetings on Thursday and
Friday, I won't be at work from tomorrow for the rest of the week. I'll use the many
hours of train rides in the meantime to check the upgraded ScaRC. Good luck
with the new release.
Susan
Original issue reported on code.google.com by kiliansusan
on 2010-06-22 15:35:57
Hi,
just for your information: the first serial multigrid method seems to run. For my standard
test case it converges with rates of about 0.1 (instead of 0.7-0.8 for the CG variant).
I am currently playing around with different smoothers (Jacobi, Gauss-Seidel, SSOR). My
aim is still to implement a GS-ADI smoother, which is very well suited for line-wise
numbered grids. Once this smoother runs, I will analyze the timings for the whole
method. And once the complete serial multigrid seems to be optimized, I'll continue with
all the parallel work and check the parallel MG variant, OK?
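Just to illustrate what these smoothers do (this is only a schematic sketch with placeholder
names, not the actual scrc.f90 code), a single damped Jacobi sweep for the 2D 5-point
discretization of the pressure equation looks like this:
! Schematic only: one damped Jacobi sweep for -Laplace(H)=RHS on a uniform NX x NZ grid.
! Ghost values H(0,:), H(NX+1,:), H(:,0), H(:,NZ+1) are assumed to hold the boundary data.
SUBROUTINE JACOBI_SWEEP(H,RHS,NX,NZ,DX,DZ,OMEGA)
INTEGER, INTENT(IN) :: NX,NZ
REAL, INTENT(IN)    :: RHS(1:NX,1:NZ), DX, DZ, OMEGA
REAL, INTENT(INOUT) :: H(0:NX+1,0:NZ+1)
REAL :: H_NEW(1:NX,1:NZ), AX, AZ, DIAG
INTEGER :: I,K
AX   = 1.0/DX**2
AZ   = 1.0/DZ**2
DIAG = 2.0*(AX+AZ)
DO K=1,NZ
   DO I=1,NX
      H_NEW(I,K) = (1.0-OMEGA)*H(I,K) + (OMEGA/DIAG)* &
                   (RHS(I,K) + AX*(H(I-1,K)+H(I+1,K)) + AZ*(H(I,K-1)+H(I,K+1)))
   ENDDO
ENDDO
H(1:NX,1:NZ) = H_NEW
END SUBROUTINE JACOBI_SWEEP
Gauss-Seidel and SSOR reuse already updated values within the sweep(s), which converges
faster but makes the result depend on the sweep order.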
Many regards,
Susanne
Original issue reported on code.google.com by kiliansusan
on 2010-07-05 14:46:44
Good to hear. Were you able to commit your version of FDS to the Repository?
Original issue reported on code.google.com by mcgratta
on 2010-07-05 15:34:12
Not yet, because I spent all the time updating to the new multigrid version. So, I
will commit it in the middle of the week. To get it to run I'll have to change the
makefile and some other files in FDS_Compilation according to my comment 3. Am I allowed
to make those changes on my own, or does Glenn want to do this?
Original issue reported on code.google.com by kiliansusan
on 2010-07-05 15:56:48
If I understood your comment 3 correctly, you added an entry to the FDS makefile
to get your new FDS stuff to work, so the other FDS makefile entries should work as
before. I don't feel the need to do this myself. Besides, if there is a problem we can
always revert to an earlier revision.
Original issue reported on code.google.com by gforney
on 2010-07-05 16:10:08
Okay, thanks. Then I'll do that as soon as possible.
Original issue reported on code.google.com by kiliansusan
on 2010-07-05 16:15:14
Because the new commit of ScaRC concerns a lot of different files, it's probably
better if you do some tests of your own. As explained in comments 3 and 6-7, there are
(small) changes in main.f90, cons.f90, read.f90, pres.f90, divg.f90 and the makefile, as
well as a new mpi_gnu_osx_32 directory and Scripts/set_gnufort_osx.csh.
The currently committed scrc version is completely new; it is based on my new multigrid
structure and differs a lot from the former version. At the moment it is very experimental
and not suited for tests, and it won't work for most cases... I would like to test different
parts of it before you do any tests with it on your own.
Yesterday, I was able to implement the GSTRI smoother as announced before (it is based
on the inversion of the lower triangular part plus the first upper diagonal of the
system matrix). This is a first step on the way to the final GSADI smoother, which
also includes the different coordinate directions. GSTRI seems to run, but I have to
test more examples to be sure ...
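In other words, for the lexicographically numbered 5-point stencil, one GSTRI sweep amounts to
running through the grid lines from bottom to top and solving a tridiagonal system along each
line, with the already updated line below and the still old line above moved to the right-hand
side. Schematically (placeholder names, simplified boundary handling, not the actual scrc.f90
code):
! Schematic only: one GSTRI sweep for -Laplace(H)=RHS, i.e. one application of
! (L + D + U1)^-1 in Gauss-Seidel fashion, realized as line-by-line Thomas solves.
! Ghost values of H are assumed to hold the boundary data.
SUBROUTINE GSTRI_SWEEP(H,RHS,NX,NZ,DX,DZ)
INTEGER, INTENT(IN) :: NX,NZ
REAL, INTENT(IN)    :: RHS(1:NX,1:NZ), DX, DZ
REAL, INTENT(INOUT) :: H(0:NX+1,0:NZ+1)
REAL :: B(1:NX), C(1:NX), AX, AZ, DIAG, DEN
INTEGER :: I,K
AX = 1.0/DX**2 ; AZ = 1.0/DZ**2 ; DIAG = 2.0*(AX+AZ)
DO K=1,NZ                                        ! sweep upward, line by line
   DO I=1,NX
      B(I) = RHS(I,K) + AZ*(H(I,K-1)+H(I,K+1))   ! line below already updated, line above still old
   ENDDO
   B(1)  = B(1)  + AX*H(0,K)                     ! left and right boundary contributions
   B(NX) = B(NX) + AX*H(NX+1,K)
   ! Thomas algorithm for  -AX*E(I-1) + DIAG*E(I) - AX*E(I+1) = B(I), I=1..NX
   C(1) = -AX/DIAG
   B(1) =  B(1)/DIAG
   DO I=2,NX
      DEN  = DIAG + AX*C(I-1)
      C(I) = -AX/DEN
      B(I) = (B(I) + AX*B(I-1))/DEN
   ENDDO
   H(NX,K) = B(NX)
   DO I=NX-1,1,-1
      H(I,K) = B(I) - C(I)*H(I+1,K)
   ENDDO
ENDDO
END SUBROUTINE GSTRI_SWEEP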
Besides, I added a lot of time measurements to the different ScaRC routines. I
think they will be helpful in the future.
Many, many thanks for your comprehensive support today
Original issue reported on code.google.com by kiliansusan
on 2010-07-08 17:39:03
Just for your information:
Meanwhile, I have implemented a variety of different serial solvers for the pressure
equation: the standard CG method, the BiCGstab method (seems to be very efficient!), the
multigrid method ... all of them with different preconditioning/smoothing techniques.
It's also possible to use a multigrid method as preconditioner for a CG method; in the
past this was the most robust and efficient variant. Now all the structures needed to
use a hierarchy of grid levels are available.
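Schematically, the CG method with MG preconditioning has the usual preconditioned-CG form
(routine names below are placeholders, not the actual scrc.f90 interfaces; the vectors are
rank-1 arrays over the grid unknowns):
! Preconditioned CG for A*X = F; SCARC_MATVEC stands for the matrix-vector product,
! SCARC_PRECON stands for one application of the preconditioner (e.g. one MG cycle).
R   = F - SCARC_MATVEC(X)
Z   = SCARC_PRECON(R)
P   = Z
RHO = DOT_PRODUCT(R,Z)
DO IT=1,MAX_IT
   Q     = SCARC_MATVEC(P)
   ALPHA = RHO/DOT_PRODUCT(P,Q)
   X     = X + ALPHA*P
   R     = R - ALPHA*Q
   IF (SQRT(DOT_PRODUCT(R,R)) < EPS) EXIT
   Z       = SCARC_PRECON(R)
   RHO_NEW = DOT_PRODUCT(R,Z)
   P       = Z + (RHO_NEW/RHO)*P
   RHO     = RHO_NEW
ENDDO
In the parallel case, the DOT_PRODUCTs are exactly the points where a global (MPI) reduction
over all meshes is needed; that is what provides the global coupling.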
We could also think of using transfer operators only for embedded meshes, e.g. using
the multigrid as a stand-alone method on two levels (the global standard mesh and some
finer embedded meshes), or even only as a preconditioner for BiCGstab in the sense of
multilevel adaptive methods such as the fast adaptive composite grid method (FAC).
There is a nice introduction to this method in 'A Multigrid Tutorial' by Briggs,
Henson and McCormick.
I'll now start with the parallel tests. As you know, the parallel CG method already
worked in the past (2D and 3D). So I'll check the parallel functionality of the BiCGstab
and multigrid variants.
Before I commit again, I'll redo some basic tests ...
Susan
Original issue reported on code.google.com by kiliansusan
on 2010-07-14 15:42:14
Susan -- I have made some improvements to the iteration scheme for solving the pressure
equation. The basic idea is that we solve for an "average" correction pressure for
each mesh. This was the first step in the original "PRESSURE_CORRECTION" scheme. It
helps drive the iteration scheme faster towards a desired tolerance. I'd like to begin
testing these schemes, and I'd like to know what you use as a metric for accuracy. I
have been setting a maximum difference in the normal velocity at an interpolated cell
interface. Then I monitor the number of pressure iterations required to meet this tolerance.
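In pseudo-code, the check I have in mind is roughly this (schematic only, the names are
placeholders):
! Maximum mismatch of the normal velocity components at interpolated mesh interfaces
VELOCITY_ERROR_MAX = 0.
DO IW=1,N_INTERPOLATED_WALL_CELLS
   VELOCITY_ERROR_MAX = MAX(VELOCITY_ERROR_MAX, ABS(UN_LOCAL(IW)-UN_NEIGHBOR(IW)))
ENDDO
! repeat the pressure iteration until VELOCITY_ERROR_MAX drops below the desired tolerance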
A good way forward is to set up a new folder under Verification called "Pressure_Solver".
There we can put standard test cases. Before doing this, however, it would be good
to have a common metric of accuracy. Any ideas?
Original issue reported on code.google.com by mcgratta
on 2010-07-15 14:13:48
Kevin, let me get familiar with your new version of the pressure correction scheme.
The maximum-difference criterion on the normal velocity component seems reasonable
to me. Do you intend to use PRESSURE_TOLERANCE to set the desired accuracy?
At the moment, I just measure the residual of the pressure solver itself in the
Euclidean norm. There shouldn't be a difference in the normal velocity components at
internal boundaries.
As already discussed with Randy, I will set up a test case for ScaRC as soon as I
have made sure that the first parallel tests with my new multigrid structure work.
A new Verification folder 'Pressure_Solver' is indeed very suitable for all this,
also for the comparison with your new pressure scheme.
Another question: Did you ever think of the (optional) introduction of a master process
on an extra processor? Not necessarily for FDS6, but at a later time ... I have already
implemented structures for that. At least for the multigrid pressure solver, this master
would be very advantageous and would most probably reduce the global execution time.
Processors are getting cheaper and cheaper. Probably, this structure could also be
advantageous for other 'global' work in the code which is done on MYID=0 at the moment.
Original issue reported on code.google.com by kiliansusan
on 2010-07-15 17:49:20
VELOCITY_TOLERANCE will force FDS to iterate the pressure solver to achieve the desired
velocity error. It is the maximum error of the normal components of the interface
velocities. If you also say PRESSIT_ACCELERATOR=.TRUE., the extra global linear solve is
turned on. I have not tested this much, so I have set it off by default.
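For example (the numeric value is only illustrative), on the &PRES line:
&PRES VELOCITY_TOLERANCE=0.001, PRESSIT_ACCELERATOR=.TRUE. /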
I added Pressure_Solver to the Verification folder.
No, I have not thought about the master process. We have always worked under the assumption
that the FDS users would typically have access to a cluster of computers, all with
similar speed and status. I guess this is the socialist approach, as opposed to master-slave.
Original issue reported on code.google.com by mcgratta
on 2010-07-15 18:10:57
I just added a few cases to Pressure_Solver. They were in other folders, but their primary
purpose is to test the pressure solver. They are
dancing_eddies -- 2D flow over a baffle
duct_flow -- isothermal flow through a complicated duct
hallways -- a fire in a series of adjacent hallways
Only duct_flow has some quantitative output. The others are just for qualitative testing.
Original issue reported on code.google.com by mcgratta
on 2010-07-15 18:29:39
Sounds great. So, I'll test the different variants of my solver for all those test cases
as soon as possible.
What about a possible combination of the locally refined meshes with the multigrid
structure?
Original issue reported on code.google.com by kiliansusan
on 2010-07-15 19:22:14
I am not sure what you mean.
Original issue reported on code.google.com by mcgratta
on 2010-07-15 20:10:14
It's the idea of adaptive multigrid. A very good reference is the book 'Multigrid' by
Trottenberg, Oosterlee and Schüller, see page 356 onwards: http://books.google.de/books?id=-og1wD-Nx_wC&printsec=frontcover&dq=Trottenberg+Multigrid&source=bl&ots=sjYazAYWJd&sig=tNzeNpByUaRHdaJubL4rsB77t6g&hl=de&ei=YEVATMuGA9GCOJ3nrZMN&sa=X&oi=book_result&ct=result&resnum=2&ved=0CCYQ6AEwAQ#v=onepage&q&f=false.
Following e.g. figure 9.3 or 9.6 therein, it's possible to perform the multigrid on
one global grid with additional locally refined grids.
Original issue reported on code.google.com by kiliansusan
on 2010-07-16 11:45:59
I can only access parts of this book. But I'll look for it in the library.
Original issue reported on code.google.com by mcgratta
on 2010-07-16 13:10:21
Thanks, I really think it's worthwhile. If you have a look at this book in the library,
it could also be worth having a look at 'A Multigrid Tutorial' by Briggs/Henson/McCormick.
Original issue reported on code.google.com by kiliansusan
on 2010-07-16 13:36:16
Both books are on my shelf.
Original issue reported on code.google.com by randy.mcdermott
on 2010-07-16 14:06:28
That's my librarian.
Original issue reported on code.google.com by mcgratta
on 2010-07-16 14:08:21
Wow, I am impressed :-)
Original issue reported on code.google.com by kiliansusan
on 2010-07-16 14:09:59
So, in both books chapter 9 covers the 'adaptive' material.
I have already implemented all the multigrid structures for a 'global' hierarchy of grids,
that is, several grids which all cover the same domain. The step towards transfer
operators which are only related to locally refined grids should be straightforward.
Original issue reported on code.google.com by kiliansusan
on 2010-07-16 14:14:47
Just some information about the current state: As already mentioned, I implemented several
different solvers: the conjugate gradient method, the BiCGstab method, the multigrid method
and combinations of them (CG or BiCG with MG preconditioning).
For serial problems, all those solvers work very well and produce the same result as
the corresponding FFT solver (checked with smokediff).
Meanwhile, I have also tested some multi-mesh problems in 2D, and all those solvers seem
to run correctly in parallel as well, including the 2D multigrid! Before I start with 3D
problems shortly (CG already worked in 3D before the restructuring of the code), I'll first
do some more complicated tests in 2D.
As soon as the open compiling issues related to standard Fortran 95 are clarified, I'll
commit the new code ...
I have also spent some time on a better understanding of the Fast Adaptive Composite Grid
Method (FAC) as explained in the books above, and it seems very well suited to the
embedded meshes to me. What do you think about it?
Original issue reported on code.google.com by kiliansusan
on 2010-07-22 14:39:19
The last time I talked to my friend at NYU, Boyce Griffith, who uses the SAMRAI framework
for 3D adaptive mesh refinement simulations of blood flow through the heart, he recommended
using FAC as a preconditioner for a Krylov method. I will send you the email.
Original issue reported on code.google.com by randy.mcdermott
on 2010-07-22 14:44:38
Thank you very much ... I just had a short look at this mail, but I'll read it much more
carefully soon.
At first sight, it describes just the same code structure that I described
above. In my former ScaRC, I used a Krylov method as a data-parallel global solver
with multigrid as preconditioner. I was also able to use multigrid as a stand-alone
solver, but the combination CG-MG was much more robust. It seems to me that Mr. Griffith
has the same experience. Instead of GMRES I only used BiCGstab, and indeed the combination
of BiCGstab with multigrid was the most efficient one.
The difference from the described FAC method is that I formerly only used global grids
with different resolutions, but all covering the whole domain. In FAC you can also use
local (embedded) meshes with finer resolutions. But you still have a very similar
surrounding multigrid structure with the same components (smoother, transfer operators,
...) as I already have.
So, we could give it a try!
Original issue reported on code.google.com by kiliansusan
on 2010-07-22 15:00:20
All,
this discussion is very interesting!
Because there are many discussions about the "best" solver and the "best" preconditioner,
I recommend this PhD thesis:
http://ses.library.usyd.edu.au/handle/2123/376
It is the best PhD thesis on solvers and their parallelization that I have found on the
web. If you already have it, that's OK. The solvers described in the thesis can also be
downloaded as Fortran source code.
Here is the link:
http://www.engineers.auckland.ac.nz/~snor007/software.html#sixpack
Regards,
Christian
Original issue reported on code.google.com by crogsch
on 2010-07-22 16:22:43
Christian: Many thanks for following this discussion and for the hint about the PhD thesis.
I'll have a look at it and also at the code.
Randy: Concerning the mail of Mr. Griffith, it could also be a good idea to use FFT
as preconditioner for CG or BiCG. I had this idea long ago, but haven't thought of it
recently. This could give a very strong global coupling due to the global scalar
products and matrix-vector products. But surely this will take longer than the pure
execution of local FFTs. And probably one preconditioning step with local FFTs might be
too little to bring the global residual below a certain accuracy.
At the moment, I cannot estimate whether this FAC solver will be in conflict with the
cell-centered approach, as Mr. Griffith supposes. My former ScaRC also worked only for
node-based discretizations. I don't know whether it would be a possibility to interpolate
the pressure values to the nodes just to be able to have a node-based solver here?
But using a BiCG method you don't need a symmetric preconditioner,
so it should work even with an unsymmetric FAC. These are only my assumptions at
the moment; I would have to check it.
I totally agree with Mr. Griffith that - if you use multigrid as a preconditioner -
you have a certain freedom in the choice of the local grids. That corresponds to my
experience, too. It's possible to hide local irregularities completely within a single
grid and resolve them by the local multigrid power.
So, that's indeed a very interesting topic.
Original issue reported on code.google.com by kiliansusan
on 2010-07-22 17:12:25
As you know, I would like to have consistency of all the program parts surrounding the
pressure solver. That means that, ideally, all parts should lead to the same values of the
relevant vectors, no matter whether we have a 1-mesh division or a corresponding
nxm-mesh subdivision of the same domain. This is a very important point for all variants
of my solver. Only in this case will I really be able to omit all kinds of diagonal
communication.
At the moment I am just performing different tests for my favorite geometry. Please
see the attached directory where I have combined a few 2D-tests for a square domain
with an inflow at the bottom. For this domain I use a 1-mesh version as well as a 2-mesh
version (2x1 subdomains) and a 4-mesh version (2x2 subdomains) to have a real internal
diagonal combination of submeshes. At the moment all computations are based on a very
coarse resolution because this makes it easier for me to make sure that the consistency
of all parts is accomplished. But we can also use finer resolutions at a later time
... If you don't mind I would like to add these geometries as 2D-verification cases
for my solver (later on also in 3D).
Now I am wondering why the initial setting of M%DT is different for the 1-mesh and
the corresponding 2x2 case, although the grid resolution is the same for both. This
point was a bit confusing for me because I got different values in DIVERGENCE_PART_2
for DP along the obstruction (for IW=163 and IW=164 in the 1-mesh case and IW=49 and
IW=50 in mesh 1 of the 2x2-mesh case).
I see that the initial M%DT is set in read.f90 at about line 970, where the difference
M%ZF-M%ZS is used to define VEL_CHAR, which is indeed different for the 1-mesh and
2x2-mesh case. Most probably that doesn't matter because the time step size is adjusted
in the further course of the method. Is there a special reason to do it in this way
(which, in this case, gives different values for the 1- and 2x2-mesh cases)?
To check the consistency of the different program parts (regardless of the subdivision)
I used the same M%DT for all subdivisions, just for my personal testing,
i.e. I took the 'serial' initial setting also for all nxm cases. Doing this, I ended up
with the same DDDT in pres.f90 and correspondingly the same PRHS which is used as
right-hand side for the pressure solver. All in all, I have consistency of the whole
program run for the CG, BiCG and MG versions up to rounding errors on the order of
1E-16. I checked all my 1-mesh variants against the corresponding 1mesh_fft by means of
smokediff.
For all test cases in the attached geometry directory I used a simple Jacobi
preconditioning/smoothing, because it produces the same results in the serial and
parallel versions. (Jacobi only uses values from the previous iterate, so the mesh
decomposition does not change the result.) The SSOR preconditioning is much more
efficient; the only reason I don't use SSOR in the example geometries is that, in the
parallel case, a block-SSOR preconditioning doesn't produce the same result as an SSOR
preconditioning on the whole domain, because the sweep depends on the update order and
hence on the decomposition. So only Jacobi is really suited for the consistency tests,
but surely not for later runs.
Please do not run the attached examples with the currently committed version of ScaRC.
I first have to commit a new version to get the MG version to run. I'll commit
it as soon as the standard Fortran 95 compiling issues are resolved.
Original issue reported on code.google.com by kiliansusan
on 2010-07-26 13:42:05
The M%DT is a leftover from the original one-mesh version of FDS. We choose DT based
on a "guess" of the characteristic velocity, derived from the height of the compartment
(ZF-ZS). Most of the early FDS calculations were a single compartment with the floor
and ceiling at the lower and upper bounds of the domain. We should think about a better
way to set DT.
Original issue reported on code.google.com by mcgratta
on 2010-07-26 13:49:25
Susan,
This is a nice catch. I think the initial time step should indeed be the same for
all meshes. The initial time step turned out to be a culprit in some other problems
I was having with FDS6 stuff. I will try to get this sorted out. Thanks.
Randy
Original issue reported on code.google.com by randy.mcdermott
on 2010-07-26 13:50:22
Thanks very much to both of you, this will make my tests much easier ...
Original issue reported on code.google.com by kiliansusan
on 2010-07-26 13:54:18
Meanwhile, I identified another piece of code, in velo.f90, which is only related to the
FFT method and shouldn't be performed in case of ScaRC. (Randy: We already found a few
such lines in divg.f90 during your visit to Germany, which are likewise only related to
the FFT version.) This time it is the block at the beginning of the NO_FLUX routine,
'Exchange H at interpolated boundaries', which mustn't be executed for ScaRC: I already
have the correct ghost values for H and HS when this routine is called, and these new
settings make them wrong again. I checked this against the corresponding serial case,
and indeed I get the same, correct values in the parallel case when these lines are not
performed. I had been wondering why those values became slightly wrong during the
setting of FVX, ..., FVZ in the calling VELOCITY_FLUX routine... I would like to use an
IF-statement and perform those lines only if SCARC_METHOD == 'FFT'. Do you agree? That
doesn't change the functionality of the original code and corresponds to the change in
divg.f90.
I also added a test case where 2 meshes meet 1 mesh, based on the test cases above from
Comment 40, and everything seems to work fine. Additionally, I am currently setting up
the corresponding 3D cases. I'll send you all this stuff soon.
Original issue reported on code.google.com by kiliansusan
on 2010-07-28 11:07:45
Be careful with SCARC_METHOD=='FFT'. Suppose that you change the default value of SCARC_METHOD.
Then, when we are doing a calculation with no SCARC solver, these lines of code will
not be executed. Make sure that there is no chance that the code changes functionality
if no SCARC option is turned on.
Original issue reported on code.google.com by mcgratta
on 2010-07-28 11:59:30
Kevin, I see your point and I agree. I would feel much better if we introduced a
new parameter in the &PRES block which chooses the pressure solver and whose default
is set within the official routines. It could have only two possible values,
FFT and SCARC, and I would choose the specific variant of ScaRC (CG, MG, ...) only if
this parameter is set accordingly. Because the name 'PRESSURE_SOLVER' is already
used for the routine itself, we could probably use PRES_METHOD or something like that.
What do you think about that?
Original issue reported on code.google.com by kiliansusan
on 2010-07-28 12:25:00
PRES_METHOD is OK. What shall we call the current method of solving the pressure? Maybe
"DIRECT_FFT"? Then we can call the others:
SCARC_FFT
SCARC_MG
etc
I'd like to use SCARC in the names to easily identify the new features.
Original issue reported on code.google.com by mcgratta
on 2010-07-28 12:38:03
I see two possibilities:
1) We could have only 'FFT' and 'SCARC' for PRES_METHOD. Only if PRES_METHOD='SCARC'
do we choose the corresponding variant of ScaRC by an additional setting of
SCARC_METHOD='CG' or 'MG' or 'CG-MG' ... (see the sketch after this list). In my opinion,
this variant would have the advantage that in the official code you only have the coarse
classification into the two methods, no matter what additional variants of ScaRC I
develop later (those would then only appear inside the scrc routine). We could probably
add 'FAC' as another solver at a later time, see the comments above.
2) We could just use your proposed notations from comment 47, which sound good to me.
For ScaRC they would correspond to the current settings SCARC_METHOD='CG', 'MG'
...
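For option 1, the input would then look e.g. like this (just a sketch of the proposed
syntax, nothing of this is committed yet):
&PRES PRES_METHOD='SCARC', SCARC_METHOD='CG-MG' /
with PRES_METHOD='FFT' as the default, so that existing input files behave exactly as
before.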
In either case, I already use structures beginning with SCARC to provide the
identification you want: every variable and every routine which is accessible outside of
scrc.f90 begins with 'SCARC_'.
Original issue reported on code.google.com by kiliansusan
on 2010-07-28 13:10:24
Let's do the first option. Wherever possible, use
IF (PRES_METHOD=='SCARC') THEN
ELSE
ENDIF
Original issue reported on code.google.com by mcgratta
on 2010-07-28 13:19:29
Fine, sounds good :-), I'll rework that accordingly.
Original issue reported on code.google.com by kiliansusan
on 2010-07-28 13:22:15
Original issue reported on code.google.com by kiliansusan
on 2010-06-21 11:33:21