ARudik / feelpp

Automatically exported from code.google.com/p/feelpp

Seamless parallelism #33

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
In standard parallel code that does not require anything fancy regarding the 
WorldComm, make sure that all objects default to the same WorldComm.
The Environment should provide this object throughout the execution of the 
Feel++ code.

Original issue reported on code.google.com by christop...@feelpp.org on 15 Jun 2012 at 3:42

GoogleCodeExporter commented 9 years ago
This issue was updated by revision 117e6ce21b2d.

initialize WorldComm using Environment::worldComm()
the worldComm should now always be the Environment's one, in order to ensure 
consistency and the same processor numbering

Original comment by christop...@feelpp.org on 15 Jun 2012 at 4:10

GoogleCodeExporter commented 9 years ago
This issue was updated by revision ba8fe8d3ca88.

now we don't have to pass the mesh itself to send/recv:
we can pass the shared_ptr, thanks to the new Environment::worldComm()
extend the exchange of meshes to any number of processors
 0 -> 1
 k -> k+1
 n -> 0

Original comment by christop...@feelpp.org on 15 Jun 2012 at 4:10

GoogleCodeExporter commented 9 years ago
This issue was closed by revision 3ecd9e4b874c.

Original comment by christop...@feelpp.org on 15 Jun 2012 at 3:53

GoogleCodeExporter commented 9 years ago
This issue was updated by revision 8c180ca10d51.

remove number of partition (automatically computed for single space)

Original comment by christop...@feelpp.org on 15 Jun 2012 at 3:53

GoogleCodeExporter commented 9 years ago
This issue was updated by revision aa7624ac358c.

now Stokes executes seamlessly in parallel (almost: only two lines are
necessary; see createSubMesh and look at the _partitions= and _worldcomm
arguments that are required for mixed spaces).
Stokes using Lagrange multipliers (P0 continuous, i.e. the constants) to
ensure zero mean pressure does not yet work in parallel, but should very
soon.

the program executes in two flavors

 * without Lagrange multipliers (parallel), try the following command

 mpirun -np 8 ./feel_doc_stokes  --hsize=0.1 -ksp_constant_null_space 1
 -pc_type asm -ksp_monitor 

 this will execute the example on 8 processors: 4 processors for the velocity
 and 4 for the pressure
 -ksp_constant_null_space 1 tells the Krylov solver that the operator has a
 constant null space, so that components in that null space are removed
 -pc_type asm : uses the additive Schwarz method as the preconditioner

 * with Lagrange multipliers: not yet in parallel, but should yield the same
   results as before

Original comment by christop...@feelpp.org on 15 Jun 2012 at 3:53

GoogleCodeExporter commented 9 years ago
This issue was updated by revision 9487e1b3d469.

initialize WorldComm using Environment::worldComm()
the worldComm should now always be the Environment's one, in order to ensure 
consistency and the same processor numbering

git-svn-id: 
svn+ssh://forge.imag.fr/var/lib/gforge/chroot/scmrepos/svn/life/trunk/life/trunk
@9114 18f2cc81-8059-4896-b63e-f2d89ec8fd72

Original comment by christop...@feelpp.org on 15 Jun 2012 at 3:56

GoogleCodeExporter commented 9 years ago
This issue was updated by revision 8c8a308df2ac.

now we don't have to pass the mesh itself to send/recv:
we can pass the shared_ptr, thanks to the new Environment::worldComm()
extend the exchange of meshes to any number of processors
 0 -> 1
 k -> k+1
 n -> 0

git-svn-id: 
svn+ssh://forge.imag.fr/var/lib/gforge/chroot/scmrepos/svn/life/trunk/life/trunk
@9115 18f2cc81-8059-4896-b63e-f2d89ec8fd72

Original comment by christop...@feelpp.org on 15 Jun 2012 at 3:56

GoogleCodeExporter commented 9 years ago
This issue was updated by revision 972e825f116e.

fix for sequential case

Original comment by christop...@feelpp.org on 16 Jun 2012 at 9:18

GoogleCodeExporter commented 9 years ago
This issue was closed by revision a7d2d96d5e3b.

Original comment by christop...@feelpp.org on 16 Jun 2012 at 9:22

GoogleCodeExporter commented 9 years ago
This issue was updated by revision 28771dd32fb5.

remove number of partition (automatically computed for single space)

git-svn-id: 
svn+ssh://forge.imag.fr/var/lib/gforge/chroot/scmrepos/svn/life/trunk/life/trunk
@9121 18f2cc81-8059-4896-b63e-f2d89ec8fd72

Original comment by christop...@feelpp.org on 16 Jun 2012 at 9:22

GoogleCodeExporter commented 9 years ago
This issue was updated by revision 15f5e9521392.

now Stokes executes seamlessly in parallel (almost: only two lines are
necessary; see createSubMesh and look at the _partitions= and _worldcomm
arguments that are required for mixed spaces).
Stokes using Lagrange multipliers (P0 continuous, i.e. the constants) to
ensure zero mean pressure does not yet work in parallel, but should very
soon.

the program executes in two flavors

 * without Lagrange multipliers (parallel), try the following command

 mpirun -np 8 ./feel_doc_stokes  --hsize=0.1 -ksp_constant_null_space 1
 -pc_type asm -ksp_monitor 

 this will execute the example on 8 processors: 4 processors for the velocity
 and 4 for the pressure
 -ksp_constant_null_space 1 tells the Krylov solver that the operator has a
 constant null space, so that components in that null space are removed
 -pc_type asm : uses the additive Schwarz method as the preconditioner

 * with Lagrange multipliers: not yet in parallel, but should yield the same
   results as before

git-svn-id: 
svn+ssh://forge.imag.fr/var/lib/gforge/chroot/scmrepos/svn/life/trunk/life/trunk
@9122 18f2cc81-8059-4896-b63e-f2d89ec8fd72

Original comment by christop...@feelpp.org on 16 Jun 2012 at 9:22

GoogleCodeExporter commented 9 years ago
This issue was updated by revision 32e6728f0802.

fix for sequential case

git-svn-id: 
svn+ssh://forge.imag.fr/var/lib/gforge/chroot/scmrepos/svn/life/trunk/life/trunk
@9125 18f2cc81-8059-4896-b63e-f2d89ec8fd72

Original comment by christop...@feelpp.org on 16 Jun 2012 at 9:23