Open veeshy opened 6 years ago
Looking at NodalEqualValueConstraint::computeResidual(), the MOOSE assembly residual block is fetched and written to directly:
DenseVector<Number> & lm_re = _assembly.residualBlock(_var.number());
though I'd guess this block is already initialized before computeResidual() is called.
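For reference, a minimal stand-alone sketch of the pattern I mean (hypothetical function and parameter names, not the verbatim MOOSE source; assumes MOOSE's Assembly and VariableValue types):

```cpp
// Sketch of the direct-write pattern in NodalEqualValueConstraint::computeResidual()
// (hypothetical stand-alone version, for illustration only).
#include "Assembly.h"              // MOOSE assembly
#include "libmesh/dense_vector.h"  // DenseVector<Number>

void
computeResidualSketch(Assembly & assembly,
                      unsigned int lm_var_number,
                      const std::vector<const VariableValue *> & value)
{
  // Grab the Lagrange-multiplier residual block straight out of the assembly...
  DenseVector<Number> & lm_re = assembly.residualBlock(lm_var_number);

  // ...and overwrite it with r_k = u_k(node 0) - u_k(node 1). This is only
  // valid if both nodal solution values are owned or ghosted on the current
  // MPI rank, which a distributed mesh does not guarantee.
  for (unsigned int k = 0; k < value.size(); k++)
    lm_re(k) = (*value[k])[0] - (*value[k])[1];
}
```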
Doesn't mortar segfault because it can't run in parallel?
Same question: does the mortar stuff support MPI?
The current implementation does not.
mpiexec -n 2 does work, but maybe that's just luck if mortar doesn't support MPI.
My guess is it works if all elements of the master and slave happen to be on the same processor. So, yes, there is a way to get lucky.
Dang, any reasonable workaround? My use case is connecting two nodes on 1D domains with a value multiplier, so block_1's end node = multiplier * block_2's start node. Lagrange multipliers (LMs) work lovely for this; penalty methods work to connect the nodes, but adding in the multiplier gave them trouble (reasonably so, I think).
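Concretely, this is just the standard Lagrange-multiplier bookkeeping (my notation, with c the multiplier):

```latex
% Constraint tying block_1's end node to block_2's start node:
g(u) = u_1(x_{\mathrm{end}}) - c \, u_2(x_{\mathrm{start}}) = 0
% Adding \lambda \, g(u) to the functional yields the residual contributions
R_{\lambda} = g(u), \qquad
R_{u_1(x_{\mathrm{end}})} \mathrel{+}= \lambda, \qquad
R_{u_2(x_{\mathrm{start}})} \mathrel{-}= c \, \lambda
```

which holds exactly at convergence; a penalty version effectively replaces λ with p·g(u) and only enforces the tie approximately.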
I wish.
Using mortar in parallel (or something similar) has been on my wishlist for years now. Would allow us to benchmark the Ferret app against FFT codes. Maybe one day.
We found that using an InterfaceKernel (what some people call a penalty method with a conforming mesh) is a reasonable workaround. It does not enforce the constraints strongly, but it remains consistent with analytical solutions.
Is there a use case in 3D I can look at?
Does it work for enforcing elastic strain periodicity (instead of displacements)?
You might want to take a look at the folder moose/test/tests/interfacekernels. The kernel will depend on the physics, I think: in radiation transport (convection + reaction) we use upwinding; for diffusion we can use an interior penalty. You will also need to consider conservation.
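As a sketch of what a penalty-type kernel for the scaled node-tying use case above could look like (hypothetical class and parameter names, modeled on the InterfaceKernel API; not an existing MOOSE object):

```cpp
// Hypothetical InterfaceKernel that weakly enforces u = c * u_neighbor across
// a conforming interface via a penalty on the scaled jump (sketch only).
#include "InterfaceKernel.h"

class ScaledPenaltyInterface : public InterfaceKernel
{
public:
  static InputParameters validParams()
  {
    InputParameters params = InterfaceKernel::validParams();
    params.addRequiredParam<Real>("penalty", "Penalty coefficient p");
    params.addParam<Real>("multiplier", 1.0, "Scale factor c between the two sides");
    return params;
  }

  ScaledPenaltyInterface(const InputParameters & parameters)
    : InterfaceKernel(parameters),
      _penalty(getParam<Real>("penalty")),
      _multiplier(getParam<Real>("multiplier"))
  {
  }

protected:
  virtual Real computeQpResidual(Moose::DGResidualType type) override
  {
    // Penalize the scaled jump [u - c * u_neighbor] at each quadrature point.
    const Real jump = _u[_qp] - _multiplier * _neighbor_value[_qp];
    switch (type)
    {
      case Moose::Element:
        return _test[_i][_qp] * _penalty * jump;
      case Moose::Neighbor:
        return -_test_neighbor[_i][_qp] * _multiplier * _penalty * jump;
    }
    return 0.0;
  }

  const Real _penalty;    // larger p => tighter (but still weak) enforcement
  const Real _multiplier; // the scale factor c from the use case above
};
```

The Jacobian (computeQpJacobian) follows by differentiating the jump with respect to the two variables; everything is local to the interface, so it assembles fine in parallel as long as the mesh is conforming.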
Mortar is being re-written. We will have a new version out sometime in the next several months.
Any chance there is any update on this?
"We will have a new version out sometime in the next several months."
Next several months 🤣
"Any chance there is any update on this?"
I think there's a decent chance that we will have mortar in by the end of FY19.
Mortar tests segfault using mpiexec -n 4. I first noticed it on the 1D NodalEqualValueConstraint test, but it happens in the 2D and 3D tests in tests/mortar too.
Running, in the moose test directory,
mpiexec -n 4 ./moose_test-dbg -i tests/mortar/1d/1d_nodebc_name.i Mesh/uniform_refine=3
causes a segfault. Valgrind suggests:

Some more discussion at: https://groups.google.com/forum/#!topic/moose-users/XaicmUatBEA
Fixing this will allow using MPI for larger problems where Lagrange multipliers (LMs) are needed.