
Defining interface for applying constraints #281

Closed (adityaghantasala closed this issue 7 years ago)

adityaghantasala commented 7 years ago

Hello

@KratosMultiphysics/structural-mechanics @KratosMultiphysics/fluid-dynamics @KratosMultiphysics/gidproblemtype

We have been working on including multi-point/multi-freedom constraints (MPC) for structures. The implementation is now consolidated, working and ready to be merged. A short summary of what is implemented and how:

(The whole implementation can be found on the branch "multipoint_constraints"; there is also an example in the folder test_examples.)

Now I want to open a discussion on how one would like to interact with this utility, which sets up the constraint information that is used later on. Currently the utility has a function which can be used to apply a constraint:

void ApplyConstraint(Node<3> &MasterNode, VariableComponentType& MasterVariable, Node<3> &SlaveNode, VariableComponentType& SlaveVariable, float weight)

So it expects a master node and its DOF, and a slave node and its DOF, together with the weight with which they are connected. In effect it imposes the equation SlaveDOF = weight * MasterDOF.

So when there are multiple slaves and masters, one has to loop over them and use this function to apply each constraint. It would be nice to have some ideas on how one can enhance this utility. It would also be important to include this in the problem type.

For example :
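(A minimal sketch, not taken from the branch: it only illustrates how the current ApplyConstraint signature could be driven from a loop. The names r_model_part, mpc_utility, master_ids, slave_ids and weights are hypothetical.)

 // Hypothetical driver loop: couple DISPLACEMENT_X of each slave node to
 // DISPLACEMENT_X of its master node, i.e. SlaveDOF = weight * MasterDOF per pair.
 for (std::size_t i = 0; i < slave_ids.size(); ++i)
 {
     Node<3>& r_master = r_model_part.GetNode(master_ids[i]);
     Node<3>& r_slave  = r_model_part.GetNode(slave_ids[i]);
     mpc_utility.ApplyConstraint(r_master, DISPLACEMENT_X, r_slave, DISPLACEMENT_X, weights[i]);
 }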

RiccardoRossi commented 7 years ago

@adityaghantasala, so you finally decided to go for a new builder and solver? I thought that the idea of changing the scheme was to avoid doing that...

I will try to take a look at your code when I can.

adityaghantasala commented 7 years ago

@RiccardoRossi Yes, I still want to do it with a scheme, but for that I need pull request #247 to be finalized. Once this is done I will change it into a scheme. Here I would like to get more ideas on the interface.

RiccardoRossi commented 7 years ago

I will post some comments here as I see them while skimming through your code; other posts will follow.

You should use the flag SLAVE instead of defining a variable IS_SLAVE.

usage is

 node.Is(SLAVE)
 node.IsNot(SLAVE)

or

 node.Set(SLAVE)  //which corresponds to node.Set(SLAVE,true)

Note that you can also check whether the flag was actually set by doing

 node.IsDefined(SLAVE)

This would tell you whether the user actually made a choice for the SLAVE status.
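(As a small illustration, not from the branch, of how the flag can be queried while looping over the nodes of a model part; r_model_part stands for any model part here.)

 // Count the nodes explicitly marked as SLAVE, skipping nodes for which
 // the user never made a choice for the flag at all.
 std::size_t n_slaves = 0;
 for (auto& r_node : r_model_part.Nodes())
     if (r_node.IsDefined(SLAVE) && r_node.Is(SLAVE))
         ++n_slaves;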

adityaghantasala commented 7 years ago

@RiccardoRossi Noted, I will change that. Also, this SLAVE flag should be readily available, right? Without the application adding this flag and so on!

jcotela commented 7 years ago

@adityaghantasala Yes, the flag is already defined. See https://github.com/KratosMultiphysics/Kratos/blob/master/kratos/includes/kratos_flags.h

msandre commented 7 years ago

After reading the comments in #283 I will post here about the definition of the interface.

I agree with the comment of @RiccardoRossi:

On a further refinement of my idea, you actually do not need the dof itself. What you need is the Key of the variable and the Id of the node.

I am telling it since this may be key to a much easier implementation in the MPI case.

From my perspective it would look something like:

void ApplyConstraint(IndexType SlaveNodeId, VariableComponentType& SlaveVariable, IndexType MasterNodeId, VariableComponentType& MasterVariable, float weight, int PartitionId=0)

to register the node ids, variable keys and weight.

Now in the serial case it's still straightforward to get the master dof and equation id when you need to assemble the slave element contributions. In the parallel case you also need to know the partition id of the master node. With this you can set up the communication pattern to transfer the equation ids to the slave side at assembly time.

Based on the current implementation in Kratos, I think the simplest way is to provide the master data in the form (node id, variable, weight, partition id) to the interface on the slave partition. If the constraints are set in the pre-processor, the partitioner would need to ensure the node numbering is preserved and provide the partition id.
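(One possible shape for that record, purely as a sketch of the data a slave partition would need to keep per master entry; the struct and member names are hypothetical.)

 // Hypothetical per-master record stored on the slave partition: enough
 // information to locate the master dof later and to set up the MPI
 // communication with the rank that owns it.
 struct MasterEntry
 {
     std::size_t NodeId;       // global id of the master node
     std::size_t VariableKey;  // key of the master variable, e.g. DISPLACEMENT_X.Key()
     double      Weight;       // contribution of this master to the slave dof
     int         PartitionId;  // rank owning the master node (0 in the serial case)
 };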

Another possibility would be to determine the partition id on the fly in which case we only provide master data in the form (node id, variable, weight). For this to work, the node ids would need to be reordered according to partition and these changes would need to be synchronized with the constraints.

Finally, suppose we want to redefine the constraints at every solution step using the mapper to couple interfaces / volumes. In this case the variables are probably the same. The weights and partition ids of masters should be known from the mapper. It should also be possible for the mapper to transfer the node ids corresponding to the weights. The rest is identical to the above case. Within the builder and solver it might also be possible to map equation ids directly.

RiccardoRossi commented 7 years ago

Hi Mike, I am not sure whether we should pass a pointer to the slave node or its id.

(While I would definitely go for the id of the master.)

My point here is that one will generally "own" the slave node (I mean locally), as the slave is available at least as "ghost" for the normal builder and solver.

RiccardoRossi commented 7 years ago

Hi, to explain my idea a little better, let's talk (pseudo) code.

Let's suppose here that the following are member variables of the class MultiPointConstraint (a global one containing all the MPCs at once):

// note: std::unordered_map needs custom hash functors for std::pair / std::tuple keys
std::unordered_map< std::pair<unsigned int, unsigned int>,                                               // (slave node Id, slave variable key)
                    std::unordered_map< std::tuple<unsigned int, VariableComponentType, int>, double >   // (master node Id, master variable, partition Id) -> weight
                > 
                mDofConstraints; //this holds the definition of the constraint - can be constructed prior to EquationIds

//this stores a much simpler "map of maps" of EquationIds vs EquationId & weight
std::unordered_map< unsigned int, 
                    std::unordered_map< unsigned int, double >
                >
                mEquationIdToWeightsMap;

I believe that the class MultiPointConstraints should more or less have the following methods (similar to the ones you propose, but "global" to a model part instead of defined on a given node):

//this function is to be used when DEFINING the constraint; it can be called at the beginning, before the equationIds have been assigned
//Mike, I would call this "AddConstraint" since we are defining it here, not applying it
void AddConstraint(IndexType SlaveNodeId, VariableComponentType& SlaveVariable, IndexType MasterNodeId, VariableComponentType& MasterVariable, float weight, int PartitionId=0)
{
    //here we can get the dof since we are sure that such a dof exists
    auto& slave_dof = mp_model_part.GetNode(SlaveNodeId).GetDof(SlaveVariable);

    mDofConstraints[std::make_pair(slave_dof.Id(), slave_dof.GetVariable().Key())]
                   [std::make_tuple(MasterNodeId, MasterVariable, PartitionId)] += weight;
}

//this function is to be called in the SetUpDofSet of the builder, to condense the information of mDofConstraints into mEquationIdToWeightsMap
//note that this can be understood as assembling a global matrix P such that
//Ptrans * Kglob * P * xreduced = Ptrans * b
//however, if we want to do it prior to assembly we need fast access rather than fast multiplication; that's why I propose a map of maps
//an interesting feature of the design I am proposing is that we could switch to using SpMM at any moment if we wish
void ComputeEquationIdToWeightsMap()
{
    ...
}

Note that the function ComputeEquationIdTo... would be the place where the MPI syncing is to be done.

//we then need a small THREAD-SAFE function to be applied element by element on the basis of EquationIds alone (in the actual Build)
void ApplyConstraint( const Matrix& inputLHS,
                const Vector& inputRHS,
                const EquationIdVectorType& inputEquationId,
                Matrix& outputLHS,
                Vector& outputRHS,
                EquationIdVectorType& outputEquationId
)
{
       // here use the mEquationIdToWeightsMap ...
}
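(A serial-only sketch of what ComputeEquationIdToWeightsMap could do with the containers declared above, just to make the intent concrete. It assumes the builder has already assigned the EquationIds, reuses the mp_model_part member from the AddConstraint sketch, and leaves out the MPI syncing mentioned above.)

 void ComputeEquationIdToWeightsMap()
 {
     mEquationIdToWeightsMap.clear();

     for (const auto& constraint : mDofConstraints)
     {
         // slave side: (node id, variable key) -> slave equation id
         Node<3>& r_slave_node = mp_model_part.GetNode(constraint.first.first);
         for (auto& r_dof : r_slave_node.GetDofs())
         {
             if (r_dof.GetVariable().Key() != constraint.first.second) continue;
             const std::size_t slave_eq_id = r_dof.EquationId();

             // master side: (node id, variable, partition id) -> master equation id
             for (const auto& master : constraint.second)
             {
                 Node<3>& r_master_node = mp_model_part.GetNode(std::get<0>(master.first));
                 const auto& r_master_variable = std::get<1>(master.first);
                 // std::get<2>(master.first) is the partition id: in MPI the equation id
                 // would have to be fetched from the owning rank instead
                 const std::size_t master_eq_id = r_master_node.GetDof(r_master_variable).EquationId();

                 mEquationIdToWeightsMap[slave_eq_id][master_eq_id] += master.second;
             }
         }
     }
 }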

jcotela commented 7 years ago

Hi Riccardo, just as a note, the decision to pass the pointer to the DOF, and not just the EquationId, was motivated by the fact that storing only the EquationId led to some impractical situations. In particular, the auxiliary class storing the constraint data has to be created before setting up the strategy, but at that point the EquationId is not assigned yet, so we had to invent some kind of trick or dummy initialization (or call SetUpDofSet twice) to ensure that things worked as expected.

We figured that going with the pointer was the lesser evil. It also has the minor advantage of being more robust if the connectivity somehow changes during the simulation.

RiccardoRossi commented 7 years ago

Hi @jcotela, I understood your point. This is why I think we should have a "Define" function to which you pass the node and the variable, and then internally a database translating this to EquationIds.

Such a translation can be done efficiently once you have run the SetUpSystem of the builder and solver.

adityaghantasala commented 7 years ago

@RiccardoRossi one small question ... should we put this global MPC container in the builder and solver? Or in the scheme (I am not sure if that is possible)? The question then is: can we access this global MPC object from a scheme (assuming we want to apply/set up constraints from a process)?

Another opinion I would like to ask you about: is it in some way beneficial to have mDofConstraints on the nodes and mEquationIdToWeightsMap at the global level, with everything else as you mentioned? I was thinking of this for ease of access from a process or from the Python level. One could easily add the mDofConstraints there, and the builder and solver function would use this information to build the global map (with MPI when needed).

RiccardoRossi commented 7 years ago

I would create a variable

MPC_POINTER

and do

 model_part.SetValue(MPC_POINTER, mpc_container1 ) //a mpc_container2 may well exist!!

Then, when you need to retrieve it (ideally only once per build):

 auto pmpc = model_part.GetValue(MPC_POINTER)

My reasoning for having everything global is that one may switch from one list of constraints to another by just setting a pointer. Doing otherwise would imply looping through the list of nodes...

msandre commented 7 years ago

I didn't understand, what is SpMM?

A few more smaller decisions are:

Is it better to store the master ids/weights in arrays and use a single unordered_map from slave to master arrays (better memory locality) or to have nested maps (better flexibility)?

We should allow for different definitions of ApplyConstraint(). We have discussed two variants so far. One is as above, with Ptrans * Kglob * P * xreduced = Ptrans * b, and follows the more common approach of eliminating the slave dofs. The other preserves the slave dofs in the vector of unknowns, but any SPD property is lost.
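(Spelling the first variant out, as a standard restatement rather than anything taken from the branch: with every slave dof expressed through its masters, the assembled system is condensed as)

$$u = P\,u_r, \qquad K_{\mathrm{glob}}\,u = b \;\;\Longrightarrow\;\; P^{T} K_{\mathrm{glob}} P\,u_r = P^{T} b,$$

where $u_r$ collects the retained (master and unconstrained) dofs, the row of $P$ belonging to a retained dof carries a single 1, and the row belonging to a slave dof carries the weights of its masters.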

adityaghantasala commented 7 years ago

@msandre About the definitions of the ApplyConstraint() function: I did not understand the "Ptrans * Kglob * P * xreduced = Ptrans * b" part because, as I still have it in mind, we will follow the approach where we do not eliminate the DOFs. I am not considering the other approach, that is, eliminating the DOFs, as this would need a considerable amount of work.

About storing the data locally or globally: I think we can also achieve what @RiccardoRossi suggests, that is, switching the list of constraints, with the current approach of storing the data locally. But I think storing them globally will make things much more convenient and robust. Maybe there is an application which needs this feature.

@RiccardoRossi Is there any other application in KRATOS where one has to switch between lists of constraints? That means switching between given lists of constraints?

RiccardoRossi commented 7 years ago

Hi Mike, SpMM = Sparse Matrix Matrix multiplication.

One important feature of the proposed design is that the way we store the maps is actually a "detail", meaning that we will be able to change the internals in the future if we change our mind. Note that I am implicitly proposing constructing a class hierarchy of MPCs employing polymorphism. In any case, the important point is that if we accept, as I propose, to have something equivalent to a map of dofs (the mDofConstraints in my pseudo code) and to use it to construct internally the mEquationIdToWeightsMap, then we have freedom in how we actually construct those data structures.

Furthermore, if (as I believe is the case) we aim at building the MPCs on the fly, then we need a data structure that favours access time over insertion time. This is particularly true for the mEquationIdToWeightsMap, which will have to do the heavy lifting during the build phase.

As you say, we have a few options (let's say that id1 and id2 are the equation Ids of interest):

  1. (unordered) map-of-maps --> in this case: double& weight = mEquationIdToWeightsMap[id1][id2]
  2. (unordered) "single" map --> in this case: double& weight = mEquationIdToWeightsMap[std::pair(id1, id2)]
  3. (unordered) map-of-ordered vectors --> in this case: double& weight = mEquationIdToWeightsMap[id1][id2] (but we can get fast the whole row)
  4. employ a CSR storage --> double& weight = mEquationIdToWeightsMap(id1, id2)
  5. employ other Sparse matrix formats

I believe that both 1 and 2 should work nicely. For sparse access, 2 may be a little more optimized. I agree, however, that option 3 may be the best for the kind of scenario you propose. Option 4 (or maybe 5) would be the way to go for the "global" approach, if we eventually decide to use SpMM; I would rule them out for now.
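(Just to make options 2 and 3 concrete as declarations; option 1 is already written out earlier in the thread, and the member names here are illustrative.)

 // Option 2: a single unordered map keyed by the (slave, master) pair of equation ids.
 // std::pair has no std::hash specialization, so a small hash functor is needed.
 struct PairHash
 {
     std::size_t operator()(const std::pair<unsigned int, unsigned int>& p) const
     {
         return std::hash<unsigned long long>()((static_cast<unsigned long long>(p.first) << 32) ^ p.second);
     }
 };
 std::unordered_map<std::pair<unsigned int, unsigned int>, double, PairHash> mPairToWeight;

 // Option 3: one entry per slave equation id, with the row kept sorted by master
 // equation id so that the whole row can be fetched at once during the build.
 std::unordered_map<unsigned int, std::vector<std::pair<unsigned int, double>>> mRowToWeights;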

Regarding the function "ApplyConstraint", I am not sure I follow you. The "Ptrans * Kglob * P * xreduced = Ptrans * b" form is exactly equivalent to the element-wise approach if you pad the Ptrans matrix with zeros on the slave dofs. Note in any case that I would definitely make the function "ApplyConstraint" virtual, so that one can override it at will.

Indeed, other techniques exist to impose the MPCs; are you referring to this?
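(A rough sketch of the kind of hierarchy being suggested, with hypothetical class and method placement: the storage stays an internal detail of each derived class, and the element-level application is the virtual customisation point that a different imposition technique could override.)

 class MultipointConstraintBase   // hypothetical base class name
 {
 public:
     virtual ~MultipointConstraintBase() = default;

     // called when DEFINING a constraint, before the EquationIds exist
     virtual void AddConstraint(IndexType SlaveNodeId, VariableComponentType& rSlaveVariable,
                                IndexType MasterNodeId, VariableComponentType& rMasterVariable,
                                double Weight, int PartitionId = 0) = 0;

     // called from SetUpDofSet / SetUpSystem; the natural place for the MPI syncing
     virtual void ComputeEquationIdToWeightsMap(ModelPart& rModelPart) = 0;

     // thread-safe, EquationId-based hook used element by element during the Build;
     // a variant that keeps the slave dofs instead of eliminating them would
     // simply override this (and possibly the function above) differently
     virtual void ApplyConstraint(const Matrix& rInputLHS, const Vector& rInputRHS,
                                  const EquationIdVectorType& rInputEquationIds,
                                  Matrix& rOutputLHS, Vector& rOutputRHS,
                                  EquationIdVectorType& rOutputEquationIds) const = 0;
 };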

msandre commented 7 years ago

Hi Riccardo and Aditya, I believe you have two different techniques for imposing MPCs in mind and I just wanted to point out that we should stay flexible in this regard. Both can be formulated globally or at element level. @adityaghantasala I am not suggesting that you implement a different approach, only that you leave open the possibility for other variants to be implemented by others while reusing the remaining parts of your implementation. Virtual functions would allow this, but we would need to replicate this part of the code for serial/MPI versions.

RiccardoRossi commented 7 years ago

Hi @adityaghantasala, first of all, I'll stress again that what we need to do now is to define the API well. I am NOT asking you to implement different approaches; I would just like to leave open the possibility of doing it in the future. I do believe, however, that a comparison of different approaches could easily lead to a publication (apart from the increased know-how), and it is hence something we might want to invest in.

Answering your question: right now there are essentially no MPCs (only Lagrange multipliers), so I cannot give an example of what is there right now. It is easy to foresee, however, cases in which you have multiphysics, for example thermal and fluid sharing the same nodes. In this case you would want to have different MPCs on the "thermal_model_part" and on the "fluid_model_part", even though the nodes are in common.

adityaghantasala commented 7 years ago

Hi @RiccardoRossi, about having MPCs on different model parts: this is something important, I think. I am trying to make it work like this now, and I think the way you suggested is very convenient for doing it. I will follow it as far as possible.

adityaghantasala commented 7 years ago

Hello @RiccardoRossi, I am having a little trouble retrieving a pointer from the model part using

auto pmpc = model_part.GetValue(MPC_POINTER)

I create an MPC data variable in a process and store it in a map in this process object. Then I assign the MPC_POINTER variable to the pointer of this MPC data object using model_part.SetValue.

Now when I try to access the pointer from the builder and solver using GetValue, the pointer is null. I also tried it with other variables and it is the same: the values are zeros or NULLs.

Am I missing something in between?

Other than this, I think I have most of the stuff ready to test switching between sets of constraints.

RiccardoRossi commented 7 years ago

Hi Aditya, how do you define the variable?

Riccardo

adityaghantasala commented 7 years ago

Hi Riccardo

I am adding it in the standard way, in the application's variables .h and .cpp files.
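(For reference, a sketch of what that "standard way" roughly looks like, using the usual Kratos variable macros; MpcData is a placeholder for the actual class name, and whether the branch uses exactly these names is not guaranteed.)

 // in the application's variables .h file
 KRATOS_DEFINE_VARIABLE(MpcData::Pointer, MPC_POINTER)

 // in the corresponding variables .cpp file
 KRATOS_CREATE_VARIABLE(MpcData::Pointer, MPC_POINTER)

 // in the application's Register(), so the variable is known to the kernel
 KRATOS_REGISTER_VARIABLE(MPC_POINTER)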

RiccardoRossi commented 7 years ago

Hi Aditya, I expressed myself badly. Is your variable of type "Pointer" (which is a shared_ptr)?

adityaghantasala commented 7 years ago

Yes Riccardo, I am using a shared_ptr through the Pointer typedef of the class.

RiccardoRossi commented 7 years ago

Mmm, then I don't have any "clever" suggestion.

What I would do is to try the following:

  // take MPCTYPE as the type of your MPC
  auto pnew = boost::make_shared< MPCTYPE >(...)
  KRATOS_WATCH(&pnew)
  model_part.SetValue(MPCPOINTER, pnew)
  KRATOS_WATCH(&model_part.GetValue(MPCPOINTER))

RiccardoRossi commented 7 years ago

Also, are you sure you are setting it on the correct model part? The database of a model part is not shared with its sub model parts.

adityaghantasala commented 7 years ago

I am setting it on the main_model_part which the solver is using. So in my python script (pseudo code):

 solver.importModelpart(main_model_part)
 solver.initialize(main_model_part)
 mpcprocess = MpcProcess(main_model_part)
 mpcprocess.execute()   # ----> here I am setting the pointer with model_part.SetValue(MPCPOINTER, new)

So I am not using any sub model parts.

Inside the process object I can access the pointer and use the MPC object. But once it goes to the builder and solver, the pointer is null. Is it the case that the solver uses a different model part than the one passed to the processes? I do not think so, right?

RiccardoRossi commented 7 years ago

You may... do a

 print(main_model_part)

and then

 KRATOS_WATCH(model_part)

inside the builder and solver.

Actually, I think in there it is using a "ComputeModelPart", which is a sub model part of main_model_part.

MAYBE a better option is to set the MPCPOINTER with

 model_part.ProcessInfo.SetValue(MPCPOINTER, pmpc)

since the ProcessInfo is shared between the main model part and all of its sub model parts. Note that in that case you will also have to retrieve it from the ProcessInfo instead of directly from the model part.
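(Sketching the retrieval side of that suggestion, under the assumption that the variable holds the shared pointer; MpcData is again a placeholder type name.)

 // inside the builder and solver: the ProcessInfo is shared by the main model part
 // and all of its sub model parts, so this works whichever of them was passed in
 ProcessInfo& r_process_info = rModelPart.GetProcessInfo();
 if (r_process_info.Has(MPC_POINTER))
 {
     MpcData::Pointer p_mpc = r_process_info.GetValue(MPC_POINTER);
     // ... use p_mpc during SetUpDofSet / Build ...
 }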

adityaghantasala commented 7 years ago

Hello Riccardo

Both setting the MPC_POINTER on the compute model part and setting it in the ProcessInfo worked. Thank you for the hint. I will push the code as soon as I have cleaned it up.

adityaghantasala commented 7 years ago

Hello @RiccardoRossi, I pushed the version of the implementation based on your suggestions.

RiccardoRossi commented 7 years ago

Can we close this?