NuTo - yet another finite element library
https://nuto.readthedocs.io
Boost Software License 1.0

Evaluate in the PDE concept - data transfer from structure level to integrand and vice versa #43

Closed joergfunger closed 6 years ago

joergfunger commented 7 years ago

In the PDE concept, the general layout is as follows. The structure has a list of (quadrature) cells. The only function of these cells is the integration, thus attached to each cell is an integration type. Furthermore, in each cell, an interpolation for a specific dof_type is defined, including the coordinate interpolation. For each integration point, a separate integrand function is defined that specifies what is to be integrated (e.g. for a standard mechanics problem this is B^T \sigma or K = B^T C B). This could however also include the time integration: instead of assembling only K, directly assemble K + (a/t) C + (b/t^2) M, with a and b being coefficients from the time integration scheme, such as Newmark. This requires evaluating a variety of different objects (mass matrix, damping matrix, stiffness, residual, full hessian) with a variety of options and global inputs (residual with/without update of history data, implicit/explicit integration of history variables (IMPLEX, cycle jump with number of cycles), time step t). In order to avoid writing a separate global function (on the structure level) for each specific output (requiring different inputs), e.g.

Structure.EvaluateHessian0(), Structure.EvaluateHessian0(double t), ...

the following options are possible:

Solution 1: Option class

I would propose to write a single evaluate function that has an OptionsBase class as an argument. Note that this is very different from the current input/output approach, since it only includes global options (so no reallocation on the structural, element and constitutive level is required; it's just passed by reference). This class also defines what should be stored in the corresponding single output objects.

Hessian0 = Structure.EvaluateSparseMatrix(myOptionsHessian0);
Gradient = Structure.EvaluateFullMatrix(myOptionsGradient);
Hessian = Structure.EvaluateSparseMatrix(myOptionsFullHessianIncludingTimeIntegration);

Here we should decide whether we want to pass the output objects by reference (which has the advantage of being able to predefine the sparsity pattern of a sparse matrix) or to return the result directly. Both approaches would be possible. I would only write two global functions, one for full matrices (including vectors) and another one for sparse matrices. This could also be templated.
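
A sketch of what these two global functions could look like (hypothetical signatures, not actual NuTo code):

// hypothetical sketch of the two global evaluate functions discussed above
class Structure
{
public:
    // output passed by reference, so a predefined sparsity pattern can be reused
    void EvaluateSparseMatrix(const OptionsBase& options, SparseMatrix& result);

    // full matrices (including vectors) could instead be returned directly
    FullMatrix EvaluateFullMatrix(const OptionsBase& options);
};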

The options type would be an empty base class. For specific integrands, we would define specific options. Note that this options class can be redefined by any user in their main file and is then directly passed to the integrand. So the "user" can do very weird things without having to change any of the interfaces in NuTo, just by changing the options class and the integrand in the main file.

class OptionsTimeIntegration : public OptionsBase
{
private:
    ResultEnum mResult; // e.g. Hessian0, Hessian1, Residual, ...
    double mDeltaT;
    bool mUpdateHistoryVariables;
};

class OptionsNewmark : public OptionsTimeIntegration
{
private:
    double beta, gamma;
};

class OptionsRungeKutta : public OptionsTimeIntegration
{
private:
};

The disadvantage of this approach is a cast in the integrand from OptionsBase to the specific type, such as OptionsNewmark.
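
A minimal, self-contained sketch of this cast (hypothetical names, just to illustrate the disadvantage):

#include <iostream>

struct OptionsBase { virtual ~OptionsBase() = default; };
struct OptionsNewmark : OptionsBase { double beta = 0.25, gamma = 0.5; };

// hypothetical integrand function: has to recover the concrete options type
double EvaluateIntegrand(const OptionsBase& options)
{
    const auto& newmark = dynamic_cast<const OptionsNewmark&>(options); // throws std::bad_cast on mismatch
    return newmark.beta + newmark.gamma;
}

int main()
{
    OptionsNewmark options;
    std::cout << EvaluateIntegrand(options) << "\n"; // 0.75
}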

Solution 2: Shared object in the integrand

Another approach, proposed by @TTitscher and @Psirus, is to use a shared pointer in the integrand:

/* main */
Newmark newmark(...);
Reference<TimeStep> timeStepObject = newmark.GetTimeStep();

IntegrandCreep integrand(creepLaw, ..., timeStepObject);
// other integrands may not require this object.

This has the advantage that no cast in the integrand is required, since the type of the options object is known. Furthermore, no option object has to be passed. However, there are also some disadvantages.
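
A compilable mini version of this idea (hypothetical stand-in types; the Reference<TimeStep> above is sketched here with a shared_ptr):

#include <iostream>
#include <memory>

struct TimeStep { double deltaT = 0.0; };

struct Newmark
{
    std::shared_ptr<TimeStep> mTimeStep = std::make_shared<TimeStep>();
    std::shared_ptr<TimeStep> GetTimeStep() { return mTimeStep; }
};

struct IntegrandCreep
{
    IntegrandCreep(std::shared_ptr<const TimeStep> timeStep) : mTimeStep(std::move(timeStep)) {}
    double Evaluate() const { return 42.0 * mTimeStep->deltaT; } // uses the shared value
    std::shared_ptr<const TimeStep> mTimeStep;
};

int main()
{
    Newmark newmark;
    IntegrandCreep integrand(newmark.GetTimeStep());
    newmark.mTimeStep->deltaT = 0.1; // later changed by the time integration scheme
    std::cout << integrand.Evaluate() << "\n"; // the integrand sees the updated value: 4.2
}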

TTitscher commented 7 years ago

@joergfunger For a better understanding: can you maybe write some lines of code on how the integrand reacts to the OptionsBase object? You pointed out the disadvantage of the cast. Do you propose to have a specific integrand for each time integration scheme? If so, this would be very hard to test / get right. IMO, an integrand should not know about time integration schemes. And it should be fixed, tested and hopefully never touched again. Catchphrase: open to extension, closed to modification.

How should we define what output is to be calculated for our time dependent problems (mass, stiffness, combined hessian for time integration)?

This could be done on a CellBase-level. The public interface of CellBase contains Gradient(), Hessian0,1,2() and is valid for all cells and all integrands without modifications. Here, some kind of HessianCalculator like

using HessianFct = std::function<CellHessian()>; // actually calculates the hessian on demand

CellHessian HessianCalculatorNewmark::Get(HessianFct h0, HessianFct h1, HessianFct h2)
{
    if (GetNumTimeDerivatives() == 0)
        return h0(); // does not evaluate h1()/h2()
    else
        return h0() + beta/T * h1() + gamma ... h2(); // schematic: factors from the time integration scheme
}

could be used before the CellHessian is assembled

Assembler::BuildHessian()
{
  for (auto& cell : cells)
  {
    // pass the on-demand evaluations as callables, so unused ones are never computed
    CellHessian hessian = mHessianCalculatorBaseInstance.Get(
        [&] { return cell.Hessian0(); }, [&] { return cell.Hessian1(); }, [&] { return cell.Hessian2(); });
    mGlobalMatrix.Assemble(hessian);
  }
}

Edit: I just remembered from previous discussions that we wanted to solve the "time integration on element level" issue later. So maybe ignore the code details here... Let us focus on the pure options (time step and calculate static data).

Psirus commented 7 years ago

There seems to be some implied fixed interface for the cell, although maybe you differ on how that actually looks. I could imagine at least three possible interfaces,

Integrator::Integrate(Cell::Hessian0) // integrand is member of cell
Cell.BuildHessian0() // again, member
Cell.Integrate(integrand::hessian0) // integrand non member

possibly more. This would greatly affect other decisions and should be decided first. I'd prefer not to create even more classes, which eliminates the first option. And even though I don't know yet how I would handle the history data, I find the last option most appealing. But as I said, this list is not exhaustive.

TTitscher commented 7 years ago

We refer to the current version of the interface, defined here, which somehow works. You are right, nothing is set in stone. But we can start somewhere until problems arise. If we can come up with ideas to solve the history data problem in your last option, sure, why not. If you have the time, you could provide some basic code snippets so we can discuss the idea.

joergfunger commented 7 years ago

See #44 for the discussion on the implementation of the options class for the history data. The standard mechanics integrand is only written once, never touched again (in theory at least).

Regarding the public interface of the cell: here, I would not define Gradient, Hessian_0, ..., because those are strongly related to our (time dependent) problem. In order to make the interface general, there is only a single evaluate method as the only interface (maybe as a template) with different output objects like sparse matrices and full matrices (in the latter case, the output object has to be passed in as an input by reference, since overloading based on return type alone is not possible).
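
A small self-contained illustration of that last point (stand-in types, not NuTo's):

#include <vector>

struct OptionsBase {};
using FullMatrix = std::vector<double>; // stand-ins for the real matrix types
using SparseMatrix = std::vector<int>;

struct Cell
{
    // Overloading on the return type alone would be ill-formed:
    //   FullMatrix   Evaluate(const OptionsBase&);
    //   SparseMatrix Evaluate(const OptionsBase&); // error: differs only in return type
    // so the output object is passed in by reference and selects the overload:
    void Evaluate(const OptionsBase& options, FullMatrix& result) {}
    void Evaluate(const OptionsBase& options, SparseMatrix& result) {}
};

int main()
{
    Cell cell;
    FullMatrix full;
    SparseMatrix sparse;
    cell.Evaluate({}, full);   // dispatches on the output type
    cell.Evaluate({}, sparse);
}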

This evaluate routine then takes an additional options class that defines what has to be done on the integrand level (the cell does not have to know what is integrated). That's why I think the option Cell.BuildHessian0() is suboptimal. @Psirus Why do you think that the integrand being a member of the cell is problematic? In your last option (Cell.Integrate(integrand::hessian0), integrand non member), how would you loop over the integrands? My idea was to loop over the cells, in each cell loop over the integrands, and then sum up the contributions of each integrand with the corresponding weights and detJ. If the integrands are not part of the cell, where would I store them? In addition, the current design provides some cell data classes that store cell information that is required by all integrands in a cell (namely nodal values).

TTitscher commented 7 years ago

In order to make the interface general, there is only a single evaluate method as the only interface

I think, I get the idea and disagree. Consider the following math method

double Evaluate(double first, double second, eOption option)
{
    switch (option)
    {
    case eOption::Add:
        return first + second;
    default:
        throw NotImplementedException();
    }
}

Sure, this is very general. But I would prefer something like

double Add(double first, double second)
{
    return first + second;
}

Adding new functionality requires new code. In the first case, it is added as a new case in some switch and in the eOptions (recompiling everything, even though that is not the point here). In the second case, I just add a method. I think of it as if the eOption is indeed part of the interface. And it is constantly changing. And to see all the possible math operations, I have to look at the eOptions implementation instead of just the member functions.


I have no complete solution to the actual issue, but I would prefer one without this options class, maybe following the Solution 2: Shared object in the integrand from the first post.

/* main */
SomethingThatRequiresExactlyTheseOptions object;
Reference<ExactlyTheseOptions> options = object.GetExactlyTheseOptions();
MatchingIntegrandThatRequiresExactlyTheseOptions integrand(..., options);

This is very similar to the idea that MatchingIntegrandThatRequiresExactlyTheseOptions casts some OptionsBase pointer to ExactlyTheseOptions. And I think neither is ideal...

joergfunger commented 7 years ago

Here is another idea that somehow (at least imo) keeps the single evaluate function while avoiding the cast in the integrand. The general concept is to define a general cell like this:

template <class TIntegrand>
class Cell
{
public:
    //! @brief calculates the integrated output
    DofVector<double> Evaluate(std::function<DofVector<double>(TIntegrand& rIntegrand, const NuTo::CellData& rCellData,
                                                               const NuTo::CellIPData<TIntegrand::Dim()>& rCellIPData)> f)
    {
        DofVector<double> resultObject;
        CellData cellData(mElements);
        for (int iIP = 0; iIP < mIntegrationType.GetNumIntegrationPoints(); ++iIP)
        {
            auto ipCoords = mIntegrationType.GetLocalIntegrationPointCoordinates(iIP);
            auto ipWeight = mIntegrationType.GetIntegrationPointWeight(iIP);
            Jacobian<TIntegrand::Dim()> jacobian(mCoordinateElement.ExtractNodeValues(),
                                                 mCoordinateElement.GetInterpolation().GetDerivativeShapeFunctions(ipCoords));
            CellIPData<TIntegrand::Dim()> cellIpData(mElements, jacobian, ipCoords);
            resultObject += f(mIntegrand[iIP], cellData, cellIpData) * jacobian.Det() * ipWeight;
        }
        return resultObject;
    }

private:
    const ElementSimple& mCoordinateElement;
    DofContainer<ElementSimple*> mElements;
    const IntegrationTypeBase& mIntegrationType;
    boost::ptr_vector<TIntegrand> mIntegrand;
};

Note in particular the Evaluate function, which takes as an argument a std::function with the integrand itself and the parameters required from the cell (cellData, e.g. nodal values, and cellIPData, e.g. shape functions and their derivatives). This function is e.g. defined in a class of time dependent integrands:

template <int TDim>
class IntegrandTimeDependent
{
public:
    static constexpr int Dim()
    {
        return TDim;
    }

    static NuTo::DofVector<double> ResidualIntegrand(IntegrandTimeDependent<TDim>& rIntegrand,
                                                    const NuTo::CellData& rCellData,
                                                    const NuTo::CellIPData<TDim>& rCellIPData)
    {
        return rIntegrand.Residual(rCellData,rCellIPData);
    }

    virtual NuTo::DofVector<double> Residual(const NuTo::CellData& rCellData,
                                                     const NuTo::CellIPData<TDim>& rCellIPData) = 0;

};

ResidualIntegrand is a static function that takes the integrand itself as an argument. If someone has a better solution for this, please let me know.
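
(A hedged aside, not from the thread: the static forwarding function could also be replaced at the call site by a lambda, or by std::mem_fn from <functional>, assuming the Cell::Evaluate signature sketched above:)

// both lines assume the templated Cell::Evaluate(std::function<...>) from above
cell.Evaluate([](NuTo::IntegrandTimeDependent<2>& integrand, const NuTo::CellData& cellData,
                 const NuTo::CellIPData<2>& cellIpData) { return integrand.Residual(cellData, cellIpData); });

cell.Evaluate(std::mem_fn(&NuTo::IntegrandTimeDependent<2>::Residual)); // requires #include <functional>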

Now, the Mechanics integrand can be derived from the time dependent formulation and only has to implement the Residual method

template <int TDim>
class IntegrandMechanics : public IntegrandTimeDependent<TDim>
{
public:
    IntegrandMechanics(const NuTo::DofType& rDofTypeDisplacements, const MechanicsLaw<TDim>& rLaw)
            : mDofTypeDisplacements(rDofTypeDisplacements)
            , mLaw(rLaw)
    {
    }

    NuTo::DofVector<double> Residual(const NuTo::CellData& rCellData,
                                             const NuTo::CellIPData<TDim>& rCellIPData) override
    {
        NuTo::BMatrixStrain B = rCellIPData.GetBMatrixStrain(mDofTypeDisplacements);
        NuTo::NodeValues u    = rCellData.GetNodeValues(mDofTypeDisplacements);
        NuTo::DofVector<double> residual;

        Eigen::VectorXd strain = B * u;
        residual[mDofTypeDisplacements] = B.transpose() * mLaw(strain); // internal force B^T * sigma
        return residual;
    }

private:
    const NuTo::DofType& mDofTypeDisplacements;
    const MechanicsLaw<TDim>& mLaw;
};

Finally, the residual can be determined by

NuTo::DofVector<double> residual = cell.Evaluate(NuTo::IntegrandTimeDependent<2>::ResidualIntegrand);

In these time dependent integrands, I would suggest that time and delta time are standard arguments to be passed. How this can be done is still open to discussion. Since I prefer the cell not to know about this in detail (only e.g. via an options template class), my suggestion would be to introduce an additional options class that is just another template parameter, so the evaluate method in the cell would look like

template <class TOptions>
DofVector<double> Evaluate(std::function<DofVector<double>(TIntegrand&, TOptions&, const NuTo::CellData&,
                                                           const NuTo::CellIPData<TIntegrand::Dim()>&)> f,
                           TOptions& rOptions)

Unfortunately, the last lines do not compile yet (there seems to be a problem with using the template parameter as an argument of the std::function within the templated member function). Replacing it with int seems to work, but that does not serve the idea of having a general options class being passed to the Evaluate function:

template <class TOptions>
DofVector<double> Evaluate(std::function<DofVector<double>(TIntegrand&, int&, const NuTo::CellData&,
                                                           const NuTo::CellIPData<TIntegrand::Dim()>&)> f,
                           TOptions& rOptions)

This options class would be specific e.g. to time dependent PDEs, providing t (e.g. for calculating time dependent loads) and delta t (for rate dependent models).
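
One conceivable workaround (a hypothetical, self-contained sketch with stand-in types, not NuTo code): instead of fixing the callable type to std::function, deduce the whole callable type, which sidesteps the std::function template machinery entirely:

#include <iostream>

struct TimeOptions { double t = 0.0, deltaT = 0.1; };

// sketch: deduce the callable type TFunction instead of spelling out a std::function
template <class TOptions, class TFunction>
double Evaluate(TFunction f, TOptions& options)
{
    return f(options); // would be the integration loop in the real cell
}

double Residual(TimeOptions& options) { return options.t + options.deltaT; }

int main()
{
    TimeOptions options;
    std::cout << Evaluate(Residual, options) << "\n";                                    // free function
    std::cout << Evaluate([](TimeOptions& o) { return 2 * o.deltaT; }, options) << "\n"; // lambda
}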

TTitscher commented 7 years ago

I'm still trying to understand the details. A short, maybe crucial remark (which I did not recognize this afternoon, sorry...): Cell::Evaluate contains a TIntegrand in its arguments. So it is not possible to derive cells with different TIntegrand from the same base class. They can only live in a container that is a template itself, like

template <class TIntegrand>
class Domain
{
    OwningVector<Cell<TIntegrand>> mCells;
};

Polymorphism (on cell level) is gone.

I mean, now we can think about the advantages of this. We could have a Domain for each integrand (nodes may be shared between Domains). This is far from our current approach and likely to evolve towards a header-only NuTo. We can explore that as well.

Ok. I continue looking at the rest of your idea.

joergfunger commented 7 years ago

Well, this is only partially true. You are right that the polymorphism in the cell is lost. On the other hand, the polymorphism in the integrands themselves still exists. In the above example, IntegrandTimeDependent would be the base class for all time dependent integrands, providing the general interface gradient, hessian0, ... Those functions (from the base class) are called in the global routines, e.g. Newmark. Similarly, I would write a mechanics integrand that could essentially call the currently used evaluate routines from the constitutive formulations, i.e. a new constitutive formulation does not require a new integrand at all. However, if someone wants to directly write their own integrand without any constitutive formulation attached, they are free to do so. The great advantage in my opinion is that the user can easily implement and test very specific models by just providing an integrand and somehow the options on a global level. There is no need to implement

Structure.EvaluateGradient()
Structure.EvaluateGradientViscous(double delta_t)
Structure.EvaluateCycleJump(numTimeSteps)

and similarly the corresponding cell routines, which would be necessary when keeping the polymorphism at the level of the cell.

TTitscher commented 7 years ago

1) I am sure that no one accepts the three (and more) different Evaluate methods at the bottom of your comment as the way to go.

2) The problem with the polymorphism. Maybe I did not fully understand your explanation. We currently have a container of ElementBase at the structure and all elements fit in there. How can you put cells with different TIntegrands in the same container, in the above implementation?

joergfunger commented 7 years ago

1. Good point, I did not really consider this problem. Any suggestions?
TTitscher commented 7 years ago

Maybe reformulating the problem helps. Each time integration scheme interacts with integrands or cells in a different way. And we try to somehow pass all that through the same interface. It is very hard to design that interface to be valid for all (future) demands. And ideally, I should not need to touch (= recompile) my "integrand library" for new time integration schemes [and maybe, this is not possible...].

Anyways, the motivations behind my suggestions:

1. An interface for basic interaction with history data. Set/Get methods for history data are required by all time integration schemes, and they are the same for all integrands. That's why, I think, this would be a valid interface. Ok, the arithmetic operators for each history data type may violate polymorphic principles; I'll think about that.
2. Communicate options outside the interface, e.g. the time step, through shared variables. If a specific time integration scheme needs to share its information with a specific integrand type, this should not be squeezed into the general interface for all time integration schemes and all integrands. E.g. this should not be passed through the assembly.
3. (Please note that I am not fully happy with both suggestions above...)

Psirus commented 7 years ago

TL;DR: Take a look at this (compilable) gist, and see if it makes sense to you.

Okay, the direction of my idea is a bit different from what has been proposed, but it shares the approach of "I give a function to the cell, which is then evaluated". As I've previously hinted, I'd like the integrand to be "outside" of the cells. So in the simplest case, the cell does something like this:

class Cell
{
public:
    double Integrate(std::function<double(double)> f)
    {
        double sum = 0;
        for (auto& ip : mIntegrationType)
            sum += ip.weight * f(ip.location);
        return sum;
    }
private:
    IntegrationType mIntegrationType;
};

where f is one of the functions of your integrand, say Integrand::Gradient. Of course, this is not nearly generic enough. We're going to need more information than just the IP location, and we want to return more than just doubles, mainly matrices and vectors.

Now, I think that using std::function is not versatile enough in this case. You may need to save the state of your integrand, add different options, etc., and it gets unwieldy pretty quickly. Instead, I propose using the Command pattern where you "promote 'invocation of a method on an object' to full object status", e.g. "calculate hessian" becomes a command object that you can pass to the integration function of the cell:

class Cell
{
public:
    template <typename T>
    T Integrate(Operation<T>& op)
    {
        return 2 * op.Calculate(*this); // "2 *" stands in for the actual integration
    }
};

cell.Integrate(hessianOperation);

where 2*op is just a placeholder for the actual integration. Now you can integrate any Operation that has such a Calculate method. What happens inside the operation the cell need not know. ("Command decouples the object that invokes the operation from the one that knows how to perform it.")

An additional benefit might be that "materializing commands as objects means they can be passed, staged, shared, loaded in a table, and otherwise instrumented or manipulated like any other object."
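
(For reference, a self-contained toy version of this pattern, with dummy values rather than the gist's actual FEM code:)

#include <iostream>

class Cell; // forward declaration, the command only needs a reference

// "calculate hessian" promoted to full object status
template <typename T>
struct Operation
{
    virtual T Calculate(Cell& cell) = 0;
    virtual ~Operation() = default;
};

class Cell
{
public:
    template <typename T>
    T Integrate(Operation<T>& op)
    {
        return 2 * op.Calculate(*this); // "2 *" stands in for the integration loop
    }
};

struct HessianOperation : Operation<double>
{
    double Calculate(Cell&) override { return 21.0; } // dummy hessian value
};

int main()
{
    Cell cell;
    HessianOperation hessianOperation;
    std::cout << cell.Integrate(hessianOperation) << "\n"; // 42
}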

How it all works together can be seen at this gist, which should compile and run.

A few words on the interface: std::function<DofVector<double>(TIntegrand&, TOptions&, const NuTo::CellData&, const NuTo::CellIPData<TIntegrand::Dim()>&)> f seems pretty wild to me, but I also know that I need to get this information to the function somehow. I have simply passed the cell itself to the operation. That has the additional advantage that I don't need to know beforehand whether the operation needs N or B or whatever else I would otherwise have to compute. If it calls cell.GetN(), it does; if it doesn't, it doesn't.

I know that my proposal doesn't explain how to deal with history data or options and all the other complicating factors, but solving everything at once probably leads to suboptimal solutions, so I leave it at that for now. Happy hacking.

joergfunger commented 7 years ago

The approach is quite interesting. There are several remarks from my side

Psirus commented 7 years ago
  • The cell should not include the gradient/hessian routine [...]

You're right it shouldn't, and it doesn't.


  • Passing the cell to this function is essentially similar to the above procedure where CellData and CellIPData are passed. [...]

It is similar, and we agree that repeated evaluation of the same thing should be avoided.

Passing options or handling history data has not been considered.


  • One problem I see is the op.Calculate(*this) method in the cell [...]
TTitscher commented 7 years ago

Many other issues are somehow related to the PDE concept. So we have to find a solution soon, even if it's not the best. This is not the first time we perform major changes to the interface, and it will probably not be the last time. My suggestion is to have a meeting soon, decide something and start. Even if the solution is not optimal in every detail, it will be much better than our current approach, since:

joergfunger commented 7 years ago

I would suggest setting up an example problem where everyone who feels obliged can contribute an example of how they would solve it. Currently, there are very interesting approaches, and I think we are converging to a solution, but for the final decision on what is a possible option, the example implementation should at least cover the following issues

Even though this is already a somewhat detailed list, I think that we should agree on the initial design for these points - knowing that there are many more issues that will pop up after the initial design is complete.

joergfunger commented 7 years ago

During the last meeting, the discussion on the new PDE concept was rather controversial and we did not agree on a general concept. Before discussing the fundamental differences and difficulties of the approaches, I would like to summarize what we have already agreed on, then discuss the interface (how do we want to use it) and finally compare the different approaches.

What we already agreed on (imo)

How general should the interface be?

The standard usage would be the computation of matrices (stiffness, mass) or the internal/external gradient. Note that the external gradient (external force) is implemented in the same framework; it is just another integrand that specifies a time dependent volume/surface load to be integrated. Also note that this might include cells of different dimension in a single structure (e.g. 3D volume cells with 2D surface cells). The time integration scheme calls the structure to evaluate e.g. the stiffness matrix. The structure then loops over the cells, each cell loops over its quadrature points, computes the individual quadrature point contributions and sums them up to the cell matrix. This cell matrix is then passed to the assembler that generates the global (sparse) matrix. So in our standard case, we would need to implement only a very limited set of functions.

Each of these functions has a different output type (hessian: matrix, gradient: vector, update: no output, extract postprocessing data: matrix with numIP x numData). So in theory this could be unified into 4 separate routines on the structural level (Structure.Gradient(), Structure.Hessian(int numTimeDerivative, ...), ...), 4 routines on the cell level and 2 for the assembler (only matrix and vector have to be assembled).
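
A sketch of these routines (hypothetical signatures, just to make the counting above concrete):

// hypothetical signatures for the limited interface counted above
struct Structure
{
    DofVector<double> Gradient();                     // internal/external gradient
    DofMatrix<double> Hessian(int numTimeDerivative); // 0: stiffness, 1: damping, 2: mass
    void UpdateHistoryData();                         // no output
    Eigen::MatrixXd ExtractIpData();                  // numIP x numData, for postprocessing
};
// ... the same four routines on the cell level ...
struct Assembler
{
    void Add(const DofMatrix<double>& cellMatrix); // assemble into the global sparse matrix
    void Add(const DofVector<double>& cellVector); // assemble into the global vector
};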

Now different integrands might need different options (henceforth called global options); in our time dependent standard problem these are the current time (e.g. to calculate external loads), the time step (e.g. in viscous models) and whether an update of the history data has to be performed during the gradient calculation (relevant for explicit time integration schemes, where the evaluation of the right hand side and the update are done at the same time). The way these global options are passed from the time integration scheme to the integrand is a controversial point in the discussion (see the next section).

Apart from that, we want to allow a certain generality of the implementation, to allow specific extensions without changes to the code that affect everyone.

Current examples that I have in mind are the cycle jump technique as well as the PGD approach. The cycle jump approach is basically a technique where the damage accumulation (or, more generally, the increment of the history variables) is calculated for a single fatigue cycle; the increment of the history variables is then multiplied with the number of cycles (assuming that the increment is small and constant over the subsequent cycles). With these new (extrapolated) history variables as the starting point, equilibrium has to be determined (so determining the displacement dofs, but with fixed history data), and then a new cycle with the new increment of the history variables is computed.

In the PGD approach, the discretization includes not only the standard spatial discretization (coordinates); the displacements are also a function of other parameters such as the load at a specific point, the time, a material parameter (say, e.g., Young's modulus) or any other free parameter. The solution is then obtained in an iterative way by approximating f(x,t,E) = sum_{i=1}^N r_i(x) s_i(t) v_i(E). For each fixed i, one of the functions r, s, v with the corresponding dofs is calculated, whereas the others are assumed to be fixed (fixed point iteration). This might e.g. require evaluating the derivative of the stress with respect to Young's modulus (given a prescribed set of history data).

For these two examples, there are additional functions (elastic stiffness, extrapolateHistoryData, derivative of the internal force vector with respect to Young's modulus) that are not implemented in the standard integrand and require other global options (such as the number of cycles to extrapolate with).

Last but not least, I would keep the option of directly assembling into the global matrix: instead of assembling K and M into sparse matrices, then adding up K + 1/(delta t) M and assembling this into the matrix appropriate for the solver, assemble K + 1/(delta t) M already on the cell level (for each cell/element) and insert this small full cell matrix directly into the solver structure (e.g. Trilinos). In this case, we would need another method for the hessian (that requires the time step) as well as another operator for the gradient (that includes M*a).

Possible approaches to meet the above criteria

The first question is how to pass the global options (such as time or delta time) to the integrand.

In most standard cases, this information is not needed (the time is only relevant e.g. for external loads, delta time only for viscous constitutive models). A straightforward suggestion is to construct each integrand that requires a specific global option with a reference/shared pointer to a global variable, e.g. the time step of the time integration scheme. The advantage of this approach is that the interface is very clean (since no options have to be passed) and that it is very flexible, since almost all options could be transferred to the integrand.

The first disadvantage of this approach imo is that the user interface and the information passing are very complex and kind of hidden (when calling an integrand function, am I really sure that all integrands hold the same reference to the option and, more importantly, have I set all relevant options correctly?). This can be illustrated e.g. with the Newmark scheme and the global time that is at some point modified (e.g. when convergence is reached). At this point, an update of the history variables is performed and the time step is increased. In the current implementation, the order of the two steps is not important (since currently nobody uses time as input to the constitutive model). When using the global time directly as a shared variable in the integrand, the order of the two steps might influence the update (either the old or the new time is used). I'm not saying that this could not be solved, but standard users (that don't have the global time as a parameter for the integrand) won't even realize that there might be something wrong (since they don't pass time anywhere to the integrand).

In a nutshell, I propose a fixed set of parameters/options for a specific set of integrands (such as time dependent integrands that can be used in the standard Newmark scheme, or the cycle jump integrands). If this set of options is fixed, it could also be passed (via reference, or maybe even bundled in an options class) to the integrands.

Other (solvable) questions are related to the allocation of the integrands and the setting of the references/shared pointers to the global variables (such as t or delta t of the time integration scheme). This would either mean allocating the time integration scheme first, so that these global variables can be passed in the constructor of the integrand; or we first construct the mesh, then construct the time integration scheme, and in another step set the shared pointers for the variables in the integrands.

Another issue is related to the function interfaces that we provide on the structure level (and that are then also implemented on the cell and the integrand). The standard functionality that we currently have are the gradient, the hessians (0..2) and ip data for visualization purposes. The hessian calculations could be unified in one method (with the derivative as a parameter). However, a consistent implementation of this shared variable approach would be to have just one method that calculates a (hessian) matrix - how this matrix is calculated would be decided by a shared variable passed to each integrand. This would then result in a kind of switch-case in each integrand, depending on the global parameter, that decides whether - when calling the matrix routine - the mass, damping or stiffness matrix is calculated, or (as a fourth option, if we directly want to assemble the complete hessian) K + 1/t² M.

In order to circumvent this problem, I would propose to use different methods (stiffness, damping, mass, full hessian of Newmark) for each task. However, this would mean implementing each method several times (on the structural level, the cell level and then the integrand level), even though on the cell and structural level we would just do a copy and paste. To avoid this, the operator concept is quite handy. The idea is to define an Integrate method on the structural as well as on the cell level. These functions take an operator class as a parameter, which itself provides an Integrate method.

class Hessian0 : public Operator::OperatorRMatrix
{
public:
    NuTo::DofMatrix<double> Integrate(NuTo::IntegrandBase& rIntegrand, const NuTo::CellData& rCellData,
                                      const NuTo::CellIPData& rCellIPData) const override
    {
        return rIntegrand.Hessian0(mT, mDeltaT, rCellData, rCellIPData);
    }

private:
    double mT;      // global time, set by the time integration scheme
    double mDeltaT; // current time step
};

When the cell's Integrate method loops over all integrands/integration points, it calls the Integrate function of the operator:

/// Integrate function in the cell to evaluate a matrix
DofMatrix<double> Integrate(const Operator::OperatorRMatrix& rOperator) override
{
    DofMatrix<double> resultObject;
    CellData cellData(mElements);
    for (int iIP = 0; iIP < mIntegrationType.GetNumIntegrationPoints(); ++iIP)
    {
        auto ipCoords = mIntegrationType.GetLocalIntegrationPointCoordinates(iIP);
        auto ipWeight = mIntegrationType.GetIntegrationPointWeight(iIP);
        Jacobian<TDim> jacobian(mCoordinateElement.ExtractNodeValues(),
                                mCoordinateElement.GetInterpolation().GetDerivativeShapeFunctions(ipCoords));
        CellIPData cellIpData(mElements, jacobian, ipCoords);
        resultObject += rOperator.Integrate(mIntegrandBase[iIP], cellData, cellIpData) * jacobian.Det() * ipWeight;
    }

    return resultObject;
}

Further details of a complete trial implementation can be found in https://git.bam.de/ttitsche/PDE_Framework/tree/OperatorVersion

In a nutshell, we would need to write Integrate functions on the structure and cell level that integrate into a vector or a matrix (and another, empty one, for just looping over the integration points to e.g. perform an update of the history data). For each task, an operator has to be written; this operator could also store the global options that have to be passed to all integrands.

Note for the ones that are familiar with the previous templated approach: in order to avoid the template on the structure, each integrand is now derived from a base integrand that provides all possible methods as virtual functions implemented with an exception. In this way, any operator can call those methods. There are two drawbacks of the approach. First, adding functions to the integrand always requires adding the method as an additional virtual function to the base integrand. However, I hope this is not too much of a problem, since we currently only have a limited set of possible extensions in mind. Another disadvantage is the possible call of integrand functions (e.g. the gradient function) on integrands that do not implement this functionality (e.g. cycle jump integrands). This will result in a runtime error (vs. a compile time error for the templated version).
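
A self-contained sketch of this pattern and its runtime-error drawback (names are stand-ins, not the actual NuTo classes):

#include <stdexcept>

// all possible methods live in the base, implemented with an exception
struct IntegrandBase
{
    virtual ~IntegrandBase() = default;
    virtual double Gradient() { throw std::runtime_error("Gradient not implemented"); }
    virtual double ExtrapolateHistoryData() { throw std::runtime_error("ExtrapolateHistoryData not implemented"); }
};

struct IntegrandCycleJump : IntegrandBase
{
    double ExtrapolateHistoryData() override { return 1.0; } // only this method is provided
};

int main()
{
    IntegrandCycleJump integrand;
    IntegrandBase& base = integrand;
    base.ExtrapolateHistoryData(); // fine
    // base.Gradient();            // compiles, but throws at runtime
}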

Implementation approach

The final discussion is related to the implementation approach we are following to implement the (substantial) changes into NuTo. The first option is a gradual rewrite of the mechanics module, the second is starting from scratch and adding all the functionality until all the test files are running again.

I'm open to both approaches, though personally I would prefer to start from the current mechanics module, completely remove all the functionality that needs a rewrite, and then iteratively add the new approach until all tests are working. This preference is primarily due to the fact that I would like to have small pull requests to still allow strategic design decisions to be discussed, and also to allow possible changes to be implemented without requiring major changes of the code again (which would be the case if the first version that actually compiles were already the final version with all changes implemented).

Looking forward to feedback from anyone that is strongly affected by these changes.

Psirus commented 7 years ago

Options, options, opioids

My assumption was that we needed to get a multitude of parameters through a common interface, i.e. someIntegrand.Hessian0(someOption), anotherIntegrand.Hessian0(differentOption), stupidIntegrand(ahhMoreOptions). The solutions proposed were either "use references" or this "Operator" pattern. Now that I'm reading it, the differences are not so much in the integrands, but in the "methods", i.e. "standard", cycle jump, PGD. Still, if we want to reuse the cells, their interface to the structure and the interfaces of the integrands need to allow this.

I'm trying to come up with a short overview to make discussion easier. If I'm missing a pro/con or have misrepresented an idea, please comment.

Use references

// in main
auto newmark = Newmark();
auto& time = newmark.GetTime();
auto integrand = CreateIntegrand(time);
// in time integration
cell.Evaluate();
// in Cell
integrand.Evaluate();
// in Integrand
something = mTime * someThingElse;

Pro:

Con:

Casting

// in main
auto newmark = Newmark();
auto integrand = CreateIntegrand();
// in time integration
auto timeOption = TimeOption(); // inherits from OptionBase
cell.Evaluate(timeOption);
// in Cell
integrand.Evaluate(option); // passed as OptionBase&
// in Integrand
auto& time = dynamic_cast<TimeOption&>(option);

Pro:

Con:

Operator

// in main
auto newmark = Newmark();
auto integrand = CreateIntegrand();
// in time integration
auto hessianOperator = Hessian(time);
cell.Evaluate(hessianOperator);
// in Cell
op.Evaluate();
// in Operator
integrand.Evaluate(mTime);
// in Integrand
something = time * someThingElse;

Pro:

Con:


So I'd probably vote for Operator, but am open to argument.

Implementation approach

I'll admit it: I'm afraid of "the rewrite". As much as I like the thought of bulldozing the whole caboodle, imho we'll end up with something that does only 75% of what the pre-PDE version did.

I'm more in favour of an incremental approach, so for example:

Ideally, we would have a way of tracking duplicates, so they don't linger around after we're done. I think this approach is more feasible than deleting before you start, and you would still have bite-sized PRs.

Psirus commented 7 years ago

I'd have to revise my stance on the Operators as they are now. Somehow, I thought they would be

Operator::Evaluate()
{
    mIntegrand.Evaluate(mTime); // i.e. the Operator has its own DerivedIntegrand
}

This would have allowed decoupling the interface of Cell (just takes an Operator) and of Integrand (takes whatever the Operator gives it). But in the current proposal, the cell passes its integrand to the operator, essentially deleting the type information of the integrand. Now the whole thing seems pointless, since there is no decoupling anymore. Sorry that I did not see this earlier.

TTitscher commented 7 years ago

@Psirus This is because of the history data. It is stored at the integrand. IMO this is reasonable, since the integrand knows exactly what type of history data it needs. However, this requires that all operators need the integrand as an argument. And since the integrand itself has to be stored somewhere - as IntegrandBase, the type information is lost here.

One more remark concerning this topic, as implemented here: this does not allow you to freely write your own integrand. Every custom method has to be implemented as a virtual function in IntegrandBase.h. So if you write your custom time integration with custom options, there is no way to pass them to your custom integrand (without modifying NuToLib::IntegrandBase). But IMO this was a requirement of the new concept.

As always, this could be solved via dynamic_casts...

struct CustomHessian : public Operator<Matrix>
{
    Matrix Evaluate(IntegrandBase& integrand, celldata, ipdata)
    {
        CustomIntegrand& ci = dynamic_cast<CustomIntegrand&>(integrand);
        return ci.Hessian(mOptions, celldata, ipdata);
    }

private:
    CustomOptions mOptions;
};

Psirus commented 7 years ago

I'd value "open to extension, closed to modification" over "no casts!!!11", so no virtual mySpecialFunction in IntegrandBase, but rather an admittedly not-so-nice cast in the Operator.

joergfunger commented 7 years ago

As a consequence, this would mean that we do not need a base integrand class, but rather implement the time dependent integrand as the standard, with functions for gradient, hessian0..n. In the case of other integrands - such as cycle jump - we would always require a dynamic_cast. Fine with me as well.

TTitscher commented 7 years ago

I am sorry to delay this whole PDE process, but I am still far from convinced of this operator concept...

It is not possible to squeeze any kind of options through a common interface in a nice way. One solution is to not use a common interface ==> everything is a template. But IMO this is not a nice way either, it has its own problems, and we voted against that. So back to the common interface. I still vote for the reference idea, and I am aware that it is not perfect either. But let me try to address some criticism.


when calling an integrand function, am I really sure that all integrands hold the same reference to the option and, more importantly, have I set all relevant options correctly?

and related:

This would either mean allocating the time integration scheme first, so that these global variables can be passed in the constructor of the integrand; or we first construct the mesh, then construct the time integration scheme, and in another step set the shared pointers for the variables in the integrands.

I would set all relevant options in the ctor. Not much room for errors here, since all references have to be set. For a specific case, another idea is to allocate the TimeControl object first and pass it to the time integration scheme and the relevant integrands in an arbitrary order.


[In Newmark ...] In the current implementation, the order of the two steps is not important (since currently nobody uses time as input to the constitutive model). [...] the order of the two steps might influence the update

As you already said, this can be solved and checked/documented with tests.


[Regarding Hessian0/Hessian1/Hessian2] ... In order to circumvent this problem, I would propose to use different methods (stiffness, damping, mass, full hessian of Newmark) for each task. However, this would mean implementing each method several times (on the structural level, the cell level and then the integrand level), even though on the cell and structural level we would just do a copy and paste.

Only the method declaration would appear three times. Copy-paste of the actual code can (almost always) be avoided, e.g. by using lambdas. (And ideally, this should have been a free function that can be used for other cell types as well...)


"impure" functions, stateful, members, flags, interactions, assumptions, misery

What do you mean by that? (And NuTo almost only consists of impure functions, why is it a big deal now?) In my idea, all references have to be const and the underlying variable can only be changed in one place. And it is by no means a global variable that is accessible from every part of the program. It is just a reference to an object. Would you call the IntegrationTypeGauss4N4Ip a global variable, just because it is allocated only once in most of the cases? Just wanted to make that clear since I am also strictly against "global variables" (which would solve our problem as well... Just define a global double time and use/change it whenever you need to. This would be devastating.). But maybe this is just a misunderstanding.

Have a look at the elements, for example. They all have references to InterpolationType, IntegrationType, ConstitutiveLaw, Nodes and so on. They aren't even const and can be changed from the outside. The value of the history data, the nodal values, some parameters of the constitutive law or even the number of integration points may change from one Evaluate() to another. There often is a Set... method for that; however, it has the same flaws as an outside reference. You never know who modifies what when. Sure, we can get rid of 'impure' functions and make everything static. Or not. I am just trying to understand the problems with the reference approach.


And just for the records. IMO the whole casting discussion for the operator eliminates the nice interfaces pro argument from the list above (even though it is very subjective anyways). :smile_cat:

Edit: Quickly adding what I dislike about the operators

Psirus commented 7 years ago

What do you mean by that?

Well, compare what the Integrand would look like, reduced to this difference. First pass by value

#include <iostream>

struct Foo
{
    double f(double x)
    {
        return 7 * x;
    }
};

int main()
{
    Foo foo;
    std::cout << foo.f(5.0);
}

vs. references

#include <iostream>

struct Foo
{
    Foo(const double& x) : mX(x) {}

    double f()
    {
        return 7 * mX;
    }

    const double& mX;
};

int main()
{
    double blurb = 5.0;
    Foo foo(blurb);
    std::cout << foo.f();
}

The first is shorter and easier to understand. Testing is trivial. Any discussion about ownership or lifetime becomes moot, whereas with the second these need to be carefully considered (think about how often we discuss this). You need to be sure nobody changed blurb between two calls to f if you want to get the same result.

NuTo almost only consists of impure functions, why is it a big deal now?

It is not a huge deal, but almost every blog post/SO answer/software engineering book/conference talk I see/read considers pure functions to be the better alternative if one is available. Sure, this needs to be weighed against other concerns, but all else being equal, I'd choose the "purer" design.

Would you call the IntegrationTypeGauss4N4Ip a global variable, just because it is allocated only once in most of the cases?

I never called this reference approach a "global variable". Also, I could make the case that the element/cell Integrate would be "better" if the IntegrationType were not a reference. But this would need to be weighed against e.g. where to store the affiliation between Cell and IntegrationType, and how to pass this to Integrate and so on; given these concerns, the reference is quite nice there.

My argument was really based on "all else being equal", and I can see how the current design of Operator, Integrand and Cell doesn't satisfy this condition. This doesn't mean that references, coupling and hidden dependencies are without costs.

TTitscher commented 7 years ago

Indeed, your value vs reference example makes the references look silly. And I get the point now. The references we currently use are not intended to change. They could be stored as values/unique_ptr; the reference is only used to avoid storing all this data a million times. The references in my approach are intended to change and are really function arguments that bypass the interface without casts. And it is certainly not best practice. We could even introduce a custom type to make that clear. Like

template <typename T>
using InterfaceBypassReference = std::reference_wrapper<const T>;

template <typename T>
using IntendedToChangeFromOutside = std::reference_wrapper<const T>;

... or a custom type

template <typename T>
struct OptionsReference {...};

Well, I am not sure how to go on from this point. I propose to implement a stripped down basic version of our PDE_Framework: a linear elastic mechanics law and a thermo law (to have some coupling), plus their integrands first. No history data, no options, just Integrand::Gradient(...) and Integrand::Hessian(...).

We currently discuss the next layer, the Cell. The Cell might have a Cell::IntegrateMatrix/Vector(Operator) or directly Cell::Gradient()/Hessian(). Based on the outcome of our discussion, we can easily change that. Then, on the structure level, there will again be a Structure::Gradient() (or Assembler::Gradient(cells)?). This either calls Cell::Gradient() directly or wraps it in an Operator. Easy to change as well. [And we proved that both concepts are capable of handling history data and options in this PDE_Framework repository, each with its own ugliness.]

And from there on, we can work on all the remaining parts. Assembler, visualization, exchange of DofType in the BlockMatrix/Vector classes, constraints, ConvertToInterpolationType(), node selection, Groups, class Mesh?, IGA, boundary cells for neumann BCs, ...

joergfunger commented 7 years ago

The main problem I currently see is that we implement everything directly in the code - there is no real distinction between what is a core feature of NuTo and what is my specific implementation (this could be anything from FETI, contact, creep, up to cycle jump). IMO, the core features should reduce to the assembly of any matrices (not just specific hessians), the solution, result visualization and restart options. On the next level are auxiliary functions such as time integration schemes and element formulations, and on the last level are the integrands (I would still have a basic list directly in NuTo, but decoupled from the other features).

I'm worried that with the current approach the amount of code drastically increases, which will make it very difficult to introduce changes (because by then the number of test files is already enormous). So besides the introduction of flexible dof types and the decoupling of PDE and integration, I would like to restructure the code so as to separate core features from the rest. The operator concept, with the cell being a template of an integrand (not of each specific integrand, but of a class of integrands that share common methods, such as time dependent integrands with gradient, hessian, ...), allows to really separate the assembly of any type of integrand and the definition of operations on it, with NuTo only being used as a library - no code inside the core has to be changed.

The main argument against this concept was that the structure, as a container for the cell class, would have to be a template. I perfectly agree that this is not preferable (though we are currently discussing changes of the structure to remove its god-like character). So here is another suggestion that has different advantages (in addition to the NuTo-as-a-lib option): what about separating the finite element (that is, the current cell with the interpolation functions and the nodal dof values) from the integration cell (that is, the current cell with only the integration and the integrand)? The integration cell stores a reference to the finite element (the interpolation definition). Basically, the integration cell only consists of the history data and the information on how to extract global matrices from that. The container for the integration cells with the integrands can now be a template, but it only includes a single list of integration cells (that all share specific methods). The rest no longer has to be a template at all. In addition to avoiding the template for the structure, this has more advantages

TTitscher commented 6 years ago

I really like the idea of splitting NuTo into e.g. libNuToCore, libNuToLaws, libNuToTimeIntegration, libNuToCells, ... This means that you can also write libCustomCells, libCustomIntegrands, and so on, outside of NuTo. This is a major requirement. As we both showed in our PDE_Framework challenge, this is easily possible in many different ways. I just want to point out that this is not an exclusive feature of the "operator concept with the cell being a template of an integrand"; other approaches can also fulfill this requirement.

And I like the PDE_Element. This seems like a useful collection of the interpolation functions and makes passing these elements around less annoying.

However, I do not understand the templates. You remove the template from the structure by removing the cell container from the structure. Ok, this could have been done without the PDE_Element. Where are the cells stored now? Just a class CellContainer<IntegrandSomething> that lives in my main scope? You suggest that this cell container stores the history data. How does it know which history data to store? Say I combine linear elastic cells, Mises plasticity cells (4 IPs) and damage model cells (1 IP). What could the storage format look like?

The arguments against the structure template (or the template anywhere else) were IMO not general things like compile time or personal preference.


I just want to add that we should get used to the thought of not getting it right in the first attempt anyway. So I still suggest moving fast towards a very simple (linear elastic) version including assembly, global solve and visualization. This is mainly independent of this data transfer discussion (since linear elasticity has no data/options).

joergfunger commented 6 years ago

I'm not insisting on the proposed idea of the cell as a template, even though I'm still convinced that the approach has some advantages (and also disadvantages) compared to other approaches.

-"The other argument against the template was the unnecessary complexity. [...]Why do we need more flexibility? IMO, this seems to be already a problem with the current implementation that combines cycle jump and standard time dependent problems. If we are not passing the global information such as time and delta time via shared ptrs but directly as an option, we would require to implement on the cell/structural level methods such as ElasticGradient(), Gradient(t, delta_t), Hessian_0 ... Even more important, we would have to have a base class for all integrands to implement all possible interfaces as virtual functions and then overload this function for specific integrands. This does not allow an extension with different user-defined methods without changing the Integrand base, thus requiring to recompile NuTo as a whole. In the template/operator concept, there is no need to change anything in the NuTo-lib - just define your own integrand interface and a corresponding operator, and you can use the functionality of the lib. As an example, the cycle jump approach can be implemented without any change in the NuTo core (see the template branch in the separate PDE-git, where the cycle jump approach is purely implemented in the application folder with a main file (plus an auxiliary header file to separate the cycle jump definitions from main.cpp)

-"Lastly, Cell makes testing significantly harder - compared to CellInterface. The latter one can be mocked, e.g. for the unit test of the visualizer, you only provide a dummy return value of the CellInterface::SomethingWithVisualize() method. Or for the Assember test, you provide values for ::DofNumbering() and ::Hessian0(). Everything is nicely decoupled. For Cell you have to fully include the class and fully include an integrand." The cell container does only include a very limited set of methods. Furthermore, the cell is not of the type integrand, but integrand interface. So mocking this class by proving methods such as Hessian_0, and then making the cell a template of this class should be possible, even though I'm not an expert with this. If this is not possible with the standard, there is certainly the option of defining your own test class that is derived from the integrandinterface and just returns fixed values. (The integrand interface for the time dependent problem just provides four methods to be mocked, the ones for the Hessians can even be tested together, since there is no method Hessian_0, Hessian_1, Hessian_2 but only a single Integrate method that returns a matrix (another that returns a vector, and another that returns a scalar).

Just to conclude, I agree to move forward and start the implementation. We will certainly come across additional problems as soon as we implement the whole chain from the integrand up to the final assembly and solution. However, I would suggest having an idea of how this can be implemented without changing the whole concept - at least for the current requirements of combining standard time dependent problems and cycle jump/IMPLEX.
However, if we do not agree on a design pattern, I would agree on starting to implement the PDE concept only for the time dependent problem, with gradient and hessian0..2 as the only methods defined on the element/cell container level. At least for this standard case, I'm optimistic that we agree on the design. I also want to highlight that I won't be the one working with NuTo on a daily basis - if most daily users favor another approach, we should go with that.

TTitscher commented 6 years ago

As for the mix of the time integration schemes: I have not thought of deriving the integrand from two integrand interfaces. This would be possible. But then I would have to copy, say, NuTo::IntegrandLib::MomentumBalance to MyExamples::ModifiedMomentumBalance and change the parent classes. [edit: Again, not thought through. There is no need to copy; the new ModifiedMomentumBalance can simply inherit from IntegrandLib::MomentumBalance. See the sketch below.]
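
A minimal illustration of the point in the [edit], with stand-in classes (the real NuTo::IntegrandLib types and the cycle jump interface are assumptions here):

#include <Eigen/Dense>

using Vector = Eigen::VectorXd;

// stand-in for the library integrand
struct MomentumBalance
{
    virtual ~MomentumBalance() = default;
    virtual Vector Gradient() { return Vector::Zero(3); }
};

// stand-in for a second, user-defined integrand interface
struct CycleJumpIntegrandInterface
{
    virtual ~CycleJumpIntegrandInterface() = default;
    virtual Vector ExtrapolatedGradient(int numCycles) = 0;
};

// no copy needed: inherit the library integrand and additionally
// implement the user-defined interface
struct ModifiedMomentumBalance : MomentumBalance, CycleJumpIntegrandInterface
{
    Vector ExtrapolatedGradient(int numCycles) override
    {
        return static_cast<double>(numCycles) * Gradient(); // placeholder combination
    }
};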

> As an example, the cycle jump approach can be implemented without any change in the NuTo core (see the template branch in the separate PDE git, where the cycle jump approach is implemented purely in the application folder with a main file, plus an auxiliary header file to separate the cycle jump definitions from main.cpp).

In the "titscher" branch, I did exactly the same. No problem. IMO there is no need for an additional "elastic gradient" or "elastic hessian". CycleJump/Impl-Ex modifies the evolution equation and I am solving a different problem. And with respect to the modified evolution equation, I'd still call it the valid "gradient" and "hessian". This is what I meant with "Why do we need more flexibility?". As for the implementation, I introduced a class EvolutionEquation. E.g. EvolutionImplicit() for newmark, EvolutionExplicit() for rk, EvolutionExtrapolation(numCycles) for cycle jump. This object also contained all the history data. Maybe I think about that again.

Yep, and let us continue for now. I mean, 95% (98%?) are Newmark :ghost:

joergfunger commented 6 years ago

Maybe the question is more obvious with the integrate-matrix function that might return stiffness, damping, mass, elastic stiffness (for cycle jump) or the full hessian in Newmark (K + f(beta, gamma)/t^2 M). In the proposed template approach, there is only a single function IntegrateSparseMatrix, and the options (e.g. beta, gamma) can be passed via the operator (see the sketch below). If someone wants to implement another stiffness or mass (lumped or full), the only things to be changed are the operator and an additional integrand interface that provides this functionality. In the "titscher" branch, this selection of different integrands is realized via options that are transferred to the integrand via shared references. I would prefer avoiding this - even though we do it often (such as in node merge) - because this calling pattern makes it IMO more difficult to understand what is going on. However, if this is the option favored by most developers, we should do it. Newmark is currently about 75% (10 developers, 2 using explicit schemes, 0.5 cycle jump and IMPLEX, but three PGD projects are going to be started) :smirk:
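
To make "options via the operator" concrete, a hedged sketch (hypothetical names, not the actual NuTo interface): the Newmark coefficients live in the operator object, and the cell only ever sees a single integrate call.

#include <Eigen/Dense>

using Matrix = Eigen::MatrixXd;

// hypothetical integrand interface providing the individual matrices
struct TimeDependentIntegrandInterface
{
    virtual ~TimeDependentIntegrandInterface() = default;
    virtual Matrix Hessian0() = 0; // stiffness K
    virtual Matrix Hessian1() = 0; // damping C
    virtual Matrix Hessian2() = 0; // mass M
};

// the operator carries the time integration options and combines the
// matrices into the full hessian K + gamma/(beta dt) C + 1/(beta dt^2) M
struct FullHessianNewmark
{
    double mBeta, mGamma, mDeltaT;

    Matrix operator()(TimeDependentIntegrandInterface& integrand) const
    {
        return integrand.Hessian0()
               + mGamma / (mBeta * mDeltaT) * integrand.Hessian1()
               + 1. / (mBeta * mDeltaT * mDeltaT) * integrand.Hessian2();
    }
};

Exchanging the time integration scheme then only means exchanging the operator; the integrand interface stays untouched.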

TTitscher commented 6 years ago

By now, I realize that many people are not happy with my PDE concept. Especially the pattern with hidden methods like

Vector Gradient(...) override
{
    if (mOutsideChangingReference)
        return GradientCaseA(mOutsideChangingOptionA);
    else
        return GradientCaseB(mOutsideChangingOptionC, mOutsideChangingTime);
}

If we require a variable interface for the gradients (and avoid my proposal), IMO the operator concept is the only option. As an alternative to the template approach, we can use a dynamic one, basically replacing the static polymorphism (templates) with dynamic polymorphism (casts).

static:

template <typename TIntegrandInterface>
struct Cell
{
    Vector Integrate(Operator<TIntegrandInterface>& op)
    {
        return op(mIntegrand, ....);
    }

    OwningContainer<TIntegrandInterface> mIntegrands;
};

template <typename TIntegrandInterface>
struct Gradient : Operator<TIntegrandInterface>
{
    Vector operator()(TIntegrandInterface& integrand, ...)
    {
        return integrand.SpecialCustomGradient(....);
    }
};

dynamic:

struct Cell
{
    Vector Integrate(Operator& op)
    {
        return op(mIntegrand, ....);
    }

    OwningContainer<boost::any> mIntegrands;
};

struct Gradient : Operator
{
    Vector operator()(boost::any& integrand, ...)
    {
        auto& actualIntegrand = boost::any_cast<IntegrandInterface&>(integrand);
        return actualIntegrand.SpecialCustomGradient(....);
    }
};

boost::any is a neat implementation of a polymorphic type. This avoids having IntegrandInterfaceBase + custom casts.

The dynamic implementation allows us to have the usual .cpp files and an actual libNuTo. AFAIK, @joergfunger is also happy with the dynamic approach. In the unlikely case of performance problems caused by this cast, we could (after careful measurement!) still go back to the static/template variant.

Psirus commented 6 years ago

I'm in favour of that! However, why do we need `auto& actualIntegrand = boost::any_cast<IntegrandInterface&>(integrand);` instead of `auto& actualIntegrand = boost::any_cast<MyConcreteIntegrand&>(integrand);`, i.e. do we need an IntegrandBase at all?

TTitscher commented 6 years ago

Yep, you are right. We do not need an IntegrandBase as a base class of all integrands (thanks to boost::any).

But if you want to combine multiple PDEs, you have to define multiple integrands with the same interface in order to control them with the same operators. So it is certainly advantageous to provide an IntegrandTimeDependentInterface with the common methods Gradient, Hessian, .... Then you can derive an IntegrandMomentumBalance or an IntegrandHeatTransfer from that interface and control all of them with the same operators.

And each new IntegrandSuperCustomInterface requires a new set of operators.
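
A hedged sketch of that layering (all names hypothetical): one operator, written once against the common interface, drives both physics integrands.

#include <Eigen/Dense>

using Vector = Eigen::VectorXd;

struct IntegrandTimeDependentInterface
{
    virtual ~IntegrandTimeDependentInterface() = default;
    virtual Vector Gradient() = 0;
};

struct IntegrandMomentumBalance : IntegrandTimeDependentInterface
{
    Vector Gradient() override { return Vector::Zero(3); } // B^T sigma in a real code
};

struct IntegrandHeatTransfer : IntegrandTimeDependentInterface
{
    Vector Gradient() override { return Vector::Zero(3); } // B^T q in a real code
};

// one gradient operator controls every integrand derived from the interface
struct GradientOperator
{
    Vector operator()(IntegrandTimeDependentInterface& integrand) const
    {
        return integrand.Gradient();
    }
};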

joergfunger commented 6 years ago

I'm also in favor of this concept.

Psirus commented 6 years ago

I think your snippet

struct Gradient : Operator
{
    Vector operator()(boost::any& integrand, ...)
    {
        auto& actualIntegrand = boost::any_cast<IntegrandInterface&>(integrand);
        return actualIntegrand.SpecialCustomGradient(....);
    }
};

really is all the code required for an operator, so I would just write that in my main file and have no predefined operators in NuTo (except for testing). I'd rather have the "real" integrand type in the operator, and no need for these base classes, than being able to reuse these two lines of code here.

joergfunger commented 6 years ago

How would you then define the standard operators that are used in Newmark, such as Gradient? And you would have to write an operator for both momentum balance and thermal equilibrium, although they have essentially the same interface for time dependent problems (gradient, hessian_0..2).

Psirus commented 6 years ago
1. I'd probably do something like this

   struct Gradient : VectorOperator
   {
       Vector operator()(boost::any& integrand, ...) { /* see above */ }
   };

   Gradient gradientOp;
   newmark.SetGradientOperator(gradientOp);

   i.e. there is no "standard operator".

2. I think this is an advantage of my approach: you could name your methods in your `PDE/Integrand` something more like `InternalForces`, `StiffnessMatrix`, `ConductivityMatrix`, ..., and then have the operator do the mapping between the time integration's "need a hessian 1" and the momentum balance's "have a damping matrix". And if someone doesn't need a parameter, or wants to change the order for their integrand, they can do that in the operator.

joergfunger commented 6 years ago

But how do you handle two different integrands (say one for the external load and one for the internal load in momentum balance) that share the gradient interface (and the Hessian as well ...)?

TTitscher commented 6 years ago

The IntegrandInterface thing is handy if you want to combine different materials. Some cells have Integrand0 and some have Integrand1. If there is no common interface, you cannot define an operator that works on both Integrand0 and Integrand1. Therefore, you need a common base class.

Psirus commented 6 years ago

It's dynamic. With boost::any, I can do `if (integrand.type() == typeid(MomentumBalance)) ...`.
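
A small self-contained sketch of that typeid-based dispatch (the integrand types and method names are made up for illustration; scalars stand in for the matrix/vector results):

#include <boost/any.hpp>
#include <stdexcept>
#include <typeinfo>

struct MomentumBalance { double InternalForces() const { return 0.; } };
struct HeatTransfer    { double HeatFlux() const { return 0.; } };

// the operator checks the concrete stored type and casts to it directly,
// so no common base class is needed
double GradientOf(boost::any& integrand)
{
    if (integrand.type() == typeid(MomentumBalance))
        return boost::any_cast<MomentumBalance&>(integrand).InternalForces();
    if (integrand.type() == typeid(HeatTransfer))
        return boost::any_cast<HeatTransfer&>(integrand).HeatFlux();
    throw std::runtime_error("unknown integrand type");
}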

joergfunger commented 6 years ago

But if someone (and this is probably the standard case when NuTo is used as a lib) just wants to write their own time dependent integrand, then there is IMO no need to rewrite the operators (nor to change anything in the time integration scheme) with a special gradient operator - you only derive your integrand from IntegrandTimeDependentInterface. In the above suggestion, I would have to redefine a gradient operator.

Psirus commented 6 years ago

I don't want to change anything in the time integration scheme. You need to define a gradient operator, but not "redefine" one, and you have to do that either way; the question is whether we want the user to define it or the library.

And I wouldn't have to derive my integrand from anything.

joergfunger commented 6 years ago

What I meant by changing the time integration scheme is the definition of a special gradient operator (in the initial idea, the gradient operator is directly linked to the Newmark scheme and implicitly defined through the library), which then does not have to be stored. Also, the user knows what interface has to be written to use a custom standard "time-dependent" integrand (or any other type that is developed later, such as cycle jump). And what is the advantage of avoiding the virtual interface of the time dependent integrands and replacing it with a switch based on the typeid?

Psirus commented 6 years ago

And this is the part I don't like. You "implicitly define" the behaviour and interface of my integrand based on the needs of your time integration scheme. I think the operators are a great point to decouple the "weak form" and the "time integration" from each other.

With respect to the user: if you're not used to old NuTo, I would find

just as difficult or "weird".

However, if everyone wants this, go ahead.

joergfunger commented 6 years ago

The interface of myIntegrand is only implicitly defined if I want to use it with the predefined time integration schemes in the lib. If you want to use your own solution scheme, you are free to define any operators/interfaces for the integrand. But in case you want to use the already implemented schemes, you should use their interfaces. I don't fully see the difference between defining an arbitrary integrand interface and then coupling this arbitrary interface via the definition of a gradient operator (that calls a special function of this integrand and interprets it as a gradient), compared to directly implementing a gradient function. You are also free to define a function ForceBalance in your integrand and just call this function from the gradient function.

Psirus commented 6 years ago

"you should use their interfaces" - exactly, and I think the interface of Newmark should be VectorOperator and MatrixOperator. IntegrandBase is imho one step too far. If I'm restricted to IntegrandBaseForNewmark, then I might as well drop the operators altogether (yes, I know that is not literally true).

Oh and the difference is: no IntegrandBase, no Hessian1 function when you would rather call it DampingMatrix, no Gradient(t, deltat) when InternalForces() would suffice.

TTitscher commented 6 years ago

I'm glad that we all like this approach and can finally move on. As for your discussion, the good thing is that it is independent of any interface and does not influence the code design in the near future (I mean, there is a lot of work to do until we touch Newmark)... I'll try to implement the stuff soon.

@Psirus I think I understand that you like the freedom to use an arbitrary integrand with Newmark, without having to implement a specific interface. I personally would prefer a common interface, since I really do not like writing these operators (lots of boilerplate, duplicate options, ...) and would rather use predefined ones. We may allow both approaches by having a default gradient/hessian operator in Newmark that you are allowed to override.

Anyway, how can you pass options from Newmark (I mean from inside Newmark) to these custom operators? Say you want to define DampingMatrix(deltaT). How is Newmark able to pass deltaT in there? And how does it know that it should be deltaT and not globalT?

Psirus commented 6 years ago

As I said, if you all like your predefined operators, go ahead.

The interface of the operator would have to be the same (it does derive from MatrixOperator), so you would have

struct Damping : MatrixOperator
{
    Matrix operator()(boost::any& integrand, double t, double deltaT)
    {
        auto& momBalance = boost::any_cast<MomentumBalance&>(integrand);
        return momBalance.DampingMatrix(deltaT);
    }
};

> We may allow both approaches by having a default gradient/hessian operator in Newmark that you are allowed to override.

Sounds good.

TTitscher commented 6 years ago

Maybe I should provide a working implementation first, before we go into detail with these discussions. There is no t and deltaT in MatrixOperator. The options are custom and have to be defined in the derived operators. Think of the numCycles option - this should not be in MatrixOperator.

So the operator, as I think of it, is more like

struct Damping : MatrixOperator
{
    Damping(double deltaT) : mDeltaT(deltaT) {}
    Matrix operator()(boost::any& integrand /*, cell data..., cell ip data... */)
    {
        auto& momBalance = boost::any_cast<MomentumBalance&>(integrand);
        return momBalance.DampingMatrix(mDeltaT);
    }
    double mDeltaT;
};

And this makes it hard to set deltaT in Newmark.
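
One conceivable way out - purely speculative, not necessarily what was later done in the PDE branch - is for Newmark to own the operator and refresh its option in place before each evaluation (scalars stand in for the matrix results):

#include <boost/any.hpp>

struct MomentumBalance
{
    double DampingMatrix(double deltaT) const { return deltaT; } // placeholder
};

struct Damping
{
    double mDeltaT = 0.;
    double operator()(boost::any& integrand) const
    {
        return boost::any_cast<MomentumBalance&>(integrand).DampingMatrix(mDeltaT);
    }
};

struct Newmark
{
    Damping mDampingOp;

    void Step(boost::any& integrand, double deltaT)
    {
        mDampingOp.mDeltaT = deltaT; // Newmark updates the option in place
        double c = mDampingOp(integrand);
        (void)c; // ... assemble, solve, update dofs ...
    }
};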

Psirus commented 6 years ago

Solved in PDE.