TTitscher opened 7 years ago
I'm not sure this lazy-evaluation optimization really improves speed, given how we use the vectors/matrices. I agree, though, that writing our own interface for matrix/vector multiplications is suboptimal. Still, I don't see a nice solution, in particular for the sparse matrices. Even if we switch to `Eigen::SparseMatrix`, I would guess that accessing it via block routines is suboptimal (and after checking the documentation, I'm not sure block methods are available for sparse matrices anyway).
The `Block` types implement arithmetic operators without any state-of-the-art expression-template lazy evaluation. If we derive the `BlockVector` directly from `Eigen::VectorXd`, we only need to provide operators that access a certain `.segment` for a specific dof type. This may involve some index magic but sounds feasible. Maybe we can even benefit from the lazy evaluation provided by `Eigen::SparseMatrix`. But the block operations there are limited; read access is no problem, though.