BAMresearch / fenics-constitutive

Complex constitutive models beyond the FEniCS UFL.
https://bamresearch.github.io/fenics-constitutive
MIT License

Fenicsx rewrite -- Eigen vs Xtensor #32

Closed srosenbu closed 5 months ago

srosenbu commented 1 year ago

FEniCSx allows working directly on the array of a Function object:

```python
v = dolfinx.fem.Function(some_function_space)
v.vector.array  # returns a numpy array that references the data directly -> no copy of the data
```

Therefore it would be beneficial to write C++ classes that change the arrays in place instead of copying them first. In the current implementation, up to three copies of an array can exist.
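The in-place pattern can be sketched with plain numpy (no dolfinx required; `buffer` stands in for the memory behind `v.vector.array`, and `evaluate_in_place` is a hypothetical material-law kernel):

```python
import numpy as np

# Hypothetical stand-in for the memory behind v.vector.array.
buffer = np.zeros(6)

def evaluate_in_place(values: np.ndarray) -> None:
    # A kernel that writes its result through the view -- no copies made.
    values *= 2.0

view = buffer[:]                 # a view on the same memory, not a copy
view[:] = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
evaluate_in_place(view)
print(buffer)                    # the owning buffer sees the update: [2. 4. 6. 8. 10. 12.]
```

Any C++ kernel that mutates such a view through a reference gives the same zero-copy behavior.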

In Xtensor, which is based on the numpy memory layout, it is very easy to work on numpy arrays that are passed by reference. However, Xtensor is still slower than Eigen; views on arrays in particular are slow. Xtensor is also already used in dolfinx.

In Eigen, we can pass a reference to a numpy array, but we are basically limited to vectors, since numpy is mostly row-major while Eigen defaults to column-major.
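The layout mismatch is easy to see from the numpy side alone (a sketch, no Eigen involved): converting a row-major 2-D array to the column-major layout Eigen expects by default forces a copy, whereas a 1-D vector is both layouts at once and can always be mapped without copying:

```python
import numpy as np

a = np.arange(6, dtype=float).reshape(2, 3)  # numpy default: row-major (C order)
f = np.asfortranarray(a)                     # column-major, as Eigen's default expects
f[0, 0] = 99.0
print(a[0, 0])   # still 0.0 -- the layout conversion had to copy the data

v = np.arange(6, dtype=float)                # a 1-D vector is C- and F-contiguous at once
print(v.flags["C_CONTIGUOUS"], v.flags["F_CONTIGUOUS"])  # True True
```

This is why passing 2-D numpy arrays into column-major Eigen code typically triggers a copy, while flat vectors do not.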

joergfunger commented 1 year ago

Eigen can be configured to use either row-major or column-major storage.

srosenbu commented 1 year ago

It can, but in the link you shared they also write:

> The default in Eigen is column-major. Naturally, most of the development and testing of the Eigen library is thus done with column-major matrices. This means that, even though we aim to support column-major and row-major storage orders transparently, the Eigen library may well work best with column-major matrices.

That does not fill me with confidence about actually using their row-major implementation. But most of the time we will be working on vectors anyway, so this is more of an annoyance than a real issue.

joergfunger commented 1 year ago

That is right, but in my experience it is still well tested and should not be a problem. In case there is one, Eigen is also still actively developed and we could send a bug report or create an issue.

srosenbu commented 1 year ago

Alright. I would still suggest that we run some tests on the speed difference, because Xtensor is already a dependency of dolfinx, and since it is similar to numpy, it may be easier to work with for someone who is used to Python.

joergfunger commented 1 year ago

I'm totally fine with both options, but there are already discussions and comparisons, e.g. https://github.com/xtensor-stack/xtensor/issues/2269

srosenbu commented 1 year ago

I just pushed an example that measures the speed difference for a very simple linear elastic material law that I implemented some weeks ago. WARNING: the code will only work on my machine, because I linked all libraries manually in the setup.py. test.py

IMO, Xtensor views are too slow to use for our purposes right now. For the example I implemented, the speedup compared to a loop in Python is not significant, whereas an implementation with Eigen and a row-major elasticity matrix is even faster than pure numpy operations. Below are the timings for the law evaluation for the different methods:

| method      | time (ms) |
|-------------|-----------|
| eigen       | 387.2965555638075 |
| xtensor     | 6234.527637250721 |
| python_loop | 23149.966636672616 |
| numpy       | 782.2382841259241 |
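For orientation, the "numpy" variant of such a benchmark can be sketched as follows (this is an assumed form of the operation, not the actual test.py: a constant 6x6 elasticity matrix applied to all quadrature-point strain vectors stored in one flat array):

```python
import numpy as np

n = 1000                          # number of quadrature points (arbitrary)
C = np.eye(6)                     # placeholder elasticity matrix
strain = np.random.rand(6 * n)    # flat storage, as in a dolfinx Function array

# One batched matmul over all points; the reshape is a view, not a copy.
stress = (strain.reshape(n, 6) @ C.T).ravel()
```

The Eigen and Xtensor variants would perform the same per-point matrix-vector product in C++, writing the result back through a reference to the flat array.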

Since passing by reference works fine for vectors and row-major matrices in Eigen, I would suggest that we continue to use Eigen.