xtensor-stack / xtensor

C++ tensors with broadcasting and lazy computing
BSD 3-Clause "New" or "Revised" License

Propagate row_major / col_major for certain views #1023

Closed wolfv closed 6 years ago

wolfv commented 6 years ago

When "viewing" a row_major xcontainer like this:

xt::xtensor<double, 4> a;
auto v = xt::view(a, 1, 2, xt::all(), xt::all());

we can trivially propagate that the view is row-major as well. The assign loops will use this information to speed up assignments / copies into containers.
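A minimal sketch of such a propagation rule, with hypothetical slice tags (all_tag / index_tag) standing in for xtensor's real slice types and using std::conjunction (C++17); the rule: the view stays contiguous row-major when the slice list is a run of integral indices followed only by full slices.

#include <cstddef>
#include <type_traits>

// Hypothetical slice tags for this sketch -- not xtensor's real slice types.
struct all_tag {};
template <std::size_t I> struct index_tag {};

// A view of a row-major container stays contiguous row-major when the slice
// list is a run of integral indices followed only by all() slices.
template <class... Slices>
struct keeps_row_major;

template <>
struct keeps_row_major<> : std::true_type {};

template <class... Rest>
struct keeps_row_major<all_tag, Rest...>
    : std::conjunction<std::is_same<Rest, all_tag>...> {};

template <std::size_t I, class... Rest>
struct keeps_row_major<index_tag<I>, Rest...> : keeps_row_major<Rest...> {};

// view(a, 1, 2, all(), all())     -> row-major layout can be propagated
static_assert(keeps_row_major<index_tag<1>, index_tag<2>, all_tag, all_tag>::value, "");
// view(a, all(), 2, all(), all()) -> not contiguous, fall back to strided access
static_assert(!keeps_row_major<all_tag, index_tag<2>, all_tag, all_tag>::value, "");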

This came up during the perf runs for GooseFEM.

wolfv commented 6 years ago

This will become quite cool: the xview doesn't even have to own its shape & strides, because it just needs an offset into the original shape/strides.
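A rough illustration of that idea with a hypothetical struct rather than the real xview: for an integral-prefix / all()-suffix view of a row-major container, the view's shape and strides are just the tail of the parent's arrays, so pointers into them plus a start offset are enough.

#include <cstddef>

// Hypothetical type for illustration only -- not xtensor's xview.
struct tail_view
{
    const std::size_t* shape;       // parent shape + number of dropped dimensions
    const std::ptrdiff_t* strides;  // parent strides + number of dropped dimensions
    std::size_t dimension;          // parent dimension - number of dropped dimensions
    std::ptrdiff_t data_offset;     // sum of index * stride over the dropped dimensions
};

// e.g. for view(a, 1, 2, all(), all()) on a 4-D row-major tensor a:
//   shape       = a.shape().data() + 2
//   strides     = a.strides().data() + 2
//   dimension   = 2
//   data_offset = 1 * a.strides()[0] + 2 * a.strides()[1]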

wolfv commented 6 years ago

For the generic case we could also switch to something like a tuple-based shape, so that a list of slices like this:

all, newaxis, all, int

would become a tuple like this:

size_t& @ m_shape[0], integral_constant<size_t, 1>, size_t& @ m_shape[1], int

etc. But this would require a lot of refactoring, and we could not use the shapes the way we do right now with begin/end iterators etc.
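A small sketch of what that tuple could look like, using only the standard library (nothing here is existing xtensor code); std::make_tuple unwraps std::ref into a real size_t&, which gives exactly the element types listed above.

#include <array>
#include <cstddef>
#include <functional>
#include <tuple>
#include <type_traits>

int main()
{
    // shape of the underlying 3-D container
    std::array<std::size_t, 3> m_shape = {4, 5, 6};

    // slice list (all, newaxis, all, int) mapped to a heterogeneous shape tuple:
    // std::tuple<std::size_t&, std::integral_constant<std::size_t, 1>, std::size_t&, int>
    auto view_shape = std::make_tuple(
        std::ref(m_shape[0]),                      // all     -> size_t& @ m_shape[0]
        std::integral_constant<std::size_t, 1>{},  // newaxis -> integral_constant<size_t, 1>
        std::ref(m_shape[1]),                      // all     -> size_t& @ m_shape[1]
        3);                                        // int     -> plain integral index

    // a heterogeneous tuple cannot be walked with begin()/end() like the
    // current shape containers, which is the iterator problem mentioned above
    static_cast<void>(view_shape);
}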

wolfv commented 6 years ago

Just a quick note for anyone interested: it seems that our current apply metaprogramming mechanism generates function pointers, which are quite a bottleneck. I'll attempt to remove the apply mechanism from our data_offset computation for now.
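For context, a direct fold over a parameter pack is one way to compute an offset without going through generated function pointers; this is just a sketch under that assumption, not xtensor's actual data_offset implementation.

#include <array>
#include <cstddef>

// Fold index * stride directly via recursion over the parameter pack,
// so no function pointers are generated for the offset computation.
template <std::size_t N>
constexpr std::ptrdiff_t data_offset(const std::array<std::ptrdiff_t, N>&, std::size_t)
{
    return 0;
}

template <std::size_t N, class Arg, class... Args>
constexpr std::ptrdiff_t data_offset(const std::array<std::ptrdiff_t, N>& strides,
                                     std::size_t dim, Arg arg, Args... args)
{
    return static_cast<std::ptrdiff_t>(arg) * strides[dim]
         + data_offset(strides, dim + 1, args...);
}

// e.g. for a row-major 3x4 tensor, strides = {4, 1}:
// data_offset(strides, 0, 1, 2) == 1 * 4 + 2 * 1 == 6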

wolfv commented 6 years ago

Merged in #1039.