czgdp1807 opened this issue 10 months ago
cc: @certik
xtensor provides an API similar to NumPy, where `reshape` changes the shape of the passed array. xtensor is missing a free function `reshape`, but if we add it, it should forward the call to the `reshape` method of its argument.

For creating a new array of a given shape, you can use the `empty` free function, which has fewer restrictions than the `reshape` function in Fortran.
Actually I am trying to port https://github.com/lfortran/lfortran/blob/main/integration_tests/arrays_reshape_14.f90 to C++ using `xt::xtensor` APIs. For transforming the following line to C++, I want this to work:

```cpp
// xt::xtensor_fixed<double, xt::xshape<256>> b
// xt::xtensor<double, 2>& a
// xt::xtensor_fixed<int, xt::xshape<1>> newshape
b = xt::reshape(a, newshape); // would be great to have an xt::reshape which can work for this case
```

I am using `xtensor` and `xtensor_fixed` because they map well to Fortran types. I might be wrong, but AFAIK `xarray` doesn't store any information related to the rank of the array, whereas Fortran specifies the rank of an array in its type itself (analogous to `xtensor`).

I am open to ideas and different approaches for this problem.
> xtensor is missing a free function `reshape`, but if we add it, it should forward the call to the `reshape` method of its argument.
I see. The return type of `xt::reshape` would depend on the size of its second argument (a fixed-size 1-D array).
> I might be wrong but AFAIK, xarray doesn't store any information related to the rank of the array but Fortran specifies rank of the array in its type itself (analogous to xtensor).
`xarray` provides the rank information at runtime only, while `xtensor` and `xtensor_fixed` can provide it at build time.
> I am open to ideas and different approaches for this problem.
I think the implementation could be something like:

```cpp
template <class E, class S>
auto copy_and_reshape(E&& e, const S& shape)
{
    using value_type = typename std::decay_t<E>::value_type;
    auto res = xt::empty<value_type>(shape);
    std::copy(e.cbegin(), e.cend(), res.begin());
    return res;
}
```
We can probably find a better name, but it should definitely not be `reshape`, to avoid confusion with the existing reshape feature.
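To make the copy semantics concrete without pulling in xtensor, here is a minimal stdlib-only model of the same pattern (my sketch, not thread code; `FlatTensor` and `copy_and_reshape_model` are hypothetical names): allocate a fresh container for the target shape and copy the source elements into it, leaving the source untouched.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <numeric>
#include <vector>

// Hypothetical stand-in for an xtensor container: flat storage plus a
// runtime shape vector. Just enough to model the copy semantics.
struct FlatTensor {
    std::vector<std::size_t> shape;
    std::vector<double> data;
};

// Model of the proposed copy_and_reshape: build a fresh container for
// `shape` and copy the source elements into it, leaving the source
// untouched (unlike xtensor's in-place .reshape() member).
FlatTensor copy_and_reshape_model(const FlatTensor& src,
                                  std::vector<std::size_t> shape) {
    const std::size_t n = std::accumulate(shape.begin(), shape.end(),
                                          std::size_t{1},
                                          std::multiplies<std::size_t>{});
    assert(n == src.data.size() && "reshape must preserve the element count");
    (void)n;
    return FlatTensor{std::move(shape), src.data};
}
```

For instance, reshaping a 16x16 source into 16x8x2 this way leaves the original's rank and data intact; only the total element count has to match.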
> > xtensor is missing a free function `reshape`, but if we add it, it should forward the call to the `reshape` method of its argument.
>
> I see. The return type of `xt::reshape` would depend on the size of its second argument (a fixed-size 1-D array).
Not sure I got you; what I meant is that `xt::reshape(tensor, shape)` should call `tensor.reshape(shape)`, which reshapes `tensor` in place (therefore the return type would be `void`). Sorry if my first message was not clear.
> Not sure I got you; what I meant is that `xt::reshape(tensor, shape)` should call `tensor.reshape(shape)`, which reshapes `tensor` in place (therefore the return type would be `void`). Sorry if my first message was not clear.
For example, if we want to reshape `xt::xtensor<double, 2> a` (say of shape `[16, 16]`) to `xt::xtensor<double, 1> b` (say of shape `[256]`), then `xt::reshape(a, {256})` should return an object of type `xt::xtensor<double, 1>`. However, if we want to reshape `a` to `xt::xtensor<double, 3> c` (say of shape `[16, 8, 2]`), then `xt::reshape(a, {16, 8, 2})` should return an `xt::xtensor<double, 3>`. Basically, the return type depends on the size of the `shape` argument: if `shape` is of size `n`, then the returned `xt::xtensor` would be of rank `n`. Even if we are able to define the signature of `xt::reshape`, implementing it might be tricky. In Fortran, `reshape` is an intrinsic, and it is up to the Fortran compiler how it implements this feature, depending on the way it handles arrays internally.
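The "rank follows the shape's size" idea can be modeled in plain C++ without xtensor (my sketch; `TensorModel` and `reshape_model` are hypothetical names): when the shape argument's length is a compile-time constant, the result's rank can be encoded in the return type even though the extents themselves are runtime values.

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Hypothetical rank-templated container, mirroring how xt::xtensor
// encodes its rank N in the type while the extents stay runtime values.
template <std::size_t N>
struct TensorModel {
    std::array<std::size_t, N> shape;
    std::vector<double> data;
};

// The rank of the result is deduced from the compile-time length N of
// the shape argument: reshaping with a 3-element shape yields a
// TensorModel<3>, with a 1-element shape a TensorModel<1>, and so on.
template <std::size_t M, std::size_t N>
TensorModel<N> reshape_model(const TensorModel<M>& src,
                             const std::array<std::size_t, N>& shape) {
    return TensorModel<N>{shape, src.data};
}
```

This is only a model of the requested signature; it sidesteps the layout questions a real `xt::reshape` would have to answer.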
> I think the implementation could be something like:
I see. Is this implementation doing the same thing as the `reshape` feature of Fortran, as I described in my comment above? It seems like it creates an object `res` with the given `shape`, and then the contents of the input `e` are copied into it? If so, then it makes sense to me.
> Basically the return type depends on the size of shape argument. If shape is of size n then returned xt::xtensor would be of rank n
Ha, I see; indeed `xtensor` does not allow that: since the reshape is done in place, and since the rank of an `xtensor` object is known at build time, `reshape` must preserve the rank.
> Seems like it's creating an object res with shape and then contents of input e are copied to it? If so then it is making sense to me.
Yes, that's exactly what it is doing. Also notice that the values of the new tensor are not initialized before the copy; this avoids iterating over the whole buffer to set it to zero everywhere.
@JohanMabille thanks!

Just so that I understand: the code in https://github.com/xtensor-stack/xtensor/issues/2760#issuecomment-1875978814 is currently the only way to take a 16x16 `xtensor` and reshape it into a new 16x8x2 `xtensor`?
The current facility is `xtensor::reshape`, but that must keep the number of dimensions the same (and the number of elements, of course), so it can change 16x16 to 8x32, but it can't change it to 16x8x2, correct?
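As a quick sanity check on the dimension arithmetic (my addition, not from the thread): all three shapes mentioned here hold the same 256 elements, which is the one invariant any reshape must preserve; the in-place member function additionally preserves the rank.

```cpp
#include <cstddef>
#include <functional>
#include <initializer_list>
#include <numeric>

// Total element count of a shape; every reshape, whether in place or
// by copy, must keep this number unchanged (16*16 == 8*32 == 16*8*2).
std::size_t element_count(std::initializer_list<std::size_t> shape) {
    return std::accumulate(shape.begin(), shape.end(), std::size_t{1},
                           std::multiplies<std::size_t>{});
}
```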
@certik That's it. This limitation on `xtensor::reshape` exists because it operates in place, like in NumPy. I think we don't want to mix "copy" and "in-place" behaviors in the same function (at least not implicitly), so an additional free function (although the name `copy_and_reshape` might not be ideal) looks like a good option to me. It would also work for any kind of expression (even those which are not evaluated).

EDIT: maybe `eval_reshaped` could be a good name?
Great to hear. It would be great to have this functionality in the `xtensor` library.
> EDIT: maybe `eval_reshaped` could be a good name?

Sounds good to me.
I think `xtensor` has no intrinsic like `reshape`:

- The `reshape` intrinsic in Fortran doesn't modify the input array but creates a new array with the shape specified in its second argument. It has no restriction that the new array must have the same rank as the input array; the only restriction is that the shape argument must be of fixed size.
- `xtensor` has a method called `.reshape`, but it modifies the array/tensor itself, so it doesn't behave like `reshape` in Fortran. It also has the restriction that the new shape must have the same number of dimensions as the rank of the input array (obviously, because it modifies the input array in place).
- `xtensor` has `xt::reshape_view`, which comes close to `reshape`, but it only makes a shallow copy and doesn't work like `reshape` at all; if you try to use it like the `reshape` intrinsic, it won't compile with clang/gcc.

So would it be possible to add an intrinsic like `xt::reshape` which accepts two arguments: first, an `xt::xtensor`, `xt::xtensor_fixed`, or `xt::xarray` (the input array); second, an `xt::xtensor_fixed` or `{...}` (an array constant in C++); and returns an `xt::xtensor` object?
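One detail of this proposal worth sketching (my speculation, not xtensor API): for the `{...}` form to produce a rank-`n` result type, the free function must recover the list's length at compile time. A built-in array reference parameter does exactly that; `reshape_sketch` below is a hypothetical name.

```cpp
#include <cstddef>

// Sketch only: deduce the rank N at compile time from a braced shape
// list, so reshape_sketch(a, {16, 8, 2}) instantiates with N == 3.
// A real implementation would build and return a rank-N tensor; here
// we just surface the deduced rank.
template <class Tensor, std::size_t N>
constexpr std::size_t reshape_sketch(const Tensor& /*input*/,
                                     const std::size_t (&shape)[N]) {
    (void)shape;
    return N;
}
```

An actual `xt::reshape` could then use the deduced `N` as the rank of the returned `xt::xtensor`, while the extents in the list stay runtime values.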