faysou opened this issue 4 years ago
I also need to loop over nd-indices and linear indices using an iterator (closely related to ndenumerate).
@JohanMabille: would the following patch be acceptable for this task? I haven't found any other way yet...
--- include/xtensor/xiterator.hpp 2020-02-05 22:48:19.152093000 +0100
+++ include/xtensor/xiterator.hpp 2020-02-05 23:33:53.765354885 +0100
@@ -322,6 +322,9 @@
bool equal(const xiterator& rhs) const;
bool less_than(const xiterator& rhs) const;
+ index_type& index();
+ difference_type& linear_index();
+
private:
stepper_type m_st;
@@ -1103,6 +1106,18 @@
}
template <class St, class S, layout_type L>
+ inline auto xiterator<St, S, L>::linear_index() -> difference_type&
+ {
+ return m_linear_index;
+ }
+
+ template <class St, class S, layout_type L>
+ inline auto xiterator<St, S, L>::index() -> index_type&
+ {
+ return m_index;
+ }
+
+ template <class St, class S, layout_type L>
inline bool operator==(const xiterator<St, S, L>& lhs,
const xiterator<St, S, L>& rhs)
{
Here is a simple MWE (minimal working example), using a patched version of xtensor:
#include <iostream>
#include "xtensor/xarray.hpp"
#include "xtensor/xadapt.hpp"
#include "xtensor/xio.hpp"

int main()
{
    xt::xarray<double> a = xt::arange<double>(0, 1 * 2 * 3).reshape({1, 2, 3});
    a /= 2.;
    for (auto it = a.begin(a.shape()); it != a.end(a.shape()); ++it)
        std::cout << it.linear_index() << ' ' << xt::adapt(it.index()) << ' ' << *it << std::endl;
}
And the sample output:
0 {0, 0, 0} 0
1 {0, 0, 1} 0.5
2 {0, 0, 2} 1
3 {0, 1, 0} 1.5
4 {0, 1, 1} 2
5 {0, 1, 2} 2.5
@neok-m4700 that's more or less the idea: to reuse the implementation of xiterator. However, xiterator is bound to an expression, while the ndindex should be independent from any expression.
Splitting the current implementation into a pure index part and a pure stepper part might not be possible (the two are very intertwined), so an alternative would be to have an index function that instantiates a dummy stepper (like the one we have in xscalar).
Also, the equivalent of the ndindex class should be convertible to the index it stores (although a method to explicitly retrieve the stored index might be necessary).
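A minimal sketch of what such an expression-independent nd-index iterator could look like (the ndindex_iterator name and interface below are hypothetical, not part of xtensor):

#include <cstddef>
#include <iostream>
#include <vector>

// Hypothetical standalone nd-index iterator: walks all indices of a shape
// in row-major order without being bound to any expression.
class ndindex_iterator
{
public:
    using index_type = std::vector<std::size_t>;

    explicit ndindex_iterator(std::vector<std::size_t> shape)
        : m_shape(std::move(shape)), m_index(m_shape.size(), 0), m_linear_index(0)
    {
    }

    const index_type& index() const { return m_index; }
    std::size_t linear_index() const { return m_linear_index; }

    // Implicit conversion to the stored index, as suggested above.
    operator const index_type&() const { return m_index; }

    // Odometer-style increment: bump the last axis, carry over when it wraps.
    ndindex_iterator& operator++()
    {
        ++m_linear_index;
        for (std::size_t i = m_shape.size(); i-- > 0;)
        {
            if (++m_index[i] < m_shape[i])
            {
                return *this;
            }
            m_index[i] = 0;
        }
        m_done = true;  // wrapped past the last index
        return *this;
    }

    bool done() const { return m_done; }

private:
    std::vector<std::size_t> m_shape;
    index_type m_index;
    std::size_t m_linear_index;
    bool m_done = false;
};

int main()
{
    for (ndindex_iterator it({2, 3}); !it.done(); ++it)
    {
        std::cout << it.linear_index() << ": {";
        for (std::size_t d = 0; d < it.index().size(); ++d)
        {
            std::cout << it.index()[d] << (d + 1 < it.index().size() ? ", " : "");
        }
        std::cout << "}\n";
    }
}

The odometer-style increment keeps the nd-index and the linear index in sync without needing a stepper or an underlying expression.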
@JohanMabille could this be implemented as a function that returns an xarray? For example:
#include <vector>

#include "xtensor/xadapt.hpp"
#include "xtensor/xarray.hpp"
#include "xtensor/xbuilder.hpp"
#include "xtensor/xdynamic_view.hpp"
#include "xtensor/xmath.hpp"

// Row-major strides for a shape; negative extents produce negative strides.
std::vector<int> calcStrides(std::vector<int> shape)
{
std::vector<int> strides(shape.size());
int currentStride = 1;
for (int i = shape.size() - 1; i >= 0; i--)
{
if (0 < shape.at(i))
{
strides.at(i) = currentStride;
currentStride *= shape.at(i);
}
else
{
strides.at(i) = -currentStride;
currentStride *= -shape.at(i);
}
}
return strides;
}
// Offset of the element at index {0, ..., 0}, for shapes that use negative extents to mark reversed axes.
int calcOffset(std::vector<int> shape, std::vector<int> strides)
{
int offset = 0;
for (size_t i = 0; i < shape.size(); ++i)
{
if (shape.at(i) < 0)
{
offset += strides.at(i) * (shape.at(i) + 1);
}
}
return offset;
}
// Convert a linear index into per-axis subscripts for the given shape and strides.
std::vector<size_t> ind2sub(std::vector<int> shape, std::vector<int> strides, size_t ind)
{
std::vector<size_t> subs(shape.size());
for (size_t i = 0; i < shape.size(); ++i)
{
if (0 < strides.at(i))
{
subs.at(i) = ind / strides.at(i);
ind -= subs.at(i) * strides.at(i);
}
else
{
int absSub = ind / -strides.at(i);
subs.at(i) = -shape.at(i) - 1 - absSub;
ind -= absSub * -strides.at(i);
}
}
return subs;
}
// Build an array that stores the nd-index of every element along an extra last axis.
xt::xarray<size_t> ndIndex(std::vector<int> shape)
{
auto size = xt::prod(xt::adapt(shape, {shape.size()}))();
auto linearIndex = xt::linspace<size_t>(0, size - 1, size);
std::vector<int> extendedShape = shape;
extendedShape.push_back(shape.size());
xt::xarray<size_t> subScripts = xt::empty<size_t>(extendedShape);
auto strides = calcStrides(shape);
for (const auto i : linearIndex) {
auto indices = ind2sub(shape, strides, i);
xt::xdynamic_slice_vector dynamicSlice;
for (auto index : indices) {
dynamicSlice.push_back(index);
}
auto loc = xt::dynamic_view(subScripts, dynamicSlice);
loc = xt::adapt(indices, {indices.size()});
}
return subScripts;
}
// Convert per-axis subscripts back into a linear index.
size_t sub2ind(std::vector<int> strides, size_t offset, std::vector<size_t> subs)
{
size_t ind = offset;
for (size_t i = 0; i < strides.size(); ++i)
{
ind += strides.at(i) * subs.at(i);
}
return ind;
}
Additionally, ind2sub could easily be used as the internal algorithm of a lazily evaluated iterator that generates the index sequentially as the iterator is incremented.
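For reference, here is a small usage sketch of the helpers above (it assumes the functions are compiled together with the xtensor headers they rely on; the shape {2, 3} and the values in the comments are just an illustration):

#include <iostream>
#include "xtensor/xio.hpp"

int main()
{
    std::vector<int> shape = {2, 3};
    auto strides = calcStrides(shape);            // {3, 1}
    auto offset = calcOffset(shape, strides);     // 0, since no axis is reversed

    // Round-trip: linear index -> subscripts -> linear index.
    auto subs = ind2sub(shape, strides, 4);       // {1, 1}
    std::cout << xt::adapt(subs) << std::endl;    // {1, 1}
    std::cout << sub2ind(strides, offset, subs) << std::endl;  // 4

    // {2, 3, 2} array holding the nd-index of each element along the last axis.
    std::cout << ndIndex(shape) << std::endl;
}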
It would be useful to have an equivalent in xtensor of NumPy's ndindex: https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndindex.html
This makes it possible to iterate over all the nd-indices of a shape regardless of the number of dimensions.
A workaround to emulate this functionality is to loop a single integer over the total size of the shape and derive the nd-index from that integer and the shape's strides using modulo arithmetic.
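A minimal sketch of that workaround in plain C++ (row-major order assumed, no patched xtensor needed):

#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
    std::vector<std::size_t> shape = {2, 3, 4};

    // Row-major strides: the last axis has stride 1.
    std::vector<std::size_t> strides(shape.size(), 1);
    std::size_t size = 1;
    for (std::size_t i = shape.size(); i-- > 0;)
    {
        strides[i] = (i + 1 < shape.size()) ? strides[i + 1] * shape[i + 1] : 1;
        size *= shape[i];
    }

    // Loop a single integer over the total size and recover the nd-index
    // with division and modulo arithmetic.
    for (std::size_t flat = 0; flat < size; ++flat)
    {
        std::cout << flat << ": {";
        for (std::size_t d = 0; d < shape.size(); ++d)
        {
            std::cout << (flat / strides[d]) % shape[d]
                      << (d + 1 < shape.size() ? ", " : "");
        }
        std::cout << "}\n";
    }
}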