cstratopoulos opened 5 years ago
Hi,
Sorry for the late answer.
> Maybe others would find such functionality useful?
Definitely. The more NumPy API coverage the better! Do you want to open a PR? I think this function can be implemented in `xaxis_iterator.hpp` and be named `apply_along_axis`, as in NumPy, to avoid future backward incompatibility. The first implementation can throw when the `axis` parameter is not the first one.
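Something along these lines could be a starting point (a rough eager sketch, assuming a 2-D input and a scalar-valued callable; this is not the final design, just an illustration of the throwing behavior):

```cpp
#include <cstddef>
#include <stdexcept>
#include <type_traits>
#include <xtensor/xtensor.hpp>
#include <xtensor/xview.hpp>

// Eager sketch: 2-D input, axis 0 only, scalar-valued callable.
// NumPy semantics: for axis == 0 the 1-D slices are the columns.
template <class F, class E>
auto apply_along_axis(F&& f, std::size_t axis, const E& e)
{
    if (axis != 0)
    {
        throw std::runtime_error("apply_along_axis: only axis == 0 is supported for now");
    }
    using result_t = std::invoke_result_t<F, decltype(xt::view(e, xt::all(), 0))>;
    auto out = xt::xtensor<result_t, 1>::from_shape({e.shape(1)});
    for (std::size_t j = 0; j < e.shape(1); ++j)
    {
        out(j) = f(xt::view(e, xt::all(), j));
    }
    return out;
}
```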
Also, would you like to tackle #472?
https://github.com/QuantStack/xtensor/issues/472 may be above my head at the moment.
This one I could maybe take on, but some discussion on scope/semantics might be helpful:
- Should this take an evaluation strategy, or always immediately evaluate and return a container?
- Input functions and return type computation: the super-simplified example I described above was for a scalar-valued function, but the NumPy interface specifies a `func1d` that may return either a scalar or an array. In this case it seems some care may be required to compute return types. I notice a similar dance is done, e.g., with `xreducer_result_container` and `xaccumulator_return_type`, but here we would have to deal with transformations that preserve, shrink, or expand dimension. I haven't put pen to paper yet (so to speak), but I'm wondering if this doesn't cause a combinatorial expansion of return types to deduce, or if there's existing machinery in the codebase that can help me here.
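To make the second point concrete, this is the kind of classification I have in mind (a toy sketch; `output_rank` is hypothetical, only `xt::is_xexpression` is the library's real trait):

```cpp
#include <cstddef>
#include <type_traits>
#include <xtensor/xexpression.hpp>

// Toy classification of the callable's result on a 1-D slice: scalar
// results shrink the output rank (reducer-like), expression results
// preserve it (1-D -> 1-D); rank-expanding results would be a third case.
template <class R, std::size_t InputRank>
constexpr std::size_t output_rank()
{
    if constexpr (xt::is_xexpression<std::decay_t<R>>::value)
    {
        return InputRank;       // e.g. a 1-D -> 1-D transformation
    }
    else
    {
        return InputRank - 1;   // scalar result: a reduction
    }
}
```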
I think we want to have a lazy expression (I know, this makes things much more complicated ;)). Regarding scope, I think we can go incrementally: first supporting reductions and 1D -> 1D functions (or even only reductions for now), and implementing other kinds of functions afterwards. Incremental (small) PRs will be easier and faster to integrate.
We don't support generalized ufuncs yet (vectorized functions returning a tensor with higher rank than their inputs), so it's not a problem if this feature doesn't either.
Regarding the number of types, at first sight there should not be that many; I think a 3x3 matrix: (reducer / broadcaster / xfunction) x (`xarray` / `xtensor` / `xtensor_fixed`).
I think this could be encapsulated in a dedicated expression class.
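As a bare-bones illustration of the shape of such a class (not wired into xtensor's CRTP expression machinery, which the real implementation would need):

```cpp
#include <cstddef>
#include <utility>
#include <xtensor/xview.hpp>

// Minimal lazy wrapper: store the callable and the underlying
// expression; evaluate a slice only when an element is accessed.
template <class F, class E>
class apply_along_axis_expr
{
public:
    apply_along_axis_expr(F f, E e)
        : m_f(std::move(f)), m_e(std::move(e))
    {
    }

    // Scalar-valued callable over the j-th column slice (axis 0 and
    // 2-D input assumed, as in the eager sketch above).
    auto operator()(std::size_t j) const
    {
        return m_f(xt::view(m_e, xt::all(), j));
    }

private:
    F m_f;
    E m_e;
};
```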
Thanks for the clarifications! I see the issue is simpler in some ways than I had thought, but also I hadn't considered the reducer/broadcaster components, or how a lazy implementation would look, etc.
At this point my use of xtensor is still somewhat constrained by how it comes up at work, so I don't quite have the free time to dive into such a PR any time soon. For now my contributions to xtensor will probably be limited to opening issues or baby PRs, but I appreciate the offer!
Relates to https://github.com/QuantStack/xtensor/issues/472, https://github.com/QuantStack/xtensor/pull/514
I've been using xtensor a bit lately and have had occasion to do map-reduce-esque transformations on matrices, hence finding myself wishing for something like NumPy's `apply_along_axis`.
For my super specific use case (involving only `xtensor_fixed<T, xshape<M, N>>`), the following approach, with `input` as the input matrix and `function` as the callable, has been adequate; a sketch is given below. Maybe others would find such functionality useful? Of course, the version which takes arbitrary axes depends on the issue/PR linked above.
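A minimal sketch in that spirit (assuming `function` is scalar-valued and applied row-wise; `apply_to_rows` is an illustrative name, and the details of my actual code may differ):

```cpp
#include <cstddef>
#include <type_traits>
#include <xtensor/xfixed.hpp>
#include <xtensor/xview.hpp>

// Apply a scalar-valued `function` to each row of a fixed-size matrix.
template <class F, class T, std::size_t M, std::size_t N>
auto apply_to_rows(F&& function, const xt::xtensor_fixed<T, xt::xshape<M, N>>& input)
{
    using result_t = std::invoke_result_t<F, decltype(xt::view(input, 0, xt::all()))>;
    xt::xtensor_fixed<result_t, xt::xshape<M>> out;
    for (std::size_t i = 0; i < M; ++i)
    {
        out(i) = function(xt::view(input, i, xt::all()));
    }
    return out;
}
```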
Also, if someone wouldn't mind, I'm wondering whether the approach above is sensible or idiomatic, or whether there's a means I've overlooked using `xt` functions. To give a sketch example, I'm implementing summations like `a_i + cos(b_i * c_i)` roughly as in the snippet below.
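A sketch of that composition, assuming `a`, `b`, and `c` are equal-length 1-D tensors:

```cpp
#include <xtensor/xmath.hpp>
#include <xtensor/xoperation.hpp>
#include <xtensor/xtensor.hpp>

// Compute sum_i(a_i + cos(b_i * c_i)) over 1-D inputs. The expression
// a + xt::cos(b * c) is lazy; xt::sum reduces it, and operator()()
// extracts the scalar value.
double summation(const xt::xtensor<double, 1>& a,
                 const xt::xtensor<double, 1>& b,
                 const xt::xtensor<double, 1>& c)
{
    return xt::sum(a + xt::cos(b * c))();
}
```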