ping @wolfv
I agree that the norms should be explicitly named. Beyond the inconsistency between the C++ standard and numpy, a generic `norm` function implies an arbitrary choice of which norm to use for its implementation, and that choice may not be intuitive.
It seems like a reasonable thing to do, and we can also consider splitting/renaming the functions in xtensor-blas. Given these functions, creating a numpy-like interface would be trivial and could be implemented in another header if we deem it necessary.
What about `norm(x, tag)` and using tag dispatching?
This would also work, but is IMHO inferior: a tag like `_l2` in `norm(x, _l2)` has no real advantage over an explicitly named `norm_l2(x)`. The tag dispatching can come on top of the explicitly named norms. Thus we can have some mechanism for generic programming, but in my opinion it should be complementary to the explicitly named norms, not replace them.
Question: It would be natural to implement norm functions by means of `xreducer`. However, the current `xreducer` design is not flexible enough. Keeping the basic structure of `aggregate()` intact, computations proceed in three steps, which in general require three different functors:
```
init(value_type) -> result_type
accumulate(result_type, value_type) -> result_type
merge(result_type, result_type) -> result_type
```
In the current implementation, step 1 is simply an assignment, and steps 2 and 3 share a single function `m_f` with signature `m_f(value_type, value_type) -> value_type` (current master) or `m_f(result_type, result_type) -> result_type` (PR #435). This simplification works for the currently implemented functions `amin()`, `amax()`, `sum()` and `prod()`, but not for norms (as becomes apparent when `value_type` is `std::complex`).
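A minimal sketch in plain C++ (not xtensor code, all names chosen for illustration) of why `std::complex` forces three distinct functors: here `value_type` is `std::complex<double>` but `result_type` is `double`, so a single `m_f(value_type, value_type) -> value_type` cannot express the reduction.

```cpp
#include <cmath>
#include <complex>
#include <iostream>
#include <vector>

int main()
{
    std::vector<std::complex<double>> v = {{3.0, 4.0}, {1.0, 0.0}};

    // step 1: init(value_type) -> result_type
    auto init = [](const std::complex<double>& x) -> double {
        return std::norm(x);                  // squared magnitude
    };
    // step 2: accumulate(result_type, value_type) -> result_type
    auto accumulate = [](double acc, const std::complex<double>& x) -> double {
        return acc + std::norm(x);
    };
    // step 3: merge(result_type, result_type) -> result_type
    auto merge = [](double a, double b) -> double {
        return a + b;                         // partial sums of squares add up
    };

    double acc = init(v[0]);                  // 25.0
    acc = accumulate(acc, v[1]);              // 26.0
    double total = merge(acc, 0.0);           // merge with an empty partial result
    std::cout << std::sqrt(total) << "\n";    // prints the L2 norm, sqrt(26)
}
```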
The obvious solution is to add two additional template arguments to `xreducer`, which represent the functions `init` and `merge`. The new arguments can be provided with defaults that reproduce the current behavior.
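A hedged sketch of what the extended template could look like; the parameter names and the default `identity_init` are assumptions for illustration, not the actual xtensor declaration:

```cpp
#include <utility>

// Default for the new init step: plain assignment, as in the current
// implementation ("identity_init" is an assumed name, not real xtensor code).
struct identity_init
{
    template <class T>
    auto operator()(T&& v) const { return std::forward<T>(v); }
};

// With Init = identity_init and Merge = F, the extended reducer would
// behave exactly like the current one, so existing code keeps compiling.
template <class F, class CT, class X,
          class Init = identity_init,
          class Merge = F>
class xreducer;
```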
Do you have any comments or better ideas?
Wouldn't a functor class for reducers with the three functions (and sensible defaults) also be an option and (maybe) a cleaner design? Or are there drawbacks?
Btw. we should probably also plan for accumulators soon (e.g. `cumsum` and related).
@ukoethe I really like your idea!
Gathering the three functors into a single one as suggested by @wolfv (something like a generic `xreducer_functor` which accepts three functors, so we can still use existing functors) would avoid additional arguments to the `reduce` functions and help keep the number of template arguments of `xreducer` low; however, there may be drawbacks that I'm not aware of. What do you think?
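Something along these lines, perhaps; all names here are illustrative assumptions rather than the final API:

```cpp
#include <utility>

// Sketch of a generic xreducer_functor bundling the three reduction steps,
// so reduce() and xreducer keep a single functor parameter.
template <class Init, class Accumulate, class Merge>
struct xreducer_functor
{
    Init init;
    Accumulate accumulate;
    Merge merge;
};

// Wrapper so existing binary functors (e.g. std::plus<>) can still be used:
// init stays an identity, and the same functor serves accumulate and merge.
struct identity
{
    template <class T>
    auto operator()(T&& v) const { return std::forward<T>(v); }
};

template <class F>
auto make_xreducer_functor(F f)
{
    return xreducer_functor<identity, F, F>{identity{}, f, f};
}
```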
Ok, good suggestion.
Please see #436 for a proof-of-concept implementation.
Can this be closed?
From my point of view, it can be closed. I just kept it open as a reminder for the necessary follow-up changes in https://github.com/QuantStack/xtensor-blas.
I filed a separate issue in xtensor-blas. I think I'll get around to fixing it in xtensor-blas next week.
Computing the norm (or, actually, different types of norms) of an array is a very frequent requirement. At present, one needs the xtensor-blas extension and an external BLAS library to get a `norm()` function. I suggest that xtensor should support the most important norms natively by implementing them as ufuncs. These are computed elementwise, regardless of array dimension; thus, `norm_l2(matrix)` is the matrix's Frobenius norm. In addition, 2D arrays should support the induced norms (maximum of absolute row or column sums). The xtensor-blas library should be adapted to this naming convention. It should provide optimized versions of these functions (using BLAS primitives) and additional norms which require matrix decomposition algorithms not available in the xtensor core.
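To make the intended semantics concrete, here is a rough sketch of the proposed elementwise behavior, using only existing xtensor primitives (`norm_l2` being the suggested name, not an existing function):

```cpp
#include <cmath>
#include "xtensor/xarray.hpp"
#include "xtensor/xmath.hpp"

// Sketch of the proposed elementwise semantics; for a 2-D argument this
// yields the Frobenius norm, matching the naming convention above.
template <class E>
double norm_l2(const E& e)
{
    auto a = xt::abs(e);                 // elementwise magnitude, complex-safe
    return std::sqrt(xt::sum(a * a)());  // reduce over all elements, then sqrt
}
```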
The function name `norm()` should be avoided because its semantics are inconsistent between the C++ standard (which unfortunately defines `norm()` to compute the squared norm) and numpy.
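For reference, the C++ side of this inconsistency in a few lines:

```cpp
#include <complex>
#include <iostream>

int main()
{
    std::complex<double> z(3.0, 4.0);
    std::cout << std::abs(z) << "\n";   // 5  -- magnitude, what numpy calls the norm
    std::cout << std::norm(z) << "\n";  // 25 -- std::norm returns the squared magnitude
}
```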