In MLJ a table is always assumed to be features-as-columns. If we are allowing the MLJ user to input a matrix (possibly sparse) as in this PR instead, then for consistency this ought to be features-as-columns as well, but the core methods expect features-as-rows.
To make the interface consistent, one could change this line to `_reformat(X, ::Type{<:AbstractMatrix}) = X'` (i.e., add an adjoint). If the MLJ user supplies their input X (from MLJ) as the adjoint of a features-as-rows matrix, then the two adjoint operations reduce to a no-op, and there will be no loss of performance.
I'm kind of assuming here that the MulticlassPerceptron core method can handle any AbstractMatrix, including adjoints, which it probably should be capable of doing. Moreover, it can presumably detect when the user has passed data in a non-optimal format, and issue an `@info` recommending an alternative representation (if verbosity > 0).
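A minimal sketch of the adjoint-cancellation argument, assuming a hypothetical `_reformat` as proposed and a core method that stores data features-as-rows:

```julia
using LinearAlgebra

# Proposed MLJ-facing reformat: transpose so that MLJ's
# features-as-columns matrix arrives features-as-rows at the core.
_reformat(X, ::Type{<:AbstractMatrix}) = X'

# Core-style data, features-as-rows (3 features, 100 observations):
Xcore = rand(3, 100)

# The MLJ user presents it features-as-columns via a lazy adjoint:
Xmlj = Xcore'               # 100×3 Adjoint wrapper, no copy made

# _reformat takes the adjoint of the adjoint, which unwraps back
# to the original array, so no data is moved or copied:
X = _reformat(Xmlj, AbstractMatrix)
@assert X === Xcore         # same underlying array, zero cost
```

The `===` check makes the performance claim concrete: `adjoint` on an `Adjoint` wrapper simply returns its parent, so the round trip is free.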