Open PallHaraldsson opened 1 year ago
Looks like `stack` should do this?
```julia
julia> A = rand(3,3,2)
3×3×2 Array{Float64, 3}:
[:, :, 1] =
 0.000416832  0.0668757  0.744363
 0.550376     0.031153   0.19192
 0.441367     0.782104   0.300361

[:, :, 2] =
 0.345998  0.715015  0.148208
 0.680498  0.407081  0.972467
 0.84282   0.86965   0.623327

julia> B = ones(3,1,2)
3×1×2 Array{Float64, 3}:
[:, :, 1] =
 1.0
 1.0
 1.0

[:, :, 2] =
 1.0
 1.0
 1.0

julia> stack(*, eachslice(A,dims=3), eachslice(B,dims=3))
3×1×2 Array{Float64, 3}:
[:, :, 1] =
 0.8116554260229542
 0.7734490849576607
 1.5238319690478463

[:, :, 2] =
 1.2092213892542618
 2.0600467731706105
 2.335796748381668
```
I would contend that Julia provides other tools for handling "arrays of matrices" that make this pattern uncommon, if not un-idiomatic. In Julia, one can simply use a collection of type `Vector{Matrix{Float64}}` and then broadcast with `.\`. This compares directly to a cell array of matrices in MATLAB and `cellfun`. But if one wants everything in contiguous memory in Julia, one can still use `eachslice` and broadcast to achieve this, and (as the above poster points out) `stack` to (re-)assemble the result into a higher-dimensional array.
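For concreteness, a minimal sketch of both approaches (plain Base Julia; `stack` needs Julia 1.9+), using page-wise left division as the example:

```julia
A = rand(3, 3, 2)
B = ones(3, 1, 2)

# Contiguous-memory version: broadcast `\` over the matrix slices,
# then reassemble the results into a 3-d array with `stack`.
C = stack(map(\, eachslice(A; dims=3), eachslice(B; dims=3)))   # size (3, 1, 2)

# "Cell array" style: vectors of matrices, combined with broadcast `.\`.
As = [A[:, :, k] for k in 1:2]
Bs = [B[:, :, k] for k in 1:2]
Cs = As .\ Bs        # Vector{Matrix{Float64}} holding the same page-wise results
```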
In any case, I would say that Julia already has "sufficiently" easy ways to handle the pagewise functions.
As for `tensorprod`, this is slightly more tedious in native Julia because it sometimes requires intermediate reshaping and dimension permuting. However, if I recall correctly, the Tullio.jl package can handle this with some really clear syntax, and I imagine other tensor packages have nice ways of doing this, too. Tensor operations are beyond the scope of "conventional" linear algebra libraries, so I'm okay with this not living in the standard library. Most people doing interesting things with tensors are likely already using third-party packages.
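To illustrate the reshaping and permuting the native approach needs, here is a sketch contracting the second index of a 3-d array against the first index of a matrix; the Tullio.jl one-liner is shown as a comment (assuming that package is installed):

```julia
# Contract: C[i,k,l] = sum over j of A[i,j,k] * B[j,l]
A = rand(2, 3, 4)
B = rand(3, 5)

# Native Julia: move the contracted index to the end, flatten to a matrix,
# do one matrix multiply, then reshape back to the target dimensions.
Ap = permutedims(A, (1, 3, 2))                      # size (2, 4, 3)
C  = reshape(reshape(Ap, :, size(A, 2)) * B,        # (2*4, 3) * (3, 5)
             size(A, 1), size(A, 3), size(B, 2))    # size (2, 4, 5)

# With Tullio.jl the same contraction is one line:
# using Tullio
# @tullio C[i, k, l] := A[i, j, k] * B[j, l]
```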
Also, there seemed to be objections against `LinearAlgebra` containing `MultiLinearAlgebra`.
It's just something I noticed (in "Basic Arithmetic/Division"), and I didn't think through what it was. If it's multilinear algebra, then I'm OK with leaving it out and closing the issue. I'm still curious: is this available in some package? I actually don't think the standard library needs feature-parity with MATLAB's (let alone its toolboxes), but since the functions exist there, at least document such similarities, pointing to some package? [Since this is relatively new in MATLAB, it's obviously not used a lot.]
"Page-wise" matrix multiplication is `NNlib.batched_mul`. That exists partly because this operation has efficient GPU implementations (faster than `stack` + `eachslice` solutions). For `pagetranspose` it has `batched_transpose` (again matching what implementations support).
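A sketch of those batched calls (assuming NNlib.jl is installed; `batched_mul` multiplies the matrix pages along the third dimension):

```julia
using NNlib  # third-party package, assumed installed

A = rand(3, 3, 2)
B = rand(3, 1, 2)

C  = batched_mul(A, B)       # size (3, 1, 2): C[:, :, k] = A[:, :, k] * B[:, :, k]
At = batched_transpose(A)    # lazy page-wise transpose, like MATLAB's pagetranspose
C2 = batched_mul(At, B)      # size (3, 1, 2)
```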
For tensor contractions, it looks at first glance like what `tensorprod` allows is exactly what TensorOperations.jl allows.
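For comparison, the same kind of contraction written with TensorOperations.jl's `@tensor` macro (assuming that package is installed; the MATLAB call shown in the comment is a rough equivalent, not a verified translation):

```julia
using TensorOperations  # third-party package, assumed installed

A = rand(2, 3, 4)
B = rand(3, 5)

# C[i,k,l] = sum over j of A[i,j,k] * B[j,l],
# roughly tensorprod(A, B, 2, 1) in MATLAB
@tensor C[i, k, l] := A[i, j, k] * B[j, l]
```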
I discovered these (new in MATLAB R2022a) under Basic Arithmetic/Division:

- `pagemldivide`: page-wise left matrix divide
- `pagemrdivide`: page-wise right matrix divide

https://uk.mathworks.com/help/matlab/ref/pagemldivide.html

Do we need these for feature parity, or are they already in (or available in some package)?
Also new: https://uk.mathworks.com/help/matlab/ref/tensorprod.html

Only new as of R2020b: https://uk.mathworks.com/help/matlab/ref/pagemtimes.html and `pagetranspose`, and https://uk.mathworks.com/help/matlab/ref/pagectranspose.html
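For reference, a sketch of base-Julia one-liners that roughly cover these page functions (Julia 1.9+ for `stack`; the MATLAB names in the comments are the functions being approximated, not verified drop-in equivalents):

```julia
A = rand(3, 3, 2)
B = rand(3, 3, 2)

mt = stack(*, eachslice(A; dims=3), eachslice(B; dims=3))   # ~ pagemtimes(A, B)
ld = stack(\, eachslice(A; dims=3), eachslice(B; dims=3))   # ~ pagemldivide(A, B)
rd = stack(/, eachslice(A; dims=3), eachslice(B; dims=3))   # ~ pagemrdivide(A, B)
pt = permutedims(A, (2, 1, 3))                              # ~ pagetranspose(A)
pc = conj(permutedims(A, (2, 1, 3)))                        # ~ pagectranspose(A)
```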
And we might have this: Custom Binary Functions, `bsxfun`.
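MATLAB's `bsxfun` corresponds to ordinary broadcasting in Julia, which already expands singleton dimensions; a tiny sketch:

```julia
A = rand(3, 4)
b = rand(3, 1)

# bsxfun(@plus, A, b) in MATLAB is just dot-broadcasting in Julia:
C = A .+ b          # b's singleton second dimension expands to 4

# Any binary function broadcasts the same way, e.g. bsxfun(@max, A, b):
M = max.(A, b)
```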