ITensor / ITensors.jl

A Julia library for efficient tensor computations and tensor network calculations
https://itensor.org
Apache License 2.0

[BlockSparseArrays] BlockSparseArray functionality #1336

Open · opened 6 months ago by ogauthe

ogauthe commented 6 months ago

This issue lists functionalities and feature requests for BlockSparseArray.

using LinearAlgebra
using NDTensors.BlockSparseArrays: BlockSparseArray

a = BlockSparseArray{Float64}([2, 3], [2, 3]);
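# `a` is a 2×2-blocked 5×5 matrix; all of its blocks start out unstored (structurally zero)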

Feature requests/issues

[^blockarrays_v1]: https://github.com/ITensor/ITensors.jl/pull/1452, https://github.com/JuliaArrays/BlockArrays.jl/pull/255

mtfishman commented 6 months ago

Thanks. This currently works:

julia> using NDTensors.BlockSparseArrays: Block, BlockSparseArray, blocks

julia> using LinearAlgebra: I

julia> a = BlockSparseArray{Float64}([2, 2], [2, 2])
2×2-blocked 4×4 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}:
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0
 ──────────┼──────────
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0

julia> a[Block(2, 2)] = I(3)
3×3 Diagonal{Bool, Vector{Bool}}:
 1  ⋅  ⋅
 ⋅  1  ⋅
 ⋅  ⋅  1

julia> a
2×2-blocked 4×4 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}:
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0
 ──────────┼──────────
 0.0  0.0  │  1.0  0.0
 0.0  0.0  │  0.0  1.0

julia> using NDTensors.SparseArrayInterface: stored_indices

julia> stored_indices(blocks(a))
1-element Dictionaries.MappedDictionary{CartesianIndex{2}, CartesianIndex{2}, NDTensors.SparseArrayInterface.var"#1#2"{NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}}}, Tuple{Dictionaries.Indices{CartesianIndex{2}}}}
 CartesianIndex(2, 2) │ CartesianIndex(2, 2)

though using this alternative syntax is currently broken:

julia> a = BlockSparseArray{Float64}([2, 2], [2, 2])
2×2-blocked 4×4 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}:
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0
 ──────────┼──────────
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0

julia> a[Block(2), Block(2)] = I(3)
ERROR: DimensionMismatch: tried to assign (3, 3) array to (2, 2) block
Stacktrace:
 [1] setindex!(::BlockSparseArray{…}, ::Diagonal{…}, ::Block{…}, ::Block{…})
   @ BlockArrays ~/.julia/packages/BlockArrays/L5yjb/src/abstractblockarray.jl:165
 [2] top-level scope
   @ REPL[30]:1
Some type information was truncated. Use `show(err)` to see complete types.

I would have to think about whether it makes sense to support a[Block(2, 2)] .= 1.0 or a[Block(2), Block(2)] .= 1.0 if Block(2, 2) isn't initialized yet (probably we should support that).
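
As a concrete illustration of that proposal (a hedged sketch of hypothetical behavior, not what the package currently does), broadcasting into an unstored block would be equivalent to allocating the block first:

a = BlockSparseArray{Float64}([2, 2], [2, 2])
a[Block(2, 2)] = zeros(2, 2)  # explicitly store the block first
a[Block(2, 2)] .= 1.0         # then broadcast into the now-stored block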

mtfishman commented 6 months ago

In terms of svd and qr of a BlockSparseMatrix, it is well defined to perform them block-wise (i.e. map the problem to factorizing the individual non-zero blocks) as long as there is only one block per row or column, i.e. the matrix is diagonal up to block permutations of the rows and columns. So I was thinking we would have specialized block_svd and block_qr functions that check whether the matrix has that structure and perform svd or qr block-wise, and otherwise raise an error (they can check whether blocks(a) is a generalized permutation matrix).

I have a prototype of a QR decomposition of a BlockSparseMatrix here: https://github.com/ITensor/ITensors.jl/blob/v0.3.57/NDTensors/src/lib/BlockSparseArrays/src/backup/LinearAlgebraExt/qr.jl but it may be outdated since it was written based on an earlier version of BlockSparseArrays.
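
A minimal sketch of the structure check such a block_svd/block_qr could start from, using only functions shown earlier in this thread (the helper name is hypothetical):

using NDTensors.BlockSparseArrays: blocks
using NDTensors.SparseArrayInterface: stored_indices

# `blocks(a)` is a generalized permutation matrix when no block-row and no
# block-column holds more than one stored block, so a block-wise SVD/QR is
# well defined.
function is_block_permutation_matrix(a::AbstractMatrix)
  inds = collect(stored_indices(blocks(a)))
  return allunique(first.(Tuple.(inds))) && allunique(last.(Tuple.(inds)))
end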

mtfishman commented 6 months ago

Also note that slicing like this should work right now:

a[Block(1, 1)[1:2, 1:2]]

i.e. you can take slices within a specified block. See BlockArrays.jl for a reference on that slicing notation.

ogauthe commented 6 months ago

new feature request: Base.:*(::BlockSparseArray, x::Number) and Base.:/(::BlockSparseArray, x::Number)

I updated the first comment.

ogauthe commented 6 months ago

new issue: 1im * a does not change the element type of a when a is empty. It crashes if a contains data.
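
A minimal reproduction sketch, reusing the setup from the first comment (the comments describe the reported behavior):

using NDTensors.BlockSparseArrays: Block, BlockSparseArray

a = BlockSparseArray{Float64}([2, 3], [2, 3])
eltype(1im * a)              # reportedly stays Float64 while `a` stores no blocks (ComplexF64 expected)
a[Block(1, 1)] = ones(2, 2)
1im * a                      # reportedly crashes once `a` stores data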

ogauthe commented 6 months ago

new issue: similar(a::BlockSparseArray, eltype::Type) does not set the new eltype. The behavior of similar(a::BlockSparseArray, eltype::Type, size) depends on the type of size.
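
A sketch of the two reported calls (hypothetical values, per the description above):

b = similar(a, ComplexF64)          # reportedly keeps eltype Float64 instead of ComplexF64
c = similar(a, ComplexF64, (5, 5))  # reportedly behaves differently depending on whether
                                    # the size is a plain tuple of integers or blocked axes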

mtfishman commented 5 months ago

@ogauthe a number of these issues were fixed by #1332; I've updated the list in the first post accordingly. I added regression tests in #1360 for the ones that still need to be fixed, and additionally added placeholder tests that I've marked as broken in the BlockSparseArrays tests. Please continue to update this post with new issues you find, and/or make PRs with broken behavior marked with @test_broken, or with bug fixes and regression tests. I think the reorganization I did in #1332 will make these kinds of issues easier to fix going forward, though I have to admit it may be a bit tough to jump into that code right now: there are many abstraction layers involved, since the goal is for that code to be as minimal and general as possible.

ogauthe commented 4 months ago

Feature request: display(::Adjoint). The function Base.adjoint is well defined and returns an Adjoint type, but displaying the result throws an error. Closely related: it is not clear to me how GradedAxes.dual and BlockSparseArray should work together. The current behavior is to only accept a GradedUnitRange as an axis and to throw for a UnitRangeDual input. Base.adjoint does not dual the axes. I do not have an opinion yet on whether this should be modified.

mtfishman commented 4 months ago

I think ideally BlockSparseArray would accept any AbstractUnitRange.

Alternatively, BlockArrays.jl introduced an AbstractBlockedUnitRange in https://github.com/JuliaArrays/BlockArrays.jl/pull/348, we could define a BlockedUnitRangeDual <: AbstractBlockedUnitRange type which wraps another AbstractBlockedUnitRange. I think it will require some experimentation to see which one leads to simpler code.
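
A hedged sketch of that second option (the type parameters and supertype here are guesses for illustration; the actual proposal would subtype AbstractBlockedUnitRange):

# Hypothetical wrapper type, for illustration only.
struct BlockedUnitRangeDual{T,R<:AbstractUnitRange{T}} <: AbstractUnitRange{T}
  nondual::R
end
Base.first(r::BlockedUnitRangeDual) = first(r.nondual)
Base.last(r::BlockedUnitRangeDual) = last(r.nondual)
nondual(r::BlockedUnitRangeDual) = r.nondual
isdual(::BlockedUnitRangeDual) = true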

Good question about whether the axes should get dualed when Base.adjoint(::BlockSparseMatrix) is called. Probably we should dual the axes: for example, if we want operations like a' * a to work for a::BlockSparseMatrix with GradedUnitRange axes, I think we need to dual the axes.

ogauthe commented 4 months ago

The solution to accept any AbstractUnitRange is to replace the definition of Axes with

  Axes<:Tuple{Vararg{<:AbstractUnitRange,N}},

Then one can construct a BlockSparseArray with a dual axis. For a reason I do not really understand, the resulting array can be printed but not displayed. The error is not the same as the one raised when displaying a transpose or adjoint.

using NDTensors.BlockSparseArrays: BlockSparseArray
using NDTensors.GradedAxes: dual, gradedrange
using NDTensors.Sectors: U1

g1 = gradedrange([U1(0) => 1])
m1 = BlockSparseArray{Float64}(dual(g1), g1)

outputs

1×1-blocked 1×1 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{UnitRangeDual{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}}, Tuple{UnitRangeDual{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}:
Error showing value of type BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{UnitRangeDual{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}}, Tuple{UnitRangeDual{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}:
ERROR: MethodError: no method matching NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(::Int64)

Closest candidates are:
  (::Type{NDTensors.LabelledNumbers.LabelledInteger{Value, Label}} where {Value<:Integer, Label})(::Any, ::Any)
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/LabelledNumbers/src/labelledinteger.jl:2
  (::Type{T})(::T) where T<:Number
   @ Core boot.jl:792
  (::Type{T})(::BigFloat) where T<:Integer
   @ Base mpfr.jl:378
  ...

Stacktrace:
  [1] convert(::Type{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}, x::Int64)
    @ Base ./number.jl:7
  [2] cvt1
    @ ./essentials.jl:468 [inlined]
  [3] ntuple
    @ ./ntuple.jl:49 [inlined]
  [4] convert(::Type{Tuple{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, Int64}}, x::Tuple{Int64, Int64})
    @ Base ./essentials.jl:470
  [5] push!(a::Vector{Tuple{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, Int64}}, item::Tuple{Int64, Int64})
    @ Base ./array.jl:1118
  [6] alignment(io::IOContext{Base.TTY}, X::AbstractVecOrMat, rows::Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}, cols::Vector{Int64}, cols_if_complete::Int64, cols_otherwise::Int64, sep::Int64, ncols::Int64)
    @ Base ./arrayshow.jl:76
  [7] _print_matrix(io::IOContext{…}, X::AbstractVecOrMat, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64, rowsA::UnitRange{…}, colsA::UnitRange{…})
    @ Base ./arrayshow.jl:207
  [8] print_matrix(io::IOContext{…}, X::BlockSparseArray{…}, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64)
    @ Base ./arrayshow.jl:171
  [9] print_matrix
    @ ./arrayshow.jl:171 [inlined]
 [10] print_array
    @ ./arrayshow.jl:358 [inlined]
 [11] show(io::IOContext{Base.TTY}, ::MIME{Symbol("text/plain")}, X::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}})
    @ Base ./arrayshow.jl:399
 [12] (::REPL.var"#55#56"{REPL.REPLDisplay{REPL.LineEditREPL}, MIME{Symbol("text/plain")}, Base.RefValue{Any}})(io::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:273
 [13] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
 [14] display(d::REPL.REPLDisplay, mime::MIME{Symbol("text/plain")}, x::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:259
 [15] display
    @ ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:278 [inlined]
 [16] display(x::Any)
    @ Base.Multimedia ./multimedia.jl:340
 [17] #invokelatest#2
    @ ./essentials.jl:892 [inlined]
 [18] invokelatest
    @ ./essentials.jl:889 [inlined]
 [19] print_response(errio::IO, response::Any, show_value::Bool, have_color::Bool, specialdisplay::Union{Nothing, AbstractDisplay})
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:315
 [20] (::REPL.var"#57#58"{REPL.LineEditREPL, Pair{Any, Bool}, Bool, Bool})(io::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:284
 [21] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
 [22] print_response(repl::REPL.AbstractREPL, response::Any, show_value::Bool, have_color::Bool)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:282
 [23] (::REPL.var"#do_respond#80"{Bool, Bool, REPL.var"#93#103"{REPL.LineEditREPL, REPL.REPLHistoryProvider}, REPL.LineEditREPL, REPL.LineEdit.Prompt})(s::REPL.LineEdit.MIState, buf::Any, ok::Bool)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:911
 [24] #invokelatest#2
    @ ./essentials.jl:892 [inlined]
 [25] invokelatest
    @ ./essentials.jl:889 [inlined]
 [26] run_interface(terminal::REPL.Terminals.TextTerminal, m::REPL.LineEdit.ModalInterface, s::REPL.LineEdit.MIState)
    @ REPL.LineEdit ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/LineEdit.jl:2656
 [27] run_frontend(repl::REPL.LineEditREPL, backend::REPL.REPLBackendRef)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:1312
 [28] (::REPL.var"#62#68"{REPL.LineEditREPL, REPL.REPLBackendRef})()
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:386
Some type information was truncated. Use `show(err)` to see complete types.

mtfishman commented 4 months ago

Thanks for investigating. Generalizing the axes in that way seems like the right move. Hopefully that error is easy enough to circumvent.

ogauthe commented 4 months ago

I am continuing to explore the effect of dual. eachindex is broken on such an array. This affects ==.

julia> eachindex(m1)
ERROR: MethodError: no method matching AbstractUnitRange{…}(::NDTensors.GradedAxes.UnitRangeDual{…})

Closest candidates are:
  AbstractUnitRange{T}(::AbstractUnitRange{T}) where T
   @ Base range.jl:1308
  AbstractUnitRange{T}(::NDTensors.LabelledNumbers.LabelledUnitRange) where T
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/LabelledNumbers/src/labelledunitrange.jl:17
  AbstractUnitRange{Int64}(::Static.OptionallyStaticUnitRange)
   @ Static ~/.julia/packages/Static/6JeQI/src/ranges.jl:258
  ...

Stacktrace:
 [1] OrdinalRange{…}(r::NDTensors.GradedAxes.UnitRangeDual{…})
   @ Base ./range.jl:1313
 [2] convert(::Type{…}, r::NDTensors.GradedAxes.UnitRangeDual{…})
   @ Base ./range.jl:265
 [3] (::Base.IteratorsMD.var"#3#4")(r::NDTensors.GradedAxes.UnitRangeDual{…})
   @ Base.IteratorsMD ./multidimensional.jl:259
 [4] map
   @ ./tuple.jl:292 [inlined]
 [5] CartesianIndices(inds::Tuple{NDTensors.GradedAxes.UnitRangeDual{…}, BlockArrays.BlockedUnitRange{…}})
   @ Base.IteratorsMD ./multidimensional.jl:259
 [6] eachindex(::IndexCartesian, A::BlockSparseArray{…})
   @ Base.IteratorsMD ./multidimensional.jl:392
 [7] eachindex(A::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}})
   @ Base ./abstractarray.jl:378
 [8] top-level scope
   @ REPL[37]:1
Some type information was truncated. Use `show(err)` to see complete types.

ogauthe commented 3 months ago

issue: I cannot write to a slice of a block

a[BlockArrays.Block(1,1)][1:2,1:2] = ones((2,2))

does not modify a. This is probably related to "Fix initializing a block with broadcasting syntax".

ogauthe commented 3 months ago

issue: LinearAlgebra.norm(a) triggers AssertionError when a contains NaN

a[BlockArrays.Block(1,1)] = ones((2,2))
println(LinearAlgebra.norm(a))  # 2.0
a[BlockArrays.Block(1,1)][1, 1] = NaN
println(LinearAlgebra.norm(a[BlockArrays.Block(1,1)]))  # NaN
println(LinearAlgebra.norm(a))  # AssertionError

outputs

ERROR: AssertionError: op(output, (eltype(output))(f_notstored)) == output
Stacktrace:
  [1] #sparse_mapreduce#16
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:118 [inlined]
  [2] sparse_mapreduce
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:115 [inlined]
  [3] #mapreduce#29
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:82 [inlined]
  [4] mapreduce
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:81 [inlined]
  [5] generic_normInf
    @ ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/LinearAlgebra/src/generic.jl:453 [inlined]
  [6] normInf
    @ ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/LinearAlgebra/src/generic.jl:527 [inlined]
  [7] generic_norm2(x::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}})
    @ LinearAlgebra ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/LinearAlgebra/src/generic.jl:463
  [8] norm2
    @ ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/LinearAlgebra/src/generic.jl:529 [inlined]
  [9] norm(itr::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}}, p::Int64)
    @ LinearAlgebra ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/LinearAlgebra/src/generic.jl:598
 [10] _norm
    @ ~/.julia/packages/ArrayLayouts/i2w0I/src/ArrayLayouts.jl:297 [inlined]
 [11] norm
    @ ~/.julia/packages/ArrayLayouts/i2w0I/src/ArrayLayouts.jl:298 [inlined]
 [12] norm(A::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}})
    @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/ArrayLayouts.jl:298
 [13] top-level scope
    @ REPL[1220]:1
Some type information was truncated. Use `show(err)` to see complete types.

I checked that the same computation with Inf in place of NaN gives the correct result; presumably the assertion fails only for NaN because NaN == NaN is false.

ogauthe commented 3 months ago

issue: a block can be written with an invalid shape. An error should be raised.

a = BlockSparseArray{Float64}([2, 3], [2, 3])
println(size(a))  # (5,5)
b = BlockArrays.Block(1,1)
println(size(a[b]))  # (2,2)
a[b] = ones((3,3))
println(size(a))  # (5,5)
println(size(a[b]))  # (3,3)
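
A hedged sketch of the missing check (a hypothetical helper, not the library's code), mirroring the DimensionMismatch that the a[Block(2), Block(2)] syntax already raises:

using BlockArrays: Block

function setblock_checked!(a, value, b::Block)
  bsize = size(view(a, b))
  size(value) == bsize ||
    throw(DimensionMismatch("tried to assign $(size(value)) array to $bsize block"))
  a[b] = value
  return a
end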

ogauthe commented 3 months ago

Thanks to #1467, I can now initialize a BlockSparseArray with a dual axis. However, contracting such arrays is broken:

using NDTensors.GradedAxes: GradedAxes, dual, gradedrange
using NDTensors.Sectors: U1
using NDTensors.BlockSparseArrays: BlockSparseArray

g1 = gradedrange([U1(0) => 1, U1(1) => 2, U1(2) => 3])
g2 = gradedrange([U1(0) => 2, U1(1) => 2, U1(3) => 1])

m1 = BlockSparseArray{Float64}(g1, GradedAxes.dual(g2));  # display crash
m2 = BlockSparseArray{Float64}(g2, GradedAxes.dual(g1));  # display crash

m12 = m1 * m2;  # MethodError
m21 = m2 * m1;  # MethodError
ERROR: MethodError: no method matching AbstractUnitRange{…}(::NDTensors.GradedAxes.UnitRangeDual{…})

Closest candidates are:
  AbstractUnitRange{T}(::AbstractUnitRange{T}) where T
   @ Base range.jl:1308
  AbstractUnitRange{T}(::NDTensors.LabelledNumbers.LabelledUnitRange) where T
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/LabelledNumbers/src/labelledunitrange.jl:17
  AbstractUnitRange{Int64}(::Static.OptionallyStaticUnitRange)
   @ Static ~/.julia/packages/Static/6JeQI/src/ranges.jl:258
  ...

Stacktrace:
  [1] OrdinalRange{…}(r::NDTensors.GradedAxes.UnitRangeDual{…})
    @ Base ./range.jl:1313
  [2] convert(::Type{…}, r::NDTensors.GradedAxes.UnitRangeDual{…})
    @ Base ./range.jl:265
  [3] (::Base.IteratorsMD.var"#3#4")(r::NDTensors.GradedAxes.UnitRangeDual{…})
    @ Base.IteratorsMD ./multidimensional.jl:259
  [4] map
    @ ./tuple.jl:292 [inlined]
  [5] CartesianIndices(inds::Tuple{BlockArrays.BlockedUnitRange{…}, NDTensors.GradedAxes.UnitRangeDual{…}})
    @ Base.IteratorsMD ./multidimensional.jl:259
  [6] eachindex(::IndexCartesian, A::BlockSparseArray{…})
    @ Base.IteratorsMD ./multidimensional.jl:392
  [7] eachindex
    @ ./abstractarray.jl:378 [inlined]
  [8] fill!(A::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}}, x::Float64)
    @ Base ./multidimensional.jl:1113
  [9] zero!(::BlockArrays.BlockLayout{…}, A::BlockSparseArray{…})
    @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/ArrayLayouts.jl:289
 [10] zero!(A::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}})
    @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/ArrayLayouts.jl:288
 [11] _fill_copyto!(dest::BlockSparseArray{…}, C::FillArrays.Zeros{…})
    @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/muladd.jl:80
 [12] copyto!
    @ ~/.julia/packages/ArrayLayouts/i2w0I/src/muladd.jl:82 [inlined]
 [13] copy(M::ArrayLayouts.MulAdd{…})
    @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/muladd.jl:77
 [14] copy
    @ ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:130 [inlined]
 [15] materialize
    @ ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:127 [inlined]
 [16] mul
    @ ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:128 [inlined]
 [17] *(A::BlockSparseArray{…}, B::BlockSparseArray{…})
    @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:213
 [18] top-level scope
    @ REPL[13]:1
Some type information was truncated. Use `show(err)` to see complete types.
ogauthe commented 3 months ago

When no dual axis is involved, m' * m and m * m' trigger a StackOverflowError:

Stacktrace:
     [1] muladd!(α::Float64, A::BlockSparseArray{…}, B::LinearAlgebra.Adjoint{…}, β::Float64, C::BlockSparseArray{…}; kw::@Kwargs{})
       @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/muladd.jl:75
     [2] mul!(dest::BlockSparseArray{…}, A::BlockSparseArray{…}, B::LinearAlgebra.Adjoint{…}, α::Float64, β::Float64)
       @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:134
     [3] mul!(dest::BlockSparseArray{…}, A::BlockSparseArray{…}, B::LinearAlgebra.Adjoint{…}, α::Float64, β::Float64)
       @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:271
     [4] materialize!(m::ArrayLayouts.MulAdd{…})
       @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/blocksparsearrayinterface/arraylayouts.jl:14
--- the last 4 lines are repeated 10900 more times ---
 [43605] muladd!(α::Float64, A::BlockSparseArray{…}, B::LinearAlgebra.Adjoint{…}, β::Float64, C::BlockSparseArray{…}; kw::@Kwargs{…})
       @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/muladd.jl:75
 [43606] copyto!
       @ ~/.julia/packages/ArrayLayouts/i2w0I/src/muladd.jl:82 [inlined]
 [43607] copy(M::ArrayLayouts.MulAdd{…})
       @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/muladd.jl:77
 [43608] copy
       @ ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:130 [inlined]
 [43609] materialize
       @ ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:127 [inlined]
 [43610] mul
       @ ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:128 [inlined]
 [43611] *(A::BlockSparseArray{…}, B::LinearAlgebra.Adjoint{…})
       @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:290
Some type information was truncated. Use `show(err)` to see complete types.

ogauthe commented 3 months ago

issue: display error when writing a block

using BlockArrays: BlockArrays
using NDTensors.BlockSparseArrays: BlockSparseArrays
using NDTensors.GradedAxes: GradedAxes
using NDTensors.Sectors: U1

g = GradedAxes.gradedrange([U1(0) => 1])
m = BlockSparseArrays.BlockSparseArray{Float64}(g, g)
m[BlockArrays.Block(1,1)] .= 1
1×1 view(::NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}, BlockSlice(Block(1),1:1), BlockSlice(Block(1),1:1)) with eltype Float64 with indices NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0])×NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):
Error showing value of type SubArray{Float64, 2, NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}, Tuple{BlockArrays.BlockSlice{BlockArrays.Block{1, Int64}, UnitRange{Int64}}, BlockArrays.BlockSlice{BlockArrays.Block{1, Int64}, UnitRange{Int64}}}, false}:
ERROR: MethodError: no method matching NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(::Int64)

Closest candidates are:
  (::Type{NDTensors.LabelledNumbers.LabelledInteger{Value, Label}} where {Value<:Integer, Label})(::Any, ::Any)
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/LabelledNumbers/src/labelledinteger.jl:2
  (::Type{T})(::T) where T<:Number
   @ Core boot.jl:792
  (::Type{IntT})(::NDTensors.Block{1}) where IntT<:Integer
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/blocksparse/block.jl:63
  ...

Stacktrace:
  [1] convert(::Type{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}, x::Int64)
    @ Base ./number.jl:7
  [2] cvt1
    @ ./essentials.jl:468 [inlined]
  [3] ntuple
    @ ./ntuple.jl:49 [inlined]
  [4] convert(::Type{Tuple{…}}, x::Tuple{Int64, Int64})
    @ Base ./essentials.jl:470
  [5] push!(a::Vector{Tuple{…}}, item::Tuple{Int64, Int64})
    @ Base ./array.jl:1118
  [6] alignment(io::IOContext{…}, X::AbstractVecOrMat, rows::Vector{…}, cols::Vector{…}, cols_if_complete::Int64, cols_otherwise::Int64, sep::Int64, ncols::Int64)
    @ Base ./arrayshow.jl:76
  [7] _print_matrix(io::IOContext{…}, X::AbstractVecOrMat, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64, rowsA::UnitRange{…}, colsA::UnitRange{…})
    @ Base ./arrayshow.jl:207
  [8] print_matrix(io::IOContext{…}, X::SubArray{…}, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64)
    @ Base ./arrayshow.jl:171
  [9] print_matrix
    @ ./arrayshow.jl:171 [inlined]
 [10] print_array
    @ ./arrayshow.jl:358 [inlined]
 [11] show(io::IOContext{…}, ::MIME{…}, X::SubArray{…})
    @ Base ./arrayshow.jl:399
 [12] (::REPL.var"#55#56"{REPL.REPLDisplay{REPL.LineEditREPL}, MIME{Symbol("text/plain")}, Base.RefValue{Any}})(io::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:273
 [13] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
 [14] display(d::REPL.REPLDisplay, mime::MIME{Symbol("text/plain")}, x::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:259
 [15] display
    @ ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:278 [inlined]
 [16] display(x::Any)
    @ Base.Multimedia ./multimedia.jl:340
 [17] #invokelatest#2
    @ ./essentials.jl:892 [inlined]
 [18] invokelatest
    @ ./essentials.jl:889 [inlined]
 [19] print_response(errio::IO, response::Any, show_value::Bool, have_color::Bool, specialdisplay::Union{…})
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:315
 [20] (::REPL.var"#57#58"{REPL.LineEditREPL, Pair{Any, Bool}, Bool, Bool})(io::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:284
 [21] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
 [22] print_response(repl::REPL.AbstractREPL, response::Any, show_value::Bool, have_color::Bool)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:282
 [23] (::REPL.var"#do_respond#80"{…})(s::REPL.LineEdit.MIState, buf::Any, ok::Bool)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:911
 [24] #invokelatest#2
    @ ./essentials.jl:892 [inlined]
 [25] invokelatest
    @ ./essentials.jl:889 [inlined]
 [26] run_interface(terminal::REPL.Terminals.TextTerminal, m::REPL.LineEdit.ModalInterface, s::REPL.LineEdit.MIState)
    @ REPL.LineEdit ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/LineEdit.jl:2656
 [27] run_frontend(repl::REPL.LineEditREPL, backend::REPL.REPLBackendRef)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:1312
 [28] (::REPL.var"#62#68"{REPL.LineEditREPL, REPL.REPLBackendRef})()
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:386
Some type information was truncated. Use `show(err)` to see complete types.

This looks like the same error as previously triggered by dual axes.

mtfishman commented 3 months ago

Thanks for the report; it looks like this is more generally a problem with printing views of blocks of a BlockSparseArray with GradedUnitRange axes:

using BlockArrays: Block
using NDTensors.BlockSparseArrays: BlockSparseArray
using NDTensors.GradedAxes: gradedrange
using NDTensors.Sectors: U1
r = gradedrange([U1(0) => 1])
a = BlockSparseArray{Float64}(r, r)
@view a[Block(1, 1)]

mtfishman commented 3 months ago

As with other related issues, this kind of thing will get fixed when I rewrite GradedAxes based on BlockArrays.jl v1. As a workaround for now, I could just strip the sector labels from the GradedUnitRange when printing. Also, if you need to print the block you can copy it, i.e. copy(@view(a[Block(1, 1)])) or a[Block(1, 1)] seem to print fine.

ogauthe commented 3 months ago

There is no real need for a quick fix just for display; it can wait for the rewrite.

ogauthe commented 3 months ago

issue: dual is not preserved under + and - operations

using NDTensors.BlockSparseArrays: BlockSparseArray
using NDTensors.GradedAxes: GradedAxes, dual
using NDTensors.Sectors: U1

g = GradedAxes.gradedrange([U1(0) => 1])
m = BlockSparseArray{Float64}(dual(g), g)
GradedAxes.isdual(axes(m)[1])  # true
GradedAxes.isdual(axes(m + m)[1])  # false
GradedAxes.isdual(axes(m - m)[1])  # false

ogauthe commented 2 months ago

issue: slicing a view of a BlockSparseArray is ambiguous when a dual axis is involved, as of main at 1bc091a03ca2b9fd5b205c785d01f4626c66d267

using BlockArrays: BlockArrays

using NDTensors.BlockSparseArrays: BlockSparseArrays
using NDTensors.GradedAxes: GradedAxes

struct U1
  n::Int
end
GradedAxes.dual(c::U1) = U1(-c.n)
g = GradedAxes.gradedrange([U1(0) => 1])

m1 = BlockSparseArrays.BlockSparseArray{Float64}(g, g)
m2 = BlockSparseArrays.BlockSparseArray{Float64}(GradedAxes.dual(g), g)
m3 = BlockSparseArrays.BlockSparseArray{Float64}(g, GradedAxes.dual(g))
m4 = BlockSparseArrays.BlockSparseArray{Float64}(GradedAxes.dual(g), GradedAxes.dual(g))

view(m1, BlockArrays.Block(1, 1))[1:1, 1:1]  # OK
view(m2, BlockArrays.Block(1, 1))[1:1, 1:1]  # MethodError
view(m3, BlockArrays.Block(1, 1))[1:1, 1:1]  # MethodError
view(m4, BlockArrays.Block(1, 1))[1:1, 1:1]  # MethodError
ERROR: MethodError: getindex(::BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, NDTensors.Sectors.U1{Int64}}}}, ::BlockArrays.BlockSlice{BlockArrays.BlockIndexRange{1, Tuple{BlockArrays.BlockSlice{BlockArrays.BlockRange{1, Tuple{UnitRange{Int64}}}, UnitRange{Int64}}}}, UnitRange{Int64}}) is ambiguous.

Candidates:
  getindex(b::BlockArrays.BlockedUnitRange, KR::BlockArrays.BlockSlice)
    @ BlockArrays ~/.julia/packages/BlockArrays/L5yjb/src/blockaxis.jl:289
  getindex(a::BlockArrays.BlockedUnitRange{BlockLasts} where BlockLasts<:(Vector{<:NDTensors.LabelledNumbers.LabelledInteger}), indices::AbstractUnitRange{<:Integer})
    @ NDTensors.GradedAxes ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/GradedAxes/src/gradedunitrange.jl:212
  getindex(a::BlockArrays.BlockedUnitRange{BlockLasts} where BlockLasts<:(Vector{<:NDTensors.LabelledNumbers.LabelledInteger}), indices)
    @ NDTensors.GradedAxes ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/GradedAxes/src/gradedunitrange.jl:208
  getindex(r::AbstractUnitRange, s::AbstractUnitRange{T}) where T<:Integer
    @ Base range.jl:986

Possible fix, define
  getindex(::BlockArrays.BlockedUnitRange{…} where BlockLasts<:(Vector{…}), ::BlockArrays.BlockSlice)

Stacktrace:
  [1] getindex(a::NDTensors.GradedAxes.UnitRangeDual{…}, indices::BlockArrays.BlockSlice{…})
    @ NDTensors.GradedAxes ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/GradedAxes/src/unitrangedual.jl:22
  [2] blockedunitrange_getindices(a::NDTensors.GradedAxes.UnitRangeDual{…}, indices::BlockArrays.BlockSlice{…})
    @ NDTensors.GradedAxes ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/GradedAxes/src/blockedunitrange.jl:40
  [3] blocksparse_unblock(a::NDTensors.BlockSparseArrays.BlockSparseArray{…}, inds::Tuple{…}, I::Tuple{…})
    @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/wrappedabstractblocksparsearray.jl:81
  [4] unblock(a::NDTensors.BlockSparseArrays.BlockSparseArray{…}, inds::Tuple{…}, I::Tuple{…})
    @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/wrappedabstractblocksparsearray.jl:91
  [5] blocksparse_to_indices(a::NDTensors.BlockSparseArrays.BlockSparseArray{…}, inds::Tuple{…}, I::Tuple{…})
    @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/wrappedabstractblocksparsearray.jl:18
  [6] to_indices(a::NDTensors.BlockSparseArrays.BlockSparseArray{…}, inds::Tuple{…}, I::Tuple{…})
    @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/wrappedabstractblocksparsearray.jl:36
  [7] blocksparse_to_indices(a::NDTensors.BlockSparseArrays.BlockSparseArray{…}, I::Tuple{…})
    @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/wrappedabstractblocksparsearray.jl:23
  [8] to_indices(a::NDTensors.BlockSparseArrays.BlockSparseArray{…}, I::Tuple{…})
    @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/wrappedabstractblocksparsearray.jl:67
  [9] _maybe_reindex
    @ ./subarray.jl:242 [inlined]
 [10] _maybe_reindex (repeats 2 times)
    @ ./subarray.jl:239 [inlined]
 [11] _maybe_reindex
    @ ./subarray.jl:233 [inlined]
 [12] unsafe_view
    @ ./subarray.jl:232 [inlined]
 [13] view
    @ ./subarray.jl:186 [inlined]
 [14] layout_getindex
    @ ~/.julia/packages/ArrayLayouts/i2w0I/src/ArrayLayouts.jl:134 [inlined]
 [15] getindex(::SubArray{…}, ::UnitRange{…}, ::UnitRange{…})
    @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/wrappedabstractblocksparsearray.jl:127
 [16] top-level scope
    @ REPL[20]:1
Some type information was truncated. Use `show(err)` to see complete types.

The same error occurs for view(m2, BlockArrays.Block(1, 1)[1:1, 1:1]).

ogauthe commented 2 months ago

So I finally understood the change in behavior that led to my comment in #1491 and to #1493. With the new behavior introduced at de78e9fa45af3fae8470d85de2938be77ba09540, size(view(m1, BlockArrays.Block(1,1))[1:1,1:1]) is now a tuple of LabelledIntegers. I think this is misleading and should be a tuple of Int. That may fix the display error mentioned in #1491, though I am not sure. #1493 needs to be fixed anyway.

typeof(size(view(m1, BlockArrays.Block(1, 1))[1:1, 1:1]))

# 1bc091a03ca2b9fd5b205c785d01f4626c66d267
Tuple{Int64, Int64}

# de78e9fa45af3fae8470d85de2938be77ba09540
Tuple{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}, NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}}

mtfishman commented 2 months ago

I'm not sure I would consider it "misleading"; I think it could be reasonable to output the size as a tuple of LabelledIntegers. The question is more why that causes issues. Mostly, a LabelledInteger should just act like any other Integer and ignore the label. If it is impractical or too annoying for the size to be made of LabelledIntegers then we can remove the labels, but let's investigate a bit first.

ogauthe commented 2 months ago

I find it surprising that the output type of size for a slice of a view differs from that of any other combination of view and getindex, including e.g. size(view(m1, BlockArrays.Block(1, 1))[1:1,..]). However, I agree that a LabelledInteger can act as an Integer in most cases.

mtfishman commented 2 months ago

My view is that the block labels should get propagated as much as possible when it is unambiguous to do so; they may get dropped out of practicality, because it is too complicated or ambiguous to propagate them, or deliberately, because they cause some issue in code they propagate to. In the example you give, they get dropped because x[1:1, ...] instantiates the view and returns a concrete Array, so it is reasonable for them to get dropped, though we could consider making an AbstractArray type with labelled axes at some point to propagate them further.

I could see an argument that size(view(a, I...)) and size(a[I...]) should always have the same type; maybe that is the argument you are trying to make. That could be a reasonable goal to shoot for, but as long as the outputs act the same in generic code, the current behavior seems ok to me for the time being.

mtfishman commented 2 months ago

I think the issue is again related to the current design of GradedAxes and will be easier to fix once I upgrade to BlockArrays v1. The issue is basically that the elements of a GradedUnitRange are LabelledIntegers, but it is forced to be a subtype of AbstractUnitRange{Int} because of the design of BlockArrays v0.x:

julia> r = gradedrange([U1(0) => 2, U1(1) => 2])

julia> r[1]
NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}(1, U1(0))

julia> eltype(r)
NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}

julia> r isa AbstractUnitRange{Int}
true

That leads to all sorts of issues in generic Julia code, since the eltype and the declared supertype are inconsistent.

ogauthe commented 2 months ago

issue: TensorAlgebra.contract does not support views of BlockSparseArrays (main at 141399083198d3022a5daaf6e7da3e37ad894c4d)

using BlockArrays: BlockArrays
using NDTensors.BlockSparseArrays: BlockSparseArrays
using NDTensors.GradedAxes: GradedAxes
using NDTensors.TensorAlgebra: TensorAlgebra

ba = BlockArrays.BlockArray{Float64}(undef, [2], [2])
g = GradedAxes.gradedrange(['a' => 4])
bsa = BlockSparseArrays.BlockSparseArray{Float64}(g, g)
v = view(bsa, BlockArrays.Block(1,1))

f(a) = TensorAlgebra.contract(a, (1,2), ones((size(a,2),1)), (2,3))

f(view(ba, BlockArrays.Block(1,1)))  # Ok
f(Array(v))    # Ok
f(v)  # MethodError
ERROR: MethodError: no method matching NDTensors.BlockSparseArrays.BlockSparseStorage{NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{…}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{…}}, BlockArrays.BlockedUnitRange{Vector{…}}}}}(::Vector{Union{Nothing, CartesianIndex{2}}}, ::Vector{Union{}})

Closest candidates are:
  (::Type{NDTensors.BlockSparseArrays.BlockSparseStorage{Arr}} where Arr<:NDTensors.BlockSparseArrays.AbstractBlockSparseArray)(::Any)
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/sparsearrayinterface.jl:6

Stacktrace:
  [1] sparse_storage(a::SubArray{Float64, 2, NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{…}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{…}}, BlockArrays.BlockedUnitRange{Vector{…}}}}, Tuple{BlockArrays.BlockSlice{BlockArrays.Block{1, Int64}, UnitRange{Int64}}, BlockArrays.BlockSlice{BlockArrays.Block{1, Int64}, UnitRange{Int64}}}, false})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/wrappers.jl:48
  [2] sparse_storage(a::PermutedDimsArray{Float64, 2, (1, 2), (1, 2), SubArray{Float64, 2, NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{…}, 2, NDTensors.BlockSparseArrays.BlockZero{…}}, Tuple{BlockArrays.BlockedUnitRange{…}, BlockArrays.BlockedUnitRange{…}}}, Tuple{BlockArrays.BlockSlice{BlockArrays.Block{…}, UnitRange{…}}, BlockArrays.BlockSlice{BlockArrays.Block{…}, UnitRange{…}}}, false}})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/wrappers.jl:12
  [3] storage_indices(a::PermutedDimsArray{Float64, 2, (1, 2), (1, 2), SubArray{Float64, 2, NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{…}, 2, NDTensors.BlockSparseArrays.BlockZero{…}}, Tuple{BlockArrays.BlockedUnitRange{…}, BlockArrays.BlockedUnitRange{…}}}, Tuple{BlockArrays.BlockSlice{BlockArrays.Block{…}, UnitRange{…}}, BlockArrays.BlockSlice{BlockArrays.Block{…}, UnitRange{…}}}, false}})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/indexing.jl:41
  [4] stored_indices(a::PermutedDimsArray{Float64, 2, (1, 2), (1, 2), SubArray{Float64, 2, NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{…}, 2, NDTensors.BlockSparseArrays.BlockZero{…}}, Tuple{BlockArrays.BlockedUnitRange{…}, BlockArrays.BlockedUnitRange{…}}}, Tuple{BlockArrays.BlockSlice{BlockArrays.Block{…}, UnitRange{…}}, BlockArrays.BlockSlice{BlockArrays.Block{…}, UnitRange{…}}}, false}})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/interface_optional.jl:26
  [5] _broadcast_getindex_evalf
    @ ./broadcast.jl:709 [inlined]
  [6] _broadcast_getindex
    @ ./broadcast.jl:682 [inlined]
  [7] (::Base.Broadcast.var"#31#32"{Base.Broadcast.Broadcasted{Base.Broadcast.Style{Tuple}, Nothing, typeof(NDTensors.SparseArrayInterface.stored_indices), Tuple{Tuple{PermutedDimsArray{Float64, 2, (1, 2), (1, 2), SubArray{Float64, 2, NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}}, Tuple{BlockArrays.BlockSlice{…}, BlockArrays.BlockSlice{…}}, false}}}}}})(k::Int64)
    @ Base.Broadcast ./broadcast.jl:1118
  [8] ntuple
    @ ./ntuple.jl:48 [inlined]
  [9] copy
    @ ./broadcast.jl:1118 [inlined]
 [10] materialize
    @ ./broadcast.jl:903 [inlined]
 [11] promote_stored_indices(as::PermutedDimsArray{Float64, 2, (1, 2), (1, 2), SubArray{Float64, 2, NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{…}, 2, NDTensors.BlockSparseArrays.BlockZero{…}}, Tuple{BlockArrays.BlockedUnitRange{…}, BlockArrays.BlockedUnitRange{…}}}, Tuple{BlockArrays.BlockSlice{BlockArrays.Block{…}, UnitRange{…}}, BlockArrays.BlockSlice{BlockArrays.Block{…}, UnitRange{…}}}, false}})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:74
 [12] sparse_map_stored!(f::Function, a_dest::Matrix{Float64}, as::PermutedDimsArray{Float64, 2, (1, 2), (1, 2), SubArray{Float64, 2, NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}}, Tuple{BlockArrays.BlockSlice{…}, BlockArrays.BlockSlice{…}}, false}})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:80
 [13] sparse_map!(::Base.Broadcast.DefaultArrayStyle{2}, f::Function, a_dest::Matrix{Float64}, as::PermutedDimsArray{Float64, 2, (1, 2), (1, 2), SubArray{Float64, 2, NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}}, Tuple{BlockArrays.BlockSlice{…}, BlockArrays.BlockSlice{…}}, false}})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:101
 [14] sparse_map!(f::Function, a_dest::Matrix{Float64}, as::PermutedDimsArray{Float64, 2, (1, 2), (1, 2), SubArray{Float64, 2, NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}}, Tuple{BlockArrays.BlockSlice{…}, BlockArrays.BlockSlice{…}}, false}})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:93
 [15] sparse_copyto!(dest::Matrix{Float64}, src::PermutedDimsArray{Float64, 2, (1, 2), (1, 2), SubArray{Float64, 2, NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{…}, 2, NDTensors.BlockSparseArrays.BlockZero{…}}, Tuple{BlockArrays.BlockedUnitRange{…}, BlockArrays.BlockedUnitRange{…}}}, Tuple{BlockArrays.BlockSlice{BlockArrays.Block{…}, UnitRange{…}}, BlockArrays.BlockSlice{BlockArrays.Block{…}, UnitRange{…}}}, false}})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/copyto.jl:8
 [16] sparse_permutedims!(dest::Matrix{Float64}, src::SubArray{Float64, 2, NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{…}, 2, NDTensors.BlockSparseArrays.BlockZero{…}}, Tuple{BlockArrays.BlockedUnitRange{…}, BlockArrays.BlockedUnitRange{…}}}, Tuple{BlockArrays.BlockSlice{BlockArrays.Block{…}, UnitRange{…}}, BlockArrays.BlockSlice{BlockArrays.Block{…}, UnitRange{…}}}, false}, perm::Tuple{Int64, Int64})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/copyto.jl:13
 [17] permutedims!
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:97 [inlined]
 [18] permutedims
    @ ./permuteddimsarray.jl:145 [inlined]
 [19] _permutedims
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/BaseExtensions/permutedims.jl:16 [inlined]
 [20] fusedims(a::SubArray{Float64, 2, NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}}, Tuple{BlockArrays.BlockSlice{…}, BlockArrays.BlockSlice{…}}, false}, blockedperm::NDTensors.TensorAlgebra.BlockedPermutation{2, 2, Tuple{Tuple{…}, Tuple{…}}})
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/fusedims.jl:67
 [21] contract!(alg::NDTensors.BackendSelection.Algorithm{…}, a_dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, biperm_dest::NDTensors.TensorAlgebra.BlockedPermutation{…}, a1::SubArray{…}, biperm1::NDTensors.TensorAlgebra.BlockedPermutation{…}, a2::Matrix{…}, biperm2::NDTensors.TensorAlgebra.BlockedPermutation{…}, α::Bool, β::Bool)
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract_matricize/contract.jl:16
 [22] contract(alg::NDTensors.BackendSelection.Algorithm{:matricize, @NamedTuple{}}, biperm_dest::NDTensors.TensorAlgebra.BlockedPermutation{2, 2, Tuple{…}}, a1::SubArray{Float64, 2, NDTensors.BlockSparseArrays.BlockSparseArray{…}, Tuple{…}, false}, biperm1::NDTensors.TensorAlgebra.BlockedPermutation{2, 2, Tuple{…}}, a2::Matrix{Float64}, biperm2::NDTensors.TensorAlgebra.BlockedPermutation{2, 2, Tuple{…}}, α::Bool; kwargs::@Kwargs{})
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:118
 [23] contract(alg::NDTensors.BackendSelection.Algorithm{:matricize, @NamedTuple{}}, biperm_dest::NDTensors.TensorAlgebra.BlockedPermutation{2, 2, Tuple{…}}, a1::SubArray{Float64, 2, NDTensors.BlockSparseArrays.BlockSparseArray{…}, Tuple{…}, false}, biperm1::NDTensors.TensorAlgebra.BlockedPermutation{2, 2, Tuple{…}}, a2::Matrix{Float64}, biperm2::NDTensors.TensorAlgebra.BlockedPermutation{2, 2, Tuple{…}}, α::Bool)
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:107
 [24] contract(alg::NDTensors.BackendSelection.Algorithm{:matricize, @NamedTuple{}}, labels_dest::Tuple{Int64, Int64}, a1::SubArray{Float64, 2, NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}}, Tuple{BlockArrays.BlockSlice{…}, BlockArrays.BlockSlice{…}}, false}, labels1::Tuple{Int64, Int64}, a2::Matrix{Float64}, labels2::Tuple{Int64, Int64}, α::Bool; kwargs::@Kwargs{})
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:88
 [25] contract(alg::NDTensors.BackendSelection.Algorithm{:matricize, @NamedTuple{}}, labels_dest::Tuple{Int64, Int64}, a1::SubArray{Float64, 2, NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}}, Tuple{BlockArrays.BlockSlice{…}, BlockArrays.BlockSlice{…}}, false}, labels1::Tuple{Int64, Int64}, a2::Matrix{Float64}, labels2::Tuple{Int64, Int64}, α::Bool)
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:77
 [26] contract(alg::NDTensors.BackendSelection.Algorithm{:matricize, @NamedTuple{}}, a1::SubArray{Float64, 2, NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}}, Tuple{BlockArrays.BlockSlice{…}, BlockArrays.BlockSlice{…}}, false}, labels1::Tuple{Int64, Int64}, a2::Matrix{Float64}, labels2::Tuple{Int64, Int64}, α::Bool; kwargs::@Kwargs{})
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:45
 [27] contract
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:35 [inlined]
 [28] #contract#31
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:32 [inlined]
 [29] contract (repeats 2 times)
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:23 [inlined]
 [30] f(a::SubArray{Float64, 2, NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{…}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{…}}, BlockArrays.BlockedUnitRange{Vector{…}}}}, Tuple{BlockArrays.BlockSlice{BlockArrays.Block{1, Int64}, UnitRange{Int64}}, BlockArrays.BlockSlice{BlockArrays.Block{1, Int64}, UnitRange{Int64}}}, false})
    @ Main ./REPL[7]:1
 [31] top-level scope
    @ REPL[10]:1
Some type information was truncated. Use `show(err)` to see complete types.

I think this is more a BlockSparseArrays issue than a TensorAlgebra one, so I put it here.

ogauthe commented 2 months ago

One gets a different error after reshaping the view:

x = reshape(v, (16,1))  # Ok
f(reshape(v, (16,1)))  # ERROR: Not implemented
ERROR: Not implemented
Stacktrace:
  [1] error(s::String)
    @ Base ./error.jl:35
  [2] sparse_storage(a::Base.ReshapedArray{Float64, 2, SubArray{…}, Tuple{…}})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/interface.jl:6
  [3] sparse_storage(a::PermutedDimsArray{Float64, 2, (1, 2), (1, 2), Base.ReshapedArray{Float64, 2, SubArray{…}, Tuple{…}}})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/wrappers.jl:12
  [4] storage_indices(a::PermutedDimsArray{Float64, 2, (1, 2), (1, 2), Base.ReshapedArray{Float64, 2, SubArray{…}, Tuple{…}}})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/indexing.jl:41
  [5] stored_indices(a::PermutedDimsArray{Float64, 2, (1, 2), (1, 2), Base.ReshapedArray{Float64, 2, SubArray{…}, Tuple{…}}})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/interface_optional.jl:26
  [6] _broadcast_getindex_evalf
    @ ./broadcast.jl:709 [inlined]
  [7] _broadcast_getindex
    @ ./broadcast.jl:682 [inlined]
  [8] (::Base.Broadcast.var"#31#32"{Base.Broadcast.Broadcasted{…}})(k::Int64)
    @ Base.Broadcast ./broadcast.jl:1118
  [9] ntuple
    @ ./ntuple.jl:48 [inlined]
 [10] copy
    @ ./broadcast.jl:1118 [inlined]
 [11] materialize
    @ ./broadcast.jl:903 [inlined]
 [12] promote_stored_indices(as::PermutedDimsArray{Float64, 2, (1, 2), (1, 2), Base.ReshapedArray{Float64, 2, SubArray{…}, Tuple{…}}})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:74
 [13] sparse_map_stored!(f::Function, a_dest::Matrix{Float64}, as::PermutedDimsArray{Float64, 2, (1, 2), (1, 2), Base.ReshapedArray{…}})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:80
 [14] sparse_map!(::Base.Broadcast.DefaultArrayStyle{…}, f::Function, a_dest::Matrix{…}, as::PermutedDimsArray{…})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:101
 [15] sparse_map!(f::Function, a_dest::Matrix{Float64}, as::PermutedDimsArray{Float64, 2, (1, 2), (1, 2), Base.ReshapedArray{…}})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:93
 [16] sparse_copyto!(dest::Matrix{Float64}, src::PermutedDimsArray{Float64, 2, (1, 2), (1, 2), Base.ReshapedArray{…}})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/copyto.jl:8
 [17] sparse_permutedims!(dest::Matrix{Float64}, src::Base.ReshapedArray{Float64, 2, SubArray{…}, Tuple{…}}, perm::Tuple{Int64, Int64})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/copyto.jl:13
 [18] permutedims!
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:97 [inlined]
 [19] permutedims
    @ ./permuteddimsarray.jl:145 [inlined]
 [20] _permutedims
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/BaseExtensions/permutedims.jl:16 [inlined]
 [21] fusedims(a::Base.ReshapedArray{…}, blockedperm::NDTensors.TensorAlgebra.BlockedPermutation{…})
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/fusedims.jl:67
 [22] contract!(alg::NDTensors.BackendSelection.Algorithm{…}, a_dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, biperm_dest::NDTensors.TensorAlgebra.BlockedPermutation{…}, a1::Base.ReshapedArray{…}, biperm1::NDTensors.TensorAlgebra.BlockedPermutation{…}, a2::Matrix{…}, biperm2::NDTensors.TensorAlgebra.BlockedPermutation{…}, α::Bool, β::Bool)
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract_matricize/contract.jl:16
 [23] contract(alg::NDTensors.BackendSelection.Algorithm{…}, biperm_dest::NDTensors.TensorAlgebra.BlockedPermutation{…}, a1::Base.ReshapedArray{…}, biperm1::NDTensors.TensorAlgebra.BlockedPermutation{…}, a2::Matrix{…}, biperm2::NDTensors.TensorAlgebra.BlockedPermutation{…}, α::Bool; kwargs::@Kwargs{})
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:118
 [24] contract(alg::NDTensors.BackendSelection.Algorithm{…}, biperm_dest::NDTensors.TensorAlgebra.BlockedPermutation{…}, a1::Base.ReshapedArray{…}, biperm1::NDTensors.TensorAlgebra.BlockedPermutation{…}, a2::Matrix{…}, biperm2::NDTensors.TensorAlgebra.BlockedPermutation{…}, α::Bool)
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:107
 [25] contract(alg::NDTensors.BackendSelection.Algorithm{…}, labels_dest::Tuple{…}, a1::Base.ReshapedArray{…}, labels1::Tuple{…}, a2::Matrix{…}, labels2::Tuple{…}, α::Bool; kwargs::@Kwargs{})
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:88
 [26] contract(alg::NDTensors.BackendSelection.Algorithm{…}, labels_dest::Tuple{…}, a1::Base.ReshapedArray{…}, labels1::Tuple{…}, a2::Matrix{…}, labels2::Tuple{…}, α::Bool)
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:77
 [27] contract(alg::NDTensors.BackendSelection.Algorithm{…}, a1::Base.ReshapedArray{…}, labels1::Tuple{…}, a2::Matrix{…}, labels2::Tuple{…}, α::Bool; kwargs::@Kwargs{})
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:45
 [28] contract
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:35 [inlined]
 [29] #contract#31
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:32 [inlined]
 [30] contract (repeats 2 times)
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:23 [inlined]
 [31] f(a::Base.ReshapedArray{Float64, 2, SubArray{…}, Tuple{…}})
    @ Main ./REPL[9]:1
 [32] top-level scope
    @ REPL[14]:1
Some type information was truncated. Use `show(err)` to see complete types.
mtfishman commented 2 months ago

Thanks for the reports. You may already know, but for the time being you can always call copy(...) to instantiate those views. Of course, a more permanent solution is to fix those operations so you don't have to call copy(...).
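For example (a rough sketch; whether copy fully materializes a given wrapper depends on the version):

using NDTensors.BlockSparseArrays: Block, BlockSparseArray
a = BlockSparseArray{Float64}([2, 2], [2, 2])
b = copy(@view a[Block(1, 1)])  # materializes the block view as a plain Matrix{Float64}
# b can then be passed to operations that don't yet handle the view wrapper types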

mtfishman commented 2 months ago

Something else to consider trying is @strided from Strided.jl, which flattens nested wrappers down into a StridedView, a thin wrapper that just holds the contiguous data as well as strides into that data:

julia> using Strided

julia> a = @view randn(2, 2)[1:2, 1:2]
2×2 view(::Matrix{Float64}, 1:2, 1:2) with eltype Float64:
 -1.24471    0.829222
  0.0962156  0.167975

julia> s = @strided a
2×2 StridedView{Float64, 2, Matrix{Float64}, typeof(identity)}:
 -1.24471    0.829222
  0.0962156  0.167975

julia> strides(s)
(1, 2)

julia> a = reshape(a, 2, 2)
2×2 reshape(view(::Matrix{Float64}, 1:2, 1:2), 2, 2) with eltype Float64:
 -1.24471    0.829222
  0.0962156  0.167975

julia> s = @strided a
2×2 StridedView{Float64, 2, Matrix{Float64}, typeof(identity)}:
 -1.24471    0.829222
  0.0962156  0.167975

julia> strides(s)
(1, 2)

So you can see that Base Julia keeps adding layers of wrapper types, which makes it difficult for us and for Base operations to dispatch properly and recognize that the arrays are all strided underneath. @strided wraps the array in a single StridedView wrapper, which circumvents that issue.

So you could perform some slicing/reshaping operations and then, before calling TensorAlgebra.contract, convert the wrapped array to a StridedView with @strided (or first convert to a StridedView with @strided and then perform the slicing/reshaping operations, since Strided.jl will try to preserve the StridedView by just updating the strides). I would be curious if TensorAlgebra.contract "just works" when a StridedView is passed to it. I think it will, since it only relies on a few operations like permutedims, reshape, and matrix/vector multiplication through mul!, and I know those are all defined for StridedView in Strided.jl.
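For example, something along these lines (an untested sketch; it assumes TensorAlgebra.contract accepts a StridedView like any other AbstractArray):

using Strided: @strided
using NDTensors.TensorAlgebra: contract
a = reshape(view(randn(4, 4), 1:4, 1:2), 2, 4)  # ReshapedArray wrapping a SubArray
b = randn(4, 3)
s = @strided a  # flatten the wrappers into a single StridedView
contract(s, (1, 2), b, (2, 3))  # contract over the shared label 2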

mtfishman commented 2 months ago

I realized that @strided may not know how to handle slicing with Block, but it should be easy enough to overload StridedViews.StridedView(::SubArray{<:Any,<:Any,<:BlockSparseArray,<:BlockSlice}) (or something like that) to make that work.
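A very rough sketch of what that overload might look like (hypothetical code: it relies on internal fields of BlockSlice and Block, and is only valid when the viewed block is already stored):

using StridedViews: StridedViews, StridedView
using BlockArrays: BlockSlice
using NDTensors.BlockSparseArrays: BlockSparseArray, blocks

function StridedViews.StridedView(
  a::SubArray{<:Any,2,<:BlockSparseArray,<:Tuple{BlockSlice,BlockSlice}}
)
  # hypothetical: extract the block position from each BlockSlice
  i, j = parentindices(a)[1].block.n[1], parentindices(a)[2].block.n[1]
  return StridedView(blocks(parent(a))[i, j])  # assumes this block is stored
end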

ogauthe commented 2 months ago

Thanks for the tip. @strided can be used with BlockArrays.BlockArray or BlockArrays.BlockedArray, but indeed a bit of work is needed to use it with BlockSparseArrays.BlockSparseArray.

ba = BlockArrays.BlockArray{Float64}(undef, [2, 1], [2, 1])
v = view(ba, BlockArrays.Block(1, 1))
s = @strided v
println(strides(s))  # (1, 2)

blocked = BlockArrays.PseudoBlockArray(ones((4, 4)), [2, 2], [2, 2])
v = view(blocked, BlockArrays.Block(1, 1))
s = @strided v
println(strides(s))  # (1, 4)

g = GradedAxes.gradedrange([U1(1) => 2])
bsa = BlockSparseArrays.BlockSparseArray{Float64}(g, g)
v = view(bsa, BlockArrays.Block(1,1))
@strided v  # MethodError
mtfishman commented 2 months ago

That would be good to fix, since I think it could help a lot with these kinds of cases; we're using it right now with a lot of success in the most technical parts of the current abelian index combiner/fusion code in NDTensors, in order to permute slices of blocks. Converting to StridedView has the added advantage that Jutho wrote a custom permutedims(::StridedView, perm) implementation that is faster than Julia's Base permutedims(::AbstractArray, perm), and on top of that it can fuse additions/subtractions of multiple permuted arrays, along with applications of elementwise functions, into a single operation without intermediates. Additionally, it is multithreaded, while the Base version is not.

mtfishman commented 2 months ago

There may still be an issue around how to handle blocks that aren't instantiated/stored yet; maybe we need to rethink that design a little bit. One possibility would be that in certain operations (such as inside of @strided), a block view @view a[Block(1, 1)] whose block doesn't exist yet could instantiate the block and then take a view of the newly instantiated block. That ensures the view really points to a chunk of data that exists in the BlockSparseArray. I don't particularly like that approach: it may be too tricky or surprising, since it modifies a in-place, and it seems arbitrary which operations instantiate blocks and which don't. But it is a starting point for thinking about a better design.

Probably it is better to make that behavior opt-in, in which case we could define a convenient syntax for it, for example:

# View version
@view! a[Block(1, 1)]
# Or equivalently:
view!(a, Block(1, 1))

which does:

blocks(a)[1, 1] = blocks(a)[1, 1]  # instantiates Block(1, 1) if it isn't stored yet
blocks(a)[1, 1]                    # returns the now-stored block data

i.e. it first sets the block and then returns a view of that block data directly. That is similar to the function Base.get!.
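Generalizing that pattern, a minimal sketch of view! could look like this (illustration only, not the actual implementation; it uses Block's internal index tuple n):

using BlockArrays: Block
using NDTensors.BlockSparseArrays: blocks

function view!(a::AbstractArray, b::Block)
  I = Int.(b.n)  # the block position along each dimension
  blocks(a)[I...] = blocks(a)[I...]  # getindex returns a zero block if unstored; setindex! stores it
  return blocks(a)[I...]  # the now-stored block data
end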

mtfishman commented 2 months ago

@emstoudenmire I think this is related to a discussion we had about defining a DefaultDictionary type similar to DataStructures.DefaultDict and using that as the data structure of SparseArrayDOK, since SparseArrayDOK is basically just a dictionary mapping stored array indices to values, along with a default zero value and an overall size. This could probably be implemented at the level of SparseArrayDOKs/SparseArrayInterface through a function getindex!, which returns the value if it is stored and otherwise sets the value to the default zero element of the sparse array and returns that zero element. A simple implementation would be:

function getindex!(a::AbstractArray, index...)
  value = a[index...]  # the stored value, or the default zero if not stored
  a[index...] = value  # ensure the value is now stored
  return value
end

This works because the first call to a[index...] either gets the value if it is stored or returns the default zero value for that position in the array.

It could be nice to have a macro syntax for getindex!, such as @get! a[index...], @set! a[index...], or @default! a[index...]; I'm having trouble coming up with a good name for that.
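With the definition above, usage could look like this (a sketch, assuming getindex! is defined as proposed):

using NDTensors.BlockSparseArrays: BlockSparseArray, blocks
using NDTensors.SparseArrayInterface: stored_indices

a = BlockSparseArray{Float64}([2, 2], [2, 2])
x = getindex!(blocks(a), 1, 1)  # returns the zero block for Block(1, 1) and stores it
stored_indices(blocks(a))  # now includes CartesianIndex(1, 1)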

emstoudenmire commented 2 months ago

It would be a really useful feature. For an interface, what about just:

x = get!(a, i)

which is very similar to the existing get! method but lets you opt out of providing a default value, since the default has already been pre-defined. Would that cause any ambiguity issues with the existing get! method?

mtfishman commented 2 months ago

That's definitely an interface we could consider. I don't think it technically causes an ambiguity issue, though we would not want to overload Base.get!(a::AbstractArray, i), since that would be type piracy; we could only define it on our own types, which maybe makes it a bit less powerful/general.

mtfishman commented 2 months ago

I also worry it is a slight abuse of notation, since it has a slightly different meaning from Base.get!.

Interestingly, Base.getindex(d::DataStructures.DefaultDict, key) is actually defined as get!(d, key, default_value(d)), i.e. https://github.com/JuliaCollections/DataStructures.jl/blob/v0.18.15/src/default_dict.jl#L63. That seems pretty extreme/opinionated to me: in the case of SparseArrayDOK, a broadcasting or reduction operation that touches all of the elements would then instantiate every element of the array in memory, i.e. make it effectively dense. That would be a big issue for large arrays, since you could easily run out of memory by accident.
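To illustrate the footgun (this follows from the definition linked above; DataStructures v0.18):

julia> using DataStructures: DefaultDict

julia> d = DefaultDict{Int,Float64}(0.0);

julia> d[1]  # getindex calls get!, so the default value gets stored
0.0

julia> length(d)  # the key now exists
1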

mtfishman commented 2 months ago

Another consideration would be to define a more general macro @!, which converts all operations it sees to their in-place versions. That is similar to this package: https://github.com/simonbyrne/InplaceOps.jl.

So then the syntax would be:

@! a[i, j] # getindex!(a, i, j)
@! @view a[i, j] # view!(a, i, j)
@! view(a, i, j) # view!(a, i, j)

That would be a compelling reason to use getindex!, since then it is clear at the syntax level that it is the in-place version of getindex, and the macro could even take advantage of that by simply changing every function f(...) it sees to f!(...).
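A toy sketch of that kind of rewriting (using the placeholder name @bang, since an operator-named macro like @! would need special parser support):

macro bang(ex)
  if ex isa Expr && ex.head === :ref
    # a[i, j] -> getindex!(a, i, j)
    return esc(Expr(:call, :getindex!, ex.args...))
  elseif ex isa Expr && ex.head === :call
    # f(args...) -> f!(args...), e.g. view(a, i, j) -> view!(a, i, j)
    return esc(Expr(:call, Symbol(ex.args[1], "!"), ex.args[2:end]...))
  end
  return error("@bang: unsupported expression")
end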

That would be useful in other places as well. For example, when constructing OpSums, we could use @! annotations to turn this out-of-place version:

o = OpSum()
for j in 1:10
  o += "X", j
end

into an in-place version:

o = OpSum()
@! for j in 1:10
  o += "X", j
end
emstoudenmire commented 2 months ago

I like the idea of that macro for things like OpSum. Is the name @! already emerging as a standard, or what about @inplace, which would be more self-documenting?

For the specific case of getting array/dictionary/tensor values, I wonder if the notation becomes too terse: would a reader of the code understand that the "in-place-ness" here means a default will be set if the element/block is missing, as opposed to more general in-place functions that modify data for other reasons (e.g. sort!(arr), where the in-place operation that happens is more obvious)? The contrast I'm drawing is with a more descriptive name like @maybe_grow or @get_or_set_default.

mtfishman commented 2 months ago

This package: https://github.com/davidavdav/InplaceLinalg.jl uses @inplace. I was also considering a name like @maybe_set, but I was trying to come up with something shorter and more general. ! is of course standard notation for in-place operations, so @! seems pretty self-explanatory to me, but I could see an argument for using a more descriptive name.

mtfishman commented 2 months ago

For now, though, we could go with view! and getindex! for the sake of expediency, and think about more convenient syntax built on top of them later.

ogauthe commented 2 months ago

issue: there are compatibility problems between BlockSparseArrays and BlockArrays v1.1.

I upgraded to v1.1 on the non-abelian_fusion branch; the CI lists the failing tests: https://github.com/ITensor/ITensors.jl/actions/runs/9568584523/job/26379147331?pr=1363

ogauthe commented 2 months ago

There is still an issue with views of blocks, on main at ab8a59e20e4f8ac009a8234aced2e805b9f253fd:

g = GradedAxes.gradedrange(['a' => 1])
bsa = BlockSparseArrays.BlockSparseArray{Float64}(g, g)
bsa[1, 1] = 1.0
b1 = view(bsa, BlockArrays.Block(1,1))  # block b1 has been initialized before
TensorAlgebra.contract(b1, (1, 2), ones((1, 1)), (2, 3))  # ArgumentError
ERROR: ArgumentError: No fallback for applying `zerovector!` to (values of) type `NDTensors.BlockSparseArrays.BlockSparseStorage{NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}}}, Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}}}` could be determined
Stacktrace:
  [1] zerovector!(x::NDTensors.BlockSparseArrays.BlockSparseStorage{NDTensors.BlockSparseArrays.BlockSparseArray{…}})
    @ VectorInterface ~/.julia/packages/VectorInterface/1L754/src/fallbacks.jl:46
  [2] sparse_zerovector!(a::NDTensors.BlockSparseArrays.BlockSparseArray{…})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/vectorinterface.jl:26
  [3] sparse_zero!(a::NDTensors.BlockSparseArrays.BlockSparseArray{…})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/base.jl:64
  [4] sparse_map_stored!(f::Function, a_dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, as::PermutedDimsArray{…})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:79
  [5] sparse_map!(::Base.Broadcast.DefaultArrayStyle{…}, f::Function, a_dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, as::PermutedDimsArray{…})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:101
  [6] sparse_map!(f::Function, a_dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, as::PermutedDimsArray{…})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:93
  [7] sparse_copyto!(dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, src::PermutedDimsArray{…})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/copyto.jl:8
  [8] sparse_permutedims!(dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, src::Base.ReshapedArray{…}, perm::Tuple{…})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/copyto.jl:13
  [9] permutedims!(a_dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, a_src::Base.ReshapedArray{…}, perm::Tuple{…})
    @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:97
 [10] _permutedims!(a_dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, a_src::Base.ReshapedArray{…}, perm::Tuple{…})
    @ NDTensors.TensorAlgebra.BaseExtensions ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/BaseExtensions/permutedims.jl:6
 [11] splitdims!(a_dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, a::Base.ReshapedArray{…}, blockedperm::NDTensors.TensorAlgebra.BlockedPermutation{…})
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/splitdims.jl:66
 [12] contract!(alg::NDTensors.BackendSelection.Algorithm{…}, a_dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, biperm_dest::NDTensors.TensorAlgebra.BlockedPermutation{…}, a1::SubArray{…}, biperm1::NDTensors.TensorAlgebra.BlockedPermutation{…}, a2::Matrix{…}, biperm2::NDTensors.TensorAlgebra.BlockedPermutation{…}, α::Bool, β::Bool)
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract_matricize/contract.jl:19
 [13] contract(alg::NDTensors.BackendSelection.Algorithm{…}, biperm_dest::NDTensors.TensorAlgebra.BlockedPermutation{…}, a1::SubArray{…}, biperm1::NDTensors.TensorAlgebra.BlockedPermutation{…}, a2::Matrix{…}, biperm2::NDTensors.TensorAlgebra.BlockedPermutation{…}, α::Bool; kwargs::@Kwargs{})
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:118
 [14] contract(alg::NDTensors.BackendSelection.Algorithm{…}, biperm_dest::NDTensors.TensorAlgebra.BlockedPermutation{…}, a1::SubArray{…}, biperm1::NDTensors.TensorAlgebra.BlockedPermutation{…}, a2::Matrix{…}, biperm2::NDTensors.TensorAlgebra.BlockedPermutation{…}, α::Bool)
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:107
 [15] contract(alg::NDTensors.BackendSelection.Algorithm{…}, labels_dest::Tuple{…}, a1::SubArray{…}, labels1::Tuple{…}, a2::Matrix{…}, labels2::Tuple{…}, α::Bool; kwargs::@Kwargs{})
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:88
 [16] contract(alg::NDTensors.BackendSelection.Algorithm{…}, labels_dest::Tuple{…}, a1::SubArray{…}, labels1::Tuple{…}, a2::Matrix{…}, labels2::Tuple{…}, α::Bool)
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:77
 [17] contract(alg::NDTensors.BackendSelection.Algorithm{…}, a1::SubArray{…}, labels1::Tuple{…}, a2::Matrix{…}, labels2::Tuple{…}, α::Bool; kwargs::@Kwargs{})
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:45
 [18] contract
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:35 [inlined]
 [19] #contract#31
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:32 [inlined]
 [20] contract
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:23 [inlined]
 [21] contract(a1::SubArray{…}, labels1::Tuple{…}, a2::Matrix{…}, labels2::Tuple{…})
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:23
 [22] top-level scope
    @ REPL[52]:1
Some type information was truncated. Use `show(err)` to see complete types.

Note that this is fine:

bsa2 = BlockSparseArrays.BlockSparseArray{Float64}(g, g)
b2 = BlockSparseArrays.view!(bsa2, BlockArrays.Block(1, 1))
TensorAlgebra.contract(b2, (1, 2), ones((1, 1)), (2, 3))  # Ok

I am surprised that view and view! have different output types; I was expecting view! to be "initialize the block if it is missing and return a view of it".

mtfishman commented 2 months ago

Yes, I haven't fixed the view/@view version yet; you can see it is still listed as an issue in the first post.

The latest design of view/@view for taking a view of a Block was implemented in https://github.com/ITensor/ITensors.jl/pull/1481. As of that PR, it creates a SubArray wrapping the block sparse array along with the block being viewed. That design was chosen because it makes it easier to handle the fact that blocks may or may not exist: if the block you are taking a view of doesn't exist, there isn't any data to access or return, so if you want to be able to modify that block, the view needs to be a wrapper around the block sparse array that points, in one way or another, to the location of the non-existent block.

view!/@view! can always output the block data directly, since it instantiates the block if it doesn't exist, so the output type can be simpler (i.e. it can just be an Array in standard cases).

It would be possible for view/@view to output either the block data directly if the block exists, or a wrapper around the array pointing to the location of the non-existent block if it doesn't; I'm trying that out now. I hadn't done that so far since I didn't like the idea that view/@view might output different types depending on whether the Block is stored, which seemed a bit weird to me.
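To illustrate the current difference (a sketch; the exact wrapper types vary across versions):

using BlockArrays: Block
using NDTensors.BlockSparseArrays: BlockSparseArray, view!

a = BlockSparseArray{Float64}([2, 2], [2, 2])
v = @view a[Block(1, 1)]   # SubArray wrapping the whole BlockSparseArray; the block may stay unstored
w = view!(a, Block(1, 1))  # plain block data (a Matrix{Float64} here); instantiates the block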

ogauthe commented 2 months ago

I see, thank you for the explanation. I guess I should just use view! everywhere.

I was confused by the error message, but if this is expected to work at some point, there is no need to change it.

mtfishman commented 2 months ago

Right, the goal of view!/@view! was to be a stopgap that outputs something more likely to work across a variety of functions (at the expense of allocating unallocated blocks that are accessed), while we continue to improve support for the more complicated types output by view/@view.

ogauthe commented 2 months ago

issue: BlockSparseArrays.block_stored_indices does not transpose indices for a LinearAlgebra.Adjoint{T, NDTensors.BlockSparseArrays.BlockSparseArray}. This is the origin of the display bug for an adjoint, and it also triggers issues in my own code.

gr = GradedAxes.dual(GradedAxes.gradedrange([U1(1) => 1]))
gc = GradedAxes.gradedrange([U1(0) => 1, U1(1) => 1])
m = BlockSparseArrays.BlockSparseArray{Float64}(gr, gc)
m[1, 2] = 1

existing_blocks = BlockSparseArrays.block_stored_indices(m)
@show existing_blocks  # {CartesianIndex(1, 2) = Block(1, 2)}
col_sectors = GradedAxes.blocklabels(axes(m, 2))
existing_sectors = [col_sectors[it[2]] for it in eachindex(existing_blocks)]  # Ok

mh = adjoint(m)   # display error, due to present issue
existing_blocks = BlockSparseArrays.block_stored_indices(mh)
@show existing_blocks  # {CartesianIndex(1, 2) = Block(2, 1)}  THIS IS WRONG
col_sectors = GradedAxes.blocklabels(axes(mh, 2))
existing_sectors = [col_sectors[it[2]] for it in eachindex(existing_blocks)]  # raises BoundsError
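As a stopgap, the Block values in the output above (which are transposed correctly, unlike the CartesianIndex keys) can be used instead; a sketch relying on Block's internal index tuple n:

# iterate over the dictionary values, i.e. the (correct) Blocks
existing_sectors = [col_sectors[b.n[2]] for b in existing_blocks]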