Closed jofrevalles closed 1 year ago
Merging #34 (c5ed358) into master (f25179f) will decrease coverage by 0.02%. The diff coverage is n/a.
```diff
@@            Coverage Diff            @@
##           master     #34      +/-  ##
=========================================
- Coverage    0.50%    0.48%   -0.02%
=========================================
  Files          10       11       +1
  Lines         596      613      +17
=========================================
  Hits            3        3
- Misses        593      610      +17
```
| Impacted Files | Coverage Δ | |
|---|---|---|
| src/Tenet.jl | 100.00% <0.00%> (ø) | |
| src/Tensor.jl | 0.00% <0.00%> (ø) | |
| src/Quantum.jl | 0.00% <0.00%> (ø) | |
| src/Differentiation.jl | 0.00% <0.00%> (ø) | |
| src/MatrixProductState.jl | 0.00% <0.00%> (ø) | |
| src/Numerics.jl | 0.00% <0.00%> (ø) | |
Okay, fixed:
```julia
julia> using Tenet

julia> using GLMakie; using Makie

julia> tn = TensorNetwork([Tensor(rand(2, 2, 2), (:x, :y, :z)),
                           Tensor(rand(2, 2), (:x, :y)),
                           Tensor(rand(2), (:x,))])
TensorNetwork{Arbitrary}(#tensors=3, #inds=3)

julia> [tensor.labels for tensor in tn.tensors]
3-element Vector{Tuple{Symbol, Vararg{Symbol}}}:
 (:x, :y, :z)
 (:x, :y)
 (:x,)

julia> plot(tn; labels=true)
```
Okay, now it works as expected:
```julia
julia> using Tenet

julia> using GLMakie; using Makie

julia> tn = TensorNetwork([Tensor(rand(2, 2, 2, 2), (:x, :y, :z, :t)),
                           Tensor(rand(2, 2), (:x, :y)),
                           Tensor(rand(2), (:x,))])
TensorNetwork{Arbitrary}(#tensors=3, #inds=4)

julia> plot(tn; labels=true)
```
I have added a dictionary named `opencounter`, which counts the occurrences of each open index so that we can attach each label consecutively.
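The counting idea can be sketched as follows. This is a minimal illustration of the pattern, not the PR's actual code; the variable names besides `opencounter` are hypothetical.

```julia
# Sketch: count how many times each open index has been seen while
# plotting, so each ghost-tensor label gets a consecutive suffix.
opencounter = Dict{Symbol,Int}()
openinds = [:x, :x, :y]  # hypothetical open indices encountered in order
labels = String[]
for ind in openinds
    # increment the per-index counter (0 if unseen so far)
    opencounter[ind] = get(opencounter, ind, 0) + 1
    push!(labels, string(ind, opencounter[ind]))
end
labels  # ["x1", "x2", "y1"]
```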
Summary
In this PR we address issue #32: we fix the plot functions so that the open indices of a
TensorNetwork
can now be visualized. This is done by adding "ghost" tensors of size zero at the edge of each open index.

Example
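The core of the fix is identifying which indices are open, i.e. indices that appear in exactly one tensor, so that a ghost endpoint can be attached to each. A conceptual sketch (not the PR's implementation; the names here are illustrative):

```julia
# Sketch: an index is open if it appears in only one tensor's labels.
tensor_inds = [(:x, :y, :z), (:x, :y), (:x,)]  # labels from the first example

counts = Dict{Symbol,Int}()
for inds in tensor_inds, i in inds
    counts[i] = get(counts, i, 0) + 1
end

# each open index would get a "ghost" vertex in the plot graph
openinds = [i for (i, c) in counts if c == 1]
```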