Closed - @AidanGG closed this pull request 2 weeks ago.
Hello @AidanGG! Thanks for opening this PR. We checked the lines you've touched for PEP 8 issues, and found:
quimb/tensor/tensor_1d.py
- Line 2366:80: E501 line too long (81 > 79 characters)
- Line 2367:80: E501 line too long (80 > 79 characters)
- Line 2387:5: E303 too many blank lines (2)
- Line 2390:80: E501 line too long (87 > 79 characters)
- Line 2494:80: E501 line too long (83 > 79 characters)
- Line 2524:80: E501 line too long (86 > 79 characters)
- Line 2617:80: E501 line too long (80 > 79 characters)
- Line 2916:80: E501 line too long (82 > 79 characters)
- Line 2933:80: E501 line too long (87 > 79 characters)
Merging #60 into develop will decrease coverage by 1.76%. The diff coverage is 14.64%.
@@ Coverage Diff @@
## develop #60 +/- ##
===========================================
- Coverage 86.23% 84.47% -1.77%
===========================================
Files 32 32
Lines 8684 8882 +198
===========================================
+ Hits 7489 7503 +14
- Misses 1195 1379 +184
Impacted Files | Coverage Δ |
---|---|
quimb/tensor/tensor_1d.py | 73.94% <14.64%> (-9.92%) ⬇️ |
quimb/tensor/optimize_pytorch.py | 0.00% <0.00%> (-75.33%) ⬇️ |
quimb/evo.py | 99.20% <0.00%> (+<0.01%) ⬆️ |
quimb/linalg/slepc_linalg.py | 93.51% <0.00%> (+1.17%) ⬆️ |
quimb/tensor/optimize_autograd.py | 87.43% <0.00%> (+50.26%) ⬆️ |
Continue to review the full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 74f946f...25c25a4.
Hi @AidanGG. Possibly I am missing something, but I'm a bit confused about what's been added here. The LPS is a double-row 1D operator (i.e. it shouldn't mix in methods from `TensorNetwork1DVector` or `TensorNetwork1DFlat`), whereas from what I can tell this is maybe something like an MPO.
I think what would be helpful would be an example of the top level functionality and design that would be useful to you. E.g.
```python
lps = qtn.LocallyPurifiedState.rand(100, bond_dim=8)
lps.show()
#  | | | | | |
#  O--O--O--O--O--O--
#  | | | | | |         ... etc
#  O--O--O--O--O--O--
#  | | | | | |
lps.normalize_()
lps.gate_(G, where)
G_expectation = lps.trace()
```
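[Editor's note] A minimal plain-numpy sketch (not quimb API; the single-site tensor and dimensions are illustrative assumptions) of why a purification row like this defines a valid state: contracting one row with its conjugate over the Kraus index always yields a Hermitian, positive-semidefinite density matrix.

```python
import numpy as np

d, k = 2, 3  # physical and Kraus dimensions (illustrative)
rng = np.random.default_rng(0)

# single-site purification tensor A[p, k]: physical index p, Kraus index k
A = rng.normal(size=(d, k)) + 1j * rng.normal(size=(d, k))

# rho = sum_k A[:, k] A[:, k]^dagger  -- contract the Kraus index with the conjugate
rho = np.einsum('pk,qk->pq', A, A.conj())
rho /= np.trace(rho)  # normalize to unit trace

assert np.allclose(rho, rho.conj().T)             # Hermitian
assert np.all(np.linalg.eigvalsh(rho) >= -1e-12)  # positive semidefinite
assert np.isclose(np.trace(rho), 1.0)             # unit trace
```

The same argument applies sitewise to the full chain, which is what makes the purified form convenient for open-system simulation.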
Then start by implementing just the minimal functionality that achieves this (essentially working backwards from the top-level functionality). Having such a practical goal motivates the code and can serve as the initial unit test, etc.
I'm writing some docs at the moment that describe in much greater detail how the various bits of the quimb tensor stuff are designed, which may be helpful, as I appreciate it might not be super clear at the moment!
Yes, I have planned to have the LPS store only one of the two rows. That way, the shape of the LPS resembles an MPO, but only one side of the open indices are physical indices (to which gates can be applied); the other side are Kraus indices, which may be of different sizes (cf. MPOs, where both sides are physical indices of matching sizes, possibly permuted). So in a sense the LPS is closer to an MPS, but with a Kraus index on each tensor, which is why I decided to extend `TensorNetwork1DVector`.
Doing this should allow me to reuse `gate_TN_1D` without any major modifications. When a gate is applied to a single-sided LPS, the whole density matrix is transformed correctly on contracting over the Kraus indices with its conjugate.
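[Editor's note] A quick single-site numpy check (not quimb code; names and dimensions are illustrative) of this claim: applying a gate G only to the physical index of the one stored row, then contracting Kraus indices with the conjugate, reproduces G ρ G†.

```python
import numpy as np

d, k = 2, 3  # physical and Kraus dimensions (illustrative)
rng = np.random.default_rng(1)
A = rng.normal(size=(d, k)) + 1j * rng.normal(size=(d, k))
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))  # arbitrary single-site gate

# apply G only to the physical index of the single stored row
GA = np.einsum('pq,qk->pk', G, A)

# density matrix rebuilt from the transformed purification
rho_from_lps = np.einsum('pk,qk->pq', GA, GA.conj())

# direct transformation of the density matrix: G rho G^dagger
rho = np.einsum('pk,qk->pq', A, A.conj())
rho_direct = G @ rho @ G.conj().T

assert np.allclose(rho_from_lps, rho_direct)
```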
So when dealing with expectation values, I also thought that something like `expec_TN_1D(lps, mpo1, mpo2, lps.H)` would just contract the Kraus indices between `lps` and `lps.H`.
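[Editor's note] The corresponding single-site numpy sketch (illustrative, not the `expec_TN_1D` implementation): sandwiching an observable O between the row and its conjugate, while tracing the Kraus index, agrees with Tr(O ρ).

```python
import numpy as np

d, k = 2, 3  # physical and Kraus dimensions (illustrative)
rng = np.random.default_rng(2)
A = rng.normal(size=(d, k)) + 1j * rng.normal(size=(d, k))
O = rng.normal(size=(d, d))
O = O + O.T  # a Hermitian observable

rho = np.einsum('pk,qk->pq', A, A.conj())

# <O>: contract O between the row and its conjugate, tracing the Kraus index
expec_lps = np.einsum('pk,pq,qk->', A.conj(), O, A)

assert np.isclose(expec_lps, np.trace(O @ rho))
```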
Ah OK, yes, that makes a lot more sense - essentially an MPS with an ancilla index. I think it might be worth being explicit, e.g. in the name, that this class itself is only half the LPS. I say that because it is not just the methods that define the TN object, but also the tensors and network structure it contains.
Thoughts on calling this object `MPSAncillae` or something? Then the LPS could be formed (if necessary) as:
```python
def to_lps(mpsa):
    ket = mpsa
    bra = mpsa.H
    bra.site_ind_id = 'b{}'
    return ket & bra
```
etc. And methods that form this LPS intermediate can be explicitly named as such: `mpsa.to_dense_lps()`.
Hi Johnnie, I'm happy to rename it. I've been slightly busy with other commitments so progress on this PR might be a bit slow, sorry.
Following our discussion on Gitter, I've begun initial work on LPSs (also known as MPDOs). I still need to implement the places where `NotImplementedError` is raised, and I don't yet know how subsystem lognegativity works for LPSs. I believe the top-level module functions and the functions in `TensorNetwork1DVector` work without modifications, but obviously tests are still required for basically everything.