Closed: ewenlawrence closed this 4 months ago
Hi @ewenlawrence ,
The recent changes look really good, thanks for the continued work on this.
(I hadn't realised the `values`, `counts` format of the degeneracy map variables lines up with the output of numpy functions like `unique`; that's nice to have consistent.)
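For context, a minimal sketch of the `(values, inverse, counts)` triple that `np.unique` returns, which is the same layout as the degeneracy map variables discussed here (the array contents and variable names are illustrative, not taken from OQuPy):

```python
import numpy as np

# Hypothetical flattened eigenvalue differences for a 2x2 coupling operator.
differences = np.array([0.0, 2.0, -2.0, 0.0])

# With both flags set, np.unique returns (values, inverse_map, counts):
# the sorted unique values, the index of each element in that sorted
# array, and how often each unique value occurs.
values, inverse_map, counts = np.unique(
    differences, return_inverse=True, return_counts=True)

print(values)       # [-2.  0.  2.]
print(inverse_map)  # [1 2 0 1]
print(counts)       # [1 2 1]
```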
Regarding the tests, I presume there's going to be an example where the coupling operator does have degeneracies, and the output of `unique=True` is then tested against `unique=False`?
My only other thought is that it would be good to mention somewhere (e.g. in the first tutorial) what the option `unique` does, even if we don't need an explicit example demonstrating it.
Perhaps mention Gerald and me once you think things are ready for a final check-through.
Hi @piperfw,
Thanks for the comments, I agree that an extra test and mentioning in the tutorial would be good. I'll get around to that as soon as possible and mention you both when I push the changes to the PR.
Hi @piperfw and @gefux,
I've made the changes Piper suggested. I went with adding a small note to the quickstart tutorial mentioning that the option should only affect the speed of the computation, and explaining the current default behaviour of it being off. I'm not sure the quickstart is the correct place to explain in more detail what is occurring in the backend; any suggestions on that would be appreciated.
Another thing I realised is maybe this pull request should be going into the dev-0.5 branch rather than main, so that may need to be changed.
Hi, I've changed the target to `dev-0.5` and ticked off `modules/api.rst` on the checklist as they don't need updating.
For my own understanding, what exactly are the degeneracies being checked in either direction? For a `sigma_z` (2x2) coupling operator, `_west_degeneracy_map=[1, 2, 0, 1]`; does that mean even in this simple case there is some dimensional reduction (4 to 3)?
I am wondering if the action can be summarised simply in a sentence or two for the tutorial. I think a brief (but precise) statement of what the option does would be better than just saying it may reduce computation time, so test it, e.g.:

Note with the option `unique=True` an attempt is made to simplify the calculation by checking for degeneracies in the differences (and sums?) of the eigenvalues of the bath coupling operator. This feature is currently in testing, but we recommend setting it on as it may significantly decrease the computation time without any loss of accuracy.
Hi,
So the west degeneracy is exactly what you state: the differences between eigenvalues of the coupling operator. In your example of `sigma_z` this is equivalent to saying the action of up -> up is the same as down -> down, noting that up -> down is not the same as down -> up due to a sign difference. As you state, this reduces the internal west bond dimension from 4 to 3.
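A sketch of that counting, assuming the map is built with `np.unique` over the pairwise eigenvalue differences (illustrative only, not OQuPy's actual implementation):

```python
import numpy as np

# sigma_z has eigenvalues +1 and -1.
eigenvalues = np.array([1.0, -1.0])

# Pairwise differences, flattened: up->up, up->down, down->up, down->down.
differences = np.subtract.outer(eigenvalues, eigenvalues).flatten()
# -> [0., 2., -2., 0.]: up->up and down->down coincide, while
#    up->down and down->up differ by a sign.

unique_values, west_map = np.unique(differences, return_inverse=True)
print(west_map)            # [1 2 0 1]
print(len(unique_values))  # 3: the west bond dimension drops from 4 to 3
```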
For the north direction, Peter and I are still not 100% clear on what it is checking, but it seems that because time is involved it gets the extra condition of the anticommutator needing simultaneous degeneracy. I would guess this somehow relates to requiring time symmetry going northwards in the tensors, but I really have no way of backing up that statement currently.
Yeah, that would actually work for an explanation in the tutorial. This difference between directions makes me hesitant to give a precise statement, but in both directions it is at least true that the differences in eigenvalues are checked; in the north there is just an extra constraint. The differences in eigenvalues get the general point across for the tutorial, so I will update that now and push it to the PR.
Hi, that's great, thanks for making the changes and the explanation.
The simultaneous degeneracy of the sum and difference of eigenvalues is exactly what I would guess would be required in the other direction, purely from the expression Eq. 12 in the TEMPO paper (note one index involves only the differences in eigenvalues, the other the differences and the sum, in which case up,up is not the same as down,down for this example). I might be tempted to add a comment in the code to this effect; I'll check over everything tomorrow and let you know but everything looks good from here 👍
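A sketch of that stricter check, identifying two legs only when both the difference and the sum of the eigenvalues coincide (again built with `np.unique` purely for illustration):

```python
import numpy as np

eigenvalues = np.array([1.0, -1.0])  # sigma_z

diffs = np.subtract.outer(eigenvalues, eigenvalues).flatten()
sums = np.add.outer(eigenvalues, eigenvalues).flatten()

# Require simultaneous degeneracy of the difference AND the sum.
pairs = np.stack([diffs, sums], axis=1)
unique_pairs, north_map = np.unique(pairs, axis=0, return_inverse=True)

# For sigma_z all four (difference, sum) pairs are distinct, so unlike
# the west direction there is no reduction: up,up is not identified
# with down,down here.
print(len(unique_pairs))  # 4
```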
Had a proper look through things now. I've added more verbose docstrings/comments explaining the degeneracy maps, as in lieu of a separate tutorial/explanation I think this is a good idea.
Regarding the tests, I noticed that neither example has degeneracies in the north direction. I think this is important to include in a test, and further in a case with large degeneracies and hence dimensional reduction. A related point: are we sure `decimal=4` (i.e. 1e-4) is a reasonable level of accuracy for these tests?
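For reference, `decimal=4` in `np.testing.assert_almost_equal` requires `abs(desired - actual) < 1.5 * 10**-4`:

```python
import numpy as np

# A 1e-4 discrepancy passes at decimal=4...
np.testing.assert_almost_equal(1.0000, 1.0001, decimal=4)

# ...but a 2e-4 discrepancy fails, since 2e-4 > 1.5e-4.
try:
    np.testing.assert_almost_equal(1.0000, 1.0002, decimal=4)
except AssertionError:
    print("2e-4 difference exceeds the decimal=4 tolerance")
```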
I added a test with a large number of degeneracies (north and west) in `physics/large_degeneracy_test.py` (the coupling operator is `np.diag([1, 1, -2])`). This follows the same comparison as in `degeneracy_test.py` but for a longer time (`t_end_C=5.0`). It fails with `decimal=4`. I'm not saying this is necessarily an issue, but it doesn't seem sensible to have a physics test that is 'almost failing' (with a smaller end time). The assertion of no loss of accuracy in the tutorial may also need to be revised :).
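For that coupling operator the reduction is substantial in both directions. A quick check, assuming the maps are built from the eigenvalue differences (west) and difference/sum pairs (north) via `np.unique`; this is an illustrative reconstruction, not the actual test code:

```python
import numpy as np

# Eigenvalues of the coupling operator np.diag([1, 1, -2]).
eigenvalues = np.array([1.0, 1.0, -2.0])

diffs = np.subtract.outer(eigenvalues, eigenvalues).flatten()
sums = np.add.outer(eigenvalues, eigenvalues).flatten()

# West: degeneracies in the differences alone.
west_values = np.unique(diffs)
# North: simultaneous degeneracy of differences and sums.
north_values = np.unique(np.stack([diffs, sums], axis=1), axis=0)

print(len(diffs))         # 9 legs before reduction
print(len(west_values))   # 3: west bond dimension 9 -> 3
print(len(north_values))  # 4: north bond dimension 9 -> 4
```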
Hi @piperfw,
Thanks for looking through this and updating the docstrings/comments, those changes look good!
Yeah, that's a good point about there being no north degeneracies. The 4 decimal places is just left over from the previous tests, and I agree I'm not sure whether that is the correct level; I guess that issue is linked to the failing test as well. Even if reducing the bond dimensions leads to no theoretical loss of accuracy, the number of tensor contractions etc. changes, so the numerical error is different. The best solution would be to check the result with large degeneracies against an exact solution so we can be sure of the accuracy. It's also possible the convergence differs between `unique` on and off, which could be coming into play here.
I would suggest, as a temporary solution, we remove the 'no loss of accuracy' claim as you mentioned, and state minimal loss of accuracy instead. That way this can be used as a faster but potentially less accurate feature. Let me know what you think would be the best approach here.
Hi, I agree with this approach. I've rephrased the tutorial text and made it flow better with the discussion of the error, let me know what you think.
I think if we are to make the degeneracy checking default behaviour, then we should have a better idea of what loss of accuracy (if any) it incurs, and comparisons using an exact model seem like a good way to establish this. But for the current implementation I think it is fine, as we don't have any reason to believe it should lead to significant error compared to the original calculation.
I've marked the large degeneracy test to be skipped but left the file for future reference (I'll make an issue for it).
P.S. I'm subscribed to notifications for this PR now apparently, so don't worry about mentions.
Hi @ewenlawrence and @piperfw, It's great to see the progress and the discussion on this -- thank you both! I have a few comments:
Hi @gefux, thanks for the input here. A few responses:
Another test I was considering adding, but haven't decided quite where, is the edge case where the effective dimension of the coupling operator is reduced to 1. I guess this may effectively be covered elsewhere (i.e. coupling of a bath to a one-dimensional system)...
Hi Piper,
That extra test sounds very interesting, and I think it is different enough from coupling to a one-dimensional system, as we can pick a large system Hilbert space whose coupling operator reduces to a single effective dimension. I guess this is still fundamentally a degeneracy test, but it could also go into the bath testing.
Great Ewen, thanks! I did a quick check and I don't think coverage tests are technically needed, in the sense that the coverage % of your fork matches that of OQuPy main exactly, but it is probably good practice to write them anyway in case the coverage currently relies on the physics tests (which may change).
For the one-dimensional coupling, the identity can be used (or a non-Hermitian upper-triangular matrix with a constant diagonal).
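A sketch of why the identity collapses everything to one effective dimension: all its eigenvalues are equal, so every pairwise difference is zero (illustrative `np.unique` construction, not the test itself):

```python
import numpy as np

# Identity coupling operator on a 3-dimensional system Hilbert space:
# eigenvalues are all 1.
eigenvalues = np.ones(3)

diffs = np.subtract.outer(eigenvalues, eigenvalues).flatten()
values, degeneracy_map = np.unique(diffs, return_inverse=True)

print(values)          # [0.]: a single unique difference
print(degeneracy_map)  # [0 0 0 0 0 0 0 0 0]: effective dimension 1
```

An upper-triangular matrix with a constant diagonal behaves the same way, since its eigenvalues are exactly the (equal) diagonal entries.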
Hey both,
I've added some coverage tests that should cover all the degeneracy code, and I also added an identity coupling test for the 1D coupling. Let me know if that is all good, or if there's anything else that needs to be changed.
I'll have a chat with Peter tomorrow about a model with an exact solution we can use as a physics test for the large north degeneracies; if either of you think of one, let me know as well, and we can get that added to resolve issue #115. In addition, I'll start playing around with adding this degeneracy checking to PT-TEMPO this week and keep you both updated with how that goes. But I think we should keep those points for a separate pull request based around issue #115, if you agree?
Hi Ewen,
Coverage all looks fine; I added a simple check that the length of the degeneracy maps equals the dimension squared in `bath_test.py`.
That sounds really good on both fronts and yes I figure the PT-TEMPO version should at least be a separate PR. Keep us posted in #115!
@gefux let us know if you're happy for this to be merged in due course (after cleaning up commits etc.).
- I think that would be fine. An argument to keep it in is that, if it is a feature we think may significantly benefit a large number of typical use cases, then it's a good idea to include it somewhere most visible, on page 1, where new users are most likely to see it. As a compromise (and since it has already been written), how about we keep it for now and review once the tutorial for 'Tutorial to TEMPO parameter choice & the dkmax anomaly' #23 is actually complete? It may serve as a helpful reminder for the person writing that as well, who will presumably want to review the discussion of the parameters in the quickstart tutorial anyway.
Yes, that makes sense to me.
> @gefux let us know if you're happy for this to be merged in due course (after cleaning up commits etc.).
Yes, great. Thumbs up from me!
That's tidied and merged, congratulations @ewenlawrence !
The code adds degeneracy checking functionality for use in TEMPO, which reduces the dimension of the inner legs of the influence functionals. Note this is only implemented for TEMPO and not PT-TEMPO, so it addresses half of issue #29.
Pull Request Check List
- `tox -e py36` (to run `pytest`): the code tests.
- `tox -e style` (to run `pylint`): the code style tests.
- `tox -e docs` (to run `sphinx`): generate the documentation.
- `/docs/pages/modules.rst` has been updated.
- `/docs/pages/api.rst` has been updated.