shuaigroup / Renormalizer

Quantum dynamics package based on tensor network states
https://shuaigroup.github.io/Renormalizer/
Apache License 2.0

[WIP] Basic tree tensornetwork algorithm #149

Closed liwt31 closed 1 year ago

liwt31 commented 1 year ago

This is more like a preview than an actual PR. Feedback is welcome. Implemented features:

Please see the test cases for how to use them.

Obviously, a lot of important features remain to be implemented. The following have higher priority:

After that, we can do a proper comparison/benchmark against (ML-)MCTDH.

In the longer term, I'm going to look at:

Of course, one can imagine many other features, such as excited-state optimization algorithms, and technical enhancements, such as dumping to or loading from disk. In the foreseeable future, I'm afraid I'll only work on the features that enable finite-temperature mobility calculations, due to my limited time.

codecov[bot] commented 1 year ago

Codecov Report

Patch coverage: 90.78%; project coverage change: +0.96% :tada:

Comparison is base (905b678) 85.25% compared to head (df816a5) 86.21%.

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##           master     #149      +/-   ##
==========================================
+ Coverage   85.25%   86.21%   +0.96%
==========================================
  Files         105      116      +11
  Lines        9989    11687    +1698
==========================================
+ Hits         8516    10076    +1560
- Misses       1473     1611     +138
```

| Impacted Files | Coverage Δ | |
|---|---|---|
| renormalizer/mps/mpdm.py | `90.00% <ø> (-0.11%)` | :arrow_down: |
| renormalizer/lib/davidson/logger.py | `55.28% <55.28%> (ø)` | |
| renormalizer/lib/davidson/davidson.py | `70.98% <68.60%> (+0.56%)` | :arrow_up: |
| renormalizer/tn/gs.py | `70.78% <70.78%> (ø)` | |
| renormalizer/tn/treebase.py | `89.28% <89.28%> (ø)` | |
| renormalizer/tn/time_evolution.py | `92.39% <92.39%> (ø)` | |
| renormalizer/tn/tree.py | `97.60% <97.60%> (ø)` | |
| renormalizer/tn/hop_expr.py | `97.91% <97.91%> (ø)` | |
| renormalizer/tn/node.py | `98.68% <98.68%> (ø)` | |
| renormalizer/\_\_init\_\_.py | `95.00% <100.00%> (ø)` | |
| ... and 12 more | | |


jjren commented 1 year ago

Excellent PR! The tree tensor network operator part deserves a new paper. I will quickly review the current code and leave some suggestions.

liwt31 commented 1 year ago

CircleCI is failing again. Will migrate to GitHub Actions completely.

liwt31 commented 1 year ago

Added support for multiple basis sets on the same site. This should be useful for finite-temperature algorithms, mode combination, and so on.

Will stick to the original roadmap after this "digression".

liwt31 commented 1 year ago

Add QN support for TTO/TTS and random TTS generation. Also, add the Davidson eigensolver and the primme eigensolver (configuration not supported yet).

Somehow the eigenvalue problem in TTN is more difficult to solve. The scipy solver fails to converge in CI, and the Davidson solver converges to a wrong result on my machine, starting from multiple different initial guesses. The primme solver works pretty well, though. The small $M$ could be the problem, but the scaling over $M$ is not very favorable, so higher $M$ has not been tested yet.
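As an aside, the interface these solvers see can be sketched with SciPy's matrix-free machinery. The matrix below is a toy Hermitian stand-in for the effective local Hamiltonian, not Renormalizer's actual operator:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

# Toy stand-in for the effective local Hamiltonian in a sweep: a
# Hermitian matrix applied matrix-free, as the Davidson/primme/scipy
# solvers would see it.
rng = np.random.default_rng(42)
n = 64
a = rng.standard_normal((n, n))
h = (a + a.T) / 2  # Hermitian

op = LinearOperator((n, n), matvec=lambda x: h @ x, dtype=h.dtype)

# Lowest eigenpair via implicitly restarted Lanczos (scipy).
# primme.eigsh has a nearly identical call signature and, as noted
# in the thread, tends to be more robust on hard cases.
w, v = eigsh(op, k=1, which="SA")
exact = np.linalg.eigvalsh(h)[0]
assert abs(w[0] - exact) < 1e-8
```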

liwt31 commented 1 year ago

> Add QN support for TTO/TTS and random TTS generation. Also, add the Davidson eigensolver and the primme eigensolver (configuration not supported yet).
>
> Somehow the eigenvalue problem in TTN is more difficult to solve. The scipy solver fails to converge in the CI, and the Davidson solver will converge to a wrong result on my machine, starting from multiple different initial guesses. The primme solver works pretty well though. The small $M$ could be the problem, but the scaling over $M$ is not very favorable so higher $M$ is not tested yet.

It turns out to be a bug: the environment was not properly canonicalized. Fixed in the latest commit.
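For illustration only (the tensor shapes and index names below are hypothetical, not the actual tree code), the canonicalization condition that was violated amounts to an isometry check on each environment tensor:

```python
import numpy as np

# A hypothetical site tensor with shape (parent_bond, child_bond, physical).
rng = np.random.default_rng(0)
t = rng.standard_normal((4, 3, 2))

# Canonicalize toward the parent bond: QR on the matrix whose rows
# group all non-parent indices.
mat = t.transpose(1, 2, 0).reshape(6, 4)   # (child*phys, parent)
q, r = np.linalg.qr(mat)                   # q has orthonormal columns
canon = q.reshape(3, 2, 4).transpose(2, 0, 1)

# The isometry condition a properly canonicalized environment must
# satisfy: contracting over all non-parent indices gives identity.
check = np.einsum("acp,bcp->ab", canon, canon)
assert np.allclose(check, np.eye(4))
```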

liwt31 commented 1 year ago

Add VMF (imaginary) time evolution for tree tensor networks. QN for VMF has not been implemented yet. Matrix unfolding is also not implemented. I tested various algorithms for the regularization of $S$; adding $\epsilon$ to the diagonal elements and then calling scipy.linalg.pinv seems to be the most robust one. The following will quickly result in NaN:

  • Add $\epsilon \exp(-S/\epsilon)$ to $S$ rather than $\epsilon I$
  • Invert the regularized $S$ by inverting its eigenvalues
  • Invert the regularized $S$ by scipy.linalg.inv

That said, the methods above yield a $\tilde{S}^{-1}$ that differs only very slightly (by around 1e-10) from the implemented approach. Manually setting the first step size for the RK solver also helps sometimes. Overall, the problem is rather complicated.
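A minimal sketch of two of the regularization routes, on a deliberately well-conditioned toy $S$ where all variants agree (the instabilities described above only show up for a near-singular $S$):

```python
import numpy as np
from scipy.linalg import pinv, inv, expm

# Toy overlap-type matrix S, standing in for the density matrix in
# the VMF equations; + eye keeps it well-conditioned for this demo.
rng = np.random.default_rng(1)
b = rng.standard_normal((5, 5))
s = b @ b.T + np.eye(5)

eps = 1e-10

# Approach adopted in the PR: add eps to the diagonal, then pinv.
s_inv_diag = pinv(s + eps * np.eye(5))

# One of the alternatives tried (and found less robust): regularize
# with eps * expm(-S/eps) instead of eps * I, then plain inv.
s_inv_exp = inv(s + eps * expm(-s / eps))

# For a well-conditioned S both variants invert S accurately.
assert np.allclose(s_inv_diag @ s, np.eye(5), atol=1e-6)
assert np.allclose(s_inv_exp @ s, np.eye(5), atol=1e-6)
```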

jjren commented 1 year ago

> Add QN support for TTO/TTS and random TTS generation. Also, add the Davidson eigensolver and the primme eigensolver (configuration not supported yet).
>
> Somehow the eigenvalue problem in TTN is more difficult to solve. The scipy solver fails to converge in the CI, and the Davidson solver will converge to a wrong result on my machine, starting from multiple different initial guesses. The primme solver works pretty well though. The small $M$ could be the problem, but the scaling over $M$ is not very favorable so higher $M$ is not tested yet.

This is actually why I used primme when I implemented the TDA algorithm: I found that primme is much more robust.

jjren commented 1 year ago

> Add VMF (imaginary) time evolution for tree tensor network. QN for VMF has not been implemented yet. Matrix unfolding is also not implemented. I tested various algorithms for regularization of $S$, and adding $\epsilon$ to the diagonal elements and then calling scipy.linalg.pinv seems to be the most robust one. The following ones will quickly result in NaN
>
>   • Add $\epsilon \exp(-S/\epsilon)$ to $S$ rather than $\epsilon I$
>   • Invert regularized $S$ by inverting eigenvalues
>   • Invert regularized $S$ by scipy.linalg.inv
>
> Although the methods above only yield very slightly (around 1e-10) different $\tilde{S}^{-1}$ from the implemented approach. Manually setting the first step size for the RK solver also helps sometimes. Overall the problem is rather complicated.

That's weird. Would a larger epsilon solve the problem? Maybe we should try diffeq.jl/diffeqpy, because it supports more solvers (e.g. automatic stiffness detection).

liwt31 commented 1 year ago

> > Add VMF (imaginary) time evolution for tree tensor network. QN for VMF has not been implemented yet. Matrix unfolding is also not implemented. I tested various algorithms for regularization of $S$, and adding $\epsilon$ to the diagonal elements and then calling scipy.linalg.pinv seems to be the most robust one. The following ones will quickly result in NaN
> >
> >   • Add $\epsilon \exp(-S/\epsilon)$ to $S$ rather than $\epsilon I$
> >   • Invert regularized $S$ by inverting eigenvalues
> >   • Invert regularized $S$ by scipy.linalg.inv
> >
> > Although the methods above only yield very slightly (around 1e-10) different $\tilde{S}^{-1}$ from the implemented approach. Manually setting the first step size for the RK solver also helps sometimes. Overall the problem is rather complicated.
>
> That's weird. Would a larger epsilon solve the problem? Maybe we should try diffeq.jl/diffeqpy, because it supports more solvers (e.g. automatic stiffness detection).

It turns out to be a bug: a conj was left out when doing the regularization. Sorry for all the fuss.
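A toy illustration (not the actual regularization code) of why a missing conj matters here: for a complex Hermitian $S$, rebuilding the inverse from the eigendecomposition requires the conjugated eigenvectors.

```python
import numpy as np

# A complex Hermitian matrix, as appears in finite-temperature runs;
# + eye keeps it positive definite so the inverse is well defined.
rng = np.random.default_rng(7)
a = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
s = a @ a.conj().T + np.eye(4)

w, u = np.linalg.eigh(s)

# Correct inverse from the eigendecomposition: U diag(1/w) U^dagger.
# Note the conj on the second factor.
s_inv = (u / w) @ u.conj().T
assert np.allclose(s_inv @ s, np.eye(4))

# Dropping the conj silently produces a wrong inverse for complex S.
s_inv_bad = (u / w) @ u.T
assert not np.allclose(s_inv_bad @ s, np.eye(4))
```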

liwt31 commented 1 year ago

Support GPU acceleration. The logic is the same as for MPS/MPO: all TTS/TTO matrices are stored as NumPy arrays and converted to CuPy arrays when performing contractions.
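The host/device pattern described above can be sketched as follows. The function names are illustrative, not Renormalizer's actual API, and `cupy` is treated as optional so the sketch also runs on CPU-only machines:

```python
import numpy as np

# Tensors live on the host as NumPy arrays and are moved to the GPU
# only for the contraction; fall back to NumPy if CuPy is missing.
try:
    import cupy as xp

    def to_host(arr):
        return xp.asnumpy(arr)
except ImportError:
    xp = np

    def to_host(arr):
        return arr

def contract(a, b):
    """Contract two host tensors on the backend; return a host array."""
    result = xp.tensordot(xp.asarray(a), xp.asarray(b), axes=1)
    return to_host(result)

a = np.arange(6.0).reshape(2, 3)
b = np.arange(12.0).reshape(3, 4)
assert np.allclose(contract(a, b), a @ b)
```

Keeping storage on the host trades transfer overhead for lower GPU memory pressure, which matters once the tree has many large site tensors.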

liwt31 commented 1 year ago

Rename TensorTreeState and TensorTreeOperator to TTNS and TTNO for better consistency with existing literature, such as

liwt31 commented 1 year ago

Add two-site TDVP-PS time evolution. Although I tried to implement the second-order symmetric Trotter splitting algorithm, and the numerics match the MPS implementation, the error is the same as with the first-order Trotter splitting algorithm. To test this, simply replace

# in MPS language: left to right sweep
local_steps1 = _tdvp_ps2_recursion_forward(ttns.root, ttns, ttno, tte, coeff, tau / 2)
# in MPS language: right to left sweep
local_steps2 = _tdvp_ps2_recursion_backward(ttns.root, ttns, ttno, tte, coeff, tau / 2)

with

# in MPS language: left to right sweep
local_steps1 = _tdvp_ps2_recursion_forward(ttns.root, ttns, ttno, tte, coeff, tau / 2)
# in MPS language: right to left sweep, replaced here by a second left-to-right sweep
local_steps2 = _tdvp_ps2_recursion_forward(ttns.root, ttns, ttno, tte, coeff, tau / 2)

I'm not sure whether the observation is correct and, if so, what the reason behind it is. Maybe this is a feature of the two-site TDVP-PS algorithm?

The good news is that two-site TDVP-PS allows for an incredibly large time step (see the test for details).

Implementation reference: Time evolution of ML-MCTDH wavefunctions. II. Application of the projector splitting integrator
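The expected order gap between the two sweep patterns can be checked on a generic pair of non-commuting matrices. This is a standalone sketch, unrelated to Renormalizer's API: a correct symmetric (Strang) splitting should reduce the local error by ~8x when the step is halved, versus ~4x for the plain first-order (Lie) splitting:

```python
import numpy as np
from scipy.linalg import expm

# Non-commuting symmetric matrices A, B; compare exp((A+B)tau) with
# Lie splitting exp(A tau) exp(B tau) and Strang splitting
# exp(A tau/2) exp(B tau) exp(A tau/2).
rng = np.random.default_rng(3)
a = rng.standard_normal((4, 4)); a = (a + a.T) / 2
b = rng.standard_normal((4, 4)); b = (b + b.T) / 2

def err(tau, scheme):
    exact = expm((a + b) * tau)
    if scheme == "lie":      # O(tau^2) local error
        approx = expm(a * tau) @ expm(b * tau)
    else:                    # Strang: O(tau^3) local error
        approx = expm(a * tau / 2) @ expm(b * tau) @ expm(a * tau / 2)
    return np.linalg.norm(approx - exact)

# Halving tau cuts the Lie error ~4x but the Strang error ~8x.
r_lie = err(0.1, "lie") / err(0.05, "lie")
r_strang = err(0.1, "strang") / err(0.05, "strang")
assert 3.0 < r_lie < 5.0
assert 6.0 < r_strang < 10.0
```

If the modified sweep pattern above showed the ~8x behavior, it would explain the puzzle; seeing ~4x from the symmetric version is what makes the observation surprising.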

liwt31 commented 1 year ago

The branch is at a steady state. I'm planning to test it on several production-level tasks and perhaps do a thorough refactoring to meet production requirements. Comments are still welcome, but the PR will be closed.