liwt31 closed this pull request 1 year ago.
Patch coverage: 90.78% and project coverage change: +0.96% :tada:
Comparison is base (905b678) 85.25% compared to head (df816a5) 86.21%.
Excellent PR! The tree tensor network operator part deserves a new paper. I will quickly review the current code and leave some suggestions.
CircleCI is failing again. Will migrate to GitHub Actions completely.
Supports multiple basis sets on the same site. Should be useful for finite-temperature algorithms, mode combinations and so on.
Will stick to the original roadmap after this "digression"
Add QN support for TTO/TTS and random TTS generation. Also add the Davidson eigensolver and the primme eigensolver (configuration not supported yet).
Somehow the eigenvalue problem in TTN is more difficult to solve. The scipy solver fails to converge in the CI, and the Davidson solver converges to a wrong result on my machine, starting from multiple different initial guesses. The primme solver works pretty well though. The small $M$ could be the problem, but the scaling over $M$ is not very favorable, so higher $M$ has not been tested yet.
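For context, these local eigenproblems are typically solved matrix-free, with the effective Hamiltonian exposed only through matrix-vector products. A minimal sketch of that interface using `scipy.sparse.linalg.eigsh` (the random symmetric matrix is a hypothetical stand-in for the effective Hamiltonian, not code from this PR):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

rng = np.random.default_rng(0)
n = 200
# Random symmetric matrix standing in for the effective Hamiltonian
# of the local eigenproblem (hypothetical stand-in).
h = rng.standard_normal((n, n))
h = (h + h.T) / 2

# Matrix-free interface: the solver only sees matrix-vector products,
# just as in a real sweep, where applying H_eff is a tensor contraction.
op = LinearOperator((n, n), matvec=lambda x: h @ x)

# Lowest eigenpair; "SA" = smallest algebraic eigenvalue
w, v = eigsh(op, k=1, which="SA", tol=1e-10)
w_exact = np.linalg.eigvalsh(h)[0]
```

primme exposes a near-identical `primme.eigsh` interface, which makes swapping solvers for robustness comparisons straightforward.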
It turns out to be a bug: the environment was not properly canonicalized. Fixed in the latest commit.
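For reference, proper canonicalization is easy to verify numerically: a left-canonical site tensor is an isometry, so contracting it with its conjugate over the incoming legs must give the identity. A minimal sketch with a hypothetical site tensor (numpy only, not the repo's actual check):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical site tensor: (virtual-in, physical, virtual-out)
t = rng.standard_normal((4, 3, 5))

# Left-canonicalize by QR on the matricized tensor
m = t.reshape(4 * 3, 5)
q, r = np.linalg.qr(m)
a = q.reshape(4, 3, q.shape[1])

# Isometry check: contracting the site with its conjugate over the
# incoming legs must give the identity on the outgoing virtual leg.
gram = np.einsum("ipj,ipk->jk", a.conj(), a)
```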
Add VMF (imaginary) time evolution for the tree tensor network. QN for VMF has not been implemented yet. Matrix unfolding is also not implemented.
I tested various algorithms for regularization of $S$, and adding $\epsilon$ to the diagonal elements and then calling `scipy.linalg.pinv` seems to be the most robust one. The following ones will quickly result in NaN:
- Add $\epsilon \exp(-S/\epsilon)$ to $S$ rather than $\epsilon I$
- Invert the regularized $S$ by inverting its eigenvalues
- Invert the regularized $S$ with `scipy.linalg.inv`

The methods above only yield a very slightly (around 1e-10) different $\tilde{S}^{-1}$ from the implemented approach, though. Manually setting the first step size for the RK solver also helps sometimes. Overall the problem is rather complicated.
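A minimal sketch of the regularization described above (shift the diagonal of $S$ by $\epsilon$, then pseudo-invert), with an ill-conditioned Gram matrix as a hypothetical stand-in for the actual VMF overlap matrix:

```python
import numpy as np
from scipy.linalg import pinv

rng = np.random.default_rng(2)
# Nearly singular Gram-style matrix standing in for the VMF overlap S
x = rng.standard_normal((10, 4))
S = x @ x.T  # rank 4, so a plain inverse would blow up

eps = 1e-8
S_reg = S + eps * np.eye(S.shape[0])  # shift the diagonal by epsilon
S_inv = pinv(S_reg)                   # pseudo-inverse of the regularized S

# S_inv acts as an inverse on the well-conditioned subspace
v = x[:, 0]
recovered = S_inv @ (S_reg @ v)
```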
This is actually why I use primme when I implemented the TDA algorithm. I found that primme is much more robust.
That's weird. Will a larger epsilon solve this problem? Maybe we should try diffeq.jl/diffeqpy, because it supports more solvers (e.g. stiffness auto-detection).
It turns out to be a bug: a `conj` was left out when doing the regularization. Sorry for all the fuss.
Support GPU acceleration. The logic is the same as for MPS/MPO. All TTS/TTO matrices are stored as NumPy arrays and are converted to CuPy arrays when performing contractions.
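The store-on-CPU/contract-on-GPU pattern could be sketched as follows. This is an illustration under the stated assumptions, not the actual repo code; `cupy` is treated as optional with a transparent numpy fallback:

```python
import numpy as np

try:
    import cupy as cp  # optional GPU backend
    xp = cp
except ImportError:
    xp = np  # transparent CPU fallback

def contract(a, b):
    """Promote numpy operands to the active backend, contract, copy back.

    Tensors live permanently in host memory as numpy arrays and are
    moved to the GPU only for the duration of the contraction.
    """
    out = xp.tensordot(xp.asarray(a), xp.asarray(b), axes=1)
    # cupy arrays expose .get() to copy back to host; numpy passes through
    return out.get() if hasattr(out, "get") else out

c = contract(np.eye(3), np.ones((3, 2)))
```

Keeping host-resident numpy as the single source of truth avoids GPU memory pressure for large trees, at the cost of host-device transfers per contraction.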
Rename `TensorTreeState` and `TensorTreeOperator` to `TTNS` and `TTNO` for better consistency with existing literature, such as
Add two-site TDVP-PS time evolution. Although I tried to implement the second-order symmetric Trotter splitting algorithm and the numerics match the MPS implementation, the error is the same as for the first-order Trotter splitting algorithm. To test this, simply replace
# in MPS language: left to right sweep
local_steps1 = _tdvp_ps2_recursion_forward(ttns.root, ttns, ttno, tte, coeff, tau / 2)
# in MPS language: right to left sweep
local_steps2 = _tdvp_ps2_recursion_backward(ttns.root, ttns, ttno, tte, coeff, tau / 2)
with
# in MPS language: left to right sweep
local_steps1 = _tdvp_ps2_recursion_forward(ttns.root, ttns, ttno, tte, coeff, tau / 2)
# in MPS language: right to left sweep (replaced here by a second left-to-right sweep)
local_steps2 = _tdvp_ps2_recursion_forward(ttns.root, ttns, ttno, tte, coeff, tau / 2)
Not sure if the observation is correct and, if so, what the reason behind it is. Maybe this is a feature of the two-site TDVP-PS algorithm?
The good news is that two-site TDVP-PS allows for an incredibly large time step (see the test for details).
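The first- vs second-order distinction can be sanity-checked on a toy two-term generator: Lie splitting $e^{-\tau A}e^{-\tau B}$ has a second-order local error, while the symmetric (Strang) splitting $e^{-\tau A/2}e^{-\tau B}e^{-\tau A/2}$ is third order locally. A minimal sketch with random Hermitian stand-ins (not the PR's TDVP code):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
n, tau = 4, 0.1
# Random Hermitian stand-ins for the two terms of the generator
A = rng.standard_normal((n, n))
A = (A + A.T) / 2
B = rng.standard_normal((n, n))
B = (B + B.T) / 2

exact = expm(-tau * (A + B))
lie = expm(-tau * A) @ expm(-tau * B)                              # first order
strang = expm(-tau / 2 * A) @ expm(-tau * B) @ expm(-tau / 2 * A)  # second order

err_lie = np.linalg.norm(lie - exact)
err_strang = np.linalg.norm(strang - exact)
```

If replacing the backward sweep with a second forward sweep leaves the error unchanged, the integrator is effectively behaving like the non-symmetric (first-order) composition.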
Implementation reference: Time evolution of ML-MCTDH wavefunctions. II. Application of the projector splitting integrator
The branch is at a steady state. I'm planning to test the branch on several production-level tasks and maybe do a thorough refactoring to meet production requirements. Comments are still welcome but the PR will be closed.
This is more like a preview than an actual PR. Feedback is welcome. Implemented features:
Please see the test cases for how to use them.
Obviously, there are a lot of important features that remain to be implemented. The following ones have a higher priority
After that, we may have a proper comparison/benchmark with (ML-)MCTDH
On the longer term I'm going to look at
Of course, one may imagine a lot of other features, such as excited-state optimization algorithms, and technical enhancements, like dumping to or loading from hard disk. In the foreseeable future, I'm afraid I'll only work on the ones that will enable finite-temperature mobility calculations, due to my limited time.