Open jtkrogel opened 6 years ago
Idea 1: Large scale systems with exactly known solutions and finite (but small) variance.
System 1: Hydrogen atoms, Gaussian basis sets. A set of N ~infinitely separated hydrogen atoms represented in Gaussian basis sets (including cusp correction), possibly with varying quality. In this case, the trial energy for a single hydrogen atom (E1_gauss) can be calculated analytically in this basis, and the VMC total energy is E_VMC = N x E1_gauss. In DMC, projection will formally reach the exact solution for each hydrogen atom (the textbook result), and in the small-timestep limit E_DMC = N x E1_exact.
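A minimal sketch of how such a check could be scored, assuming the run reports a total-energy estimate and error bar; `check_total_energy`, its arguments, and the `nsigma` tolerance are hypothetical names for illustration, not existing QMCPACK utilities:

```python
E1_EXACT = -0.5  # exact hydrogen ground-state energy, Hartree

def check_total_energy(e_mc, err, n_atoms, e1_ref, nsigma=3.0):
    """For N non-interacting hydrogen atoms the exact total energy is
    N * e1_ref (e1_ref = E1_gauss for VMC, E1_EXACT for converged DMC).
    Return True if the MC estimate agrees within nsigma error bars."""
    expected = n_atoms * e1_ref
    return abs(e_mc - expected) <= nsigma * err
```

For example, a DMC run on 1024 atoms yielding -512.00(1) Ha would pass, while -511.50(1) Ha would flag a correctness problem.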
System 2: Non-interacting HEG, planewave and B-spline basis sets. A HEG system of N electrons in a planewave and/or B-spline basis, including a weak one-body k-space Jastrow (or equivalently a weak twist that only shifts the occupied k-points). In this case, the trial energy can be calculated analytically for each value of N and VMC can be fully tested. DMC will formally converge to the Jastrow-free energy in the small-timestep limit, which is also known analytically.
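The analytic reference here is just the kinetic energy of the lowest filled plane-wave states. A self-contained sketch (Hartree units; `rs` is the Wigner-Seitz radius, `nmax` bounds the k-grid searched, and closed-shell even N is assumed):

```python
import math
from itertools import product

def heg_kinetic_energy(n_elec, rs=1.0, nmax=6):
    """Exact total energy (Hartree) of a closed-shell, non-interacting HEG:
    doubly occupy the lowest plane-wave states k = 2*pi*n/L, n in Z^3."""
    volume = n_elec * (4.0 / 3.0) * math.pi * rs**3
    box = volume ** (1.0 / 3.0)
    kfac = 2.0 * math.pi / box
    # single-particle kinetic energies k^2/2, sorted ascending
    eks = sorted(0.5 * (kfac**2) * (i*i + j*j + k*k)
                 for i, j, k in product(range(-nmax, nmax + 1), repeat=3))
    assert n_elec % 2 == 0 and n_elec // 2 <= len(eks)
    return 2.0 * sum(eks[: n_elec // 2])
```

For N = 2 both electrons sit in the k = 0 state, so the energy is exactly zero; for the N = 14 closed shell, 12 electrons occupy the six |n| = 1 states, giving 6 * (2*pi/L)^2.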
In combination, these tests could validate the correctness of the VMC and DMC algorithms in QMCPACK at very large scales and potentially expose numerical issues affecting correctness that would surface only at these scales.
Areas left unchecked by these tests would include full pairwise Coulomb interactions and other Jastrow factors. Presumably these could be subjected to deeper scrutiny in other ways (e.g., analytical correctness for a selection of electron positions) at scale.
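As a sketch of the "fixed electron positions" idea: for hand-picked configurations the bare pairwise Coulomb sum can be computed independently and compared against what the code reports at scale (open boundary conditions assumed here; the periodic/Ewald case would need its own reference):

```python
import math

def coulomb_pair_energy(positions):
    """Bare pairwise Coulomb energy sum_{i<j} 1/r_ij (Hartree units)
    for a fixed list of 3D electron positions, open boundary conditions."""
    e = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            e += 1.0 / math.dist(positions[i], positions[j])
    return e
```

E.g., four electrons on the corners of a unit square give 4 + sqrt(2) exactly (four unit edges plus two 1/sqrt(2) diagonals), a value any implementation should reproduce to machine precision.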
We should target algorithms and code paths that could be sensitive in ways our current small-scale tests are not. E.g., once the planewave code has been tested with tiling, there is complete coverage, whereas the periodic Gaussian code could still have plenty of uncovered corner cases.
We should reproduce the results of this paper for the QMC part, then scale to many He copies.
In the context of #704, at least changes in the results for large systems should be easily detectable, but long correctness checks will still be needed for validation.
These would still be worthwhile, IMO. Besides the test systems mentioned and things like the AFQMC-DMC checks that we published recently (http://dx.doi.org/10.1103/PhysRevB.102.161104), verifying that we obtain the correct HF energy for a range of system sizes would be a good cross-check to run regularly.
We can cook up deterministic tests with 0 substeps, 1 step, and 1 block. Then the numbers should be fully deterministic. The only concern is that initializing spline coefficients on 1 core can be time consuming.
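A minimal sketch of what such a driver block might look like in the input, assuming the usual QMCPACK VMC parameter names (exact names and defaults should be checked against the current manual):

```xml
<!-- hypothetical deterministic VMC driver: 1 block, 1 step, 0 substeps -->
<qmc method="vmc" move="pbyp">
  <parameter name="blocks">1</parameter>
  <parameter name="steps">1</parameter>
  <parameter name="substeps">0</parameter>
  <parameter name="warmupsteps">0</parameter>
</qmc>
```

With a fixed RNG seed and fixed walker initialization, the resulting numbers could be compared bitwise against stored references.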
We are missing some large-scale Hartree-Fock-type tests. These would never be fast enough for CI, but they could be run in nightlies. Perhaps they could be run via the QE workflow... at least if the current version of QE can do Hartree-Fock at scale. Another possibility would be to use PySCF / Gaussian-basis wavefunctions, either directly or after splining. This would avoid storage problems.
This issue is meant as a discussion forum to accumulate ideas for test systems that could validate VMC and/or DMC for full algorithmic correctness for large scale systems with achievable computational cost.
The aim is to know that VMC and DMC are correct at these scales, rather than merely to know that our current implementation of the algorithms does not change across code commits.
Ideas discussed here do not necessarily represent plans to create such tests; the intent is to accumulate ideas in the event that such tests are deemed necessary.