BoothGroup / Vayesta

A Python package for wave function-based quantum embedding
Apache License 2.0

Renormalised coulomb interaction calculation #31

Closed cjcscott closed 2 years ago

cjcscott commented 2 years ago

This adds the calculation of renormalised Coulomb interactions, according to the RPA approach, into Vayesta. This is also added into DMET calculations under the renorm_interaction option, though currently only with an FCI solver due to complications with the pyscf CC ERI objects.

For other approaches, there is a simple function that takes the mean-field object (used as the reference for an RPA calculation) and the list of fragments, with initialised cluster objects, and returns equivalent lists of ERIs to use for the interactions and the energy calculation. Note that the latter are just the standard, bare Coulomb interactions within the cluster, as this currently seems the safest approach for calculating energies.

Explicitly, we have

from vayesta.core.renorm_coulomb import get_renorm_coulomb_interaction

interaction_eris, energy_eris, deltae_rpa = get_renorm_coulomb_interaction(self.mf, self.fragments, calc_deltae=False)
for i, f in enumerate(self.fragments):
    f.kernel(ieris=interaction_eris[i], eeris=energy_eris[i])
...

then the calculation proceeds as before.

deltae_rpa is an energy correction for nonlocal effects at the level of RPA, but is currently only compatible with CAS fragmentations. Full exploration of this is to come once we've considered the interactions themselves.

Various previously calculated ERI objects, both local and global, can also be passed into the function to avoid recalculation.

To get around the current restriction to FCI, @maxnus has suggested he might work on our interface to pyscf's CC functionality, and I've been talking to @obackhouse about interfacing to ebcc.

Tests etc. are still to be added once the structure is more finalised.

Let me know what you think!

maxnus commented 2 years ago

@cjcscott The unscreened ERIs loc_eris are only needed to add them to the screening at the end, right? In that case I think we should remove this parameter and have the function return only the screening "W-V".

maxnus commented 2 years ago

Never mind, they are not added, but the (ov|ov), (ov|vo), (vo|ov) and (vo|vo) blocks are replaced.

maxnus commented 2 years ago

I did a bit of refactoring and some design choices are different. Instead of eeris (energy) and ieris (interactions), I opted to keep eris for the bare interactions and introduced seris for the screened interactions. The build_screened_eris method only returns the (ov|ov) block of the screened ERIs, as this block is at least 16 times smaller than the full ERIs - and in particular, if we want to use this for _ChemistERIs for CCSD later on, it's actually simpler to keep only this block. A full array of screened ERIs is generated just-in-time, within the DMET fragment kernel. The screened (ov|ov) blocks are also stored in fragment._seris_ov - since they are 1/16 the size of the full ERIs or smaller, which we usually store anyway, this should be manageable.
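As a rough sketch of the just-in-time reconstruction described above (the helper name, array names and shapes are assumptions for illustration, not the actual Vayesta code), assuming real orbitals so that the remaining blocks follow from index transpositions of the stored (ov|ov) block:

import numpy as np

def build_full_seris(eris_bare, seris_ov, nocc):
    # eris_bare: (n, n, n, n) bare cluster ERIs in chemist notation (pq|rs)
    # seris_ov:  (nocc, nvir, nocc, nvir) screened (ov|ov) block
    seris = eris_bare.copy()
    o, v = slice(None, nocc), slice(nocc, None)
    # Replace the (ov|ov), (ov|vo), (vo|ov) and (vo|vo) blocks with screened values
    seris[o, v, o, v] = seris_ov
    seris[o, v, v, o] = seris_ov.transpose(0, 1, 3, 2)
    seris[v, o, o, v] = seris_ov.transpose(1, 0, 2, 3)
    seris[v, o, v, o] = seris_ov.transpose(1, 0, 3, 2)
    return seris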

maxnus commented 2 years ago

I moved the code that deals with seris_ov even further down the line, into the FCI solver. This revealed an important question: should the screened integrals or the bare integrals be used to build the effective 1-electron Hamiltonian? Since this effective potential should represent the interactions with the outside of the cluster (rather than within it), I would argue that the bare interactions should be used? On the other hand, if we had the full-system screened interactions, one could also argue that they should be used?

edit: I am pretty sure that we want to build h1eff from unscreened ERIs - the screening we have is between cluster electrons due to the environment, not between cluster and environment electrons due to the environment, right?

ghb24 commented 2 years ago

@maxnus @cjcscott : Re. the building of h1eff - unless I'm missing something, surely this build involves eri's between the cluster and environment (for essentially the mean-field due to the external static electrons), so for the case of a single cluster, we don't even have access to the required screened version of these integrals? Either way, I think that this should be built from the bare Coulomb interaction. @cjcscott?

maxnus commented 2 years ago

@maxnus @cjcscott : Re. the building of h1eff - unless I'm missing something, surely this build involves eri's between the cluster and environment (for essentially the mean-field due to the external static electrons), so for the case of a single cluster, we don't even have access to the required screened version of these integrals? Either way, I think that this should be built from the bare Coulomb interaction. @cjcscott?

Yes, that's my thinking as well. However, in Vayesta, instead of building the core potential in a "forward" way, by doing DM_core * U(core|cluster), which would require the core-cluster interactions, we build it "backwards" by removing the within-cluster HF potential from the full HF potential, as P_cluster V_HF P_cluster - DM_cluster * U(cluster|cluster), which only requires the within-cluster interactions and is thus cheaper. Of course the bare U should be used to build the core potential nonetheless, since the bare U was used to build the global V_HF.
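For concreteness, a minimal sketch of this "backwards" construction (the function, argument names and RHF conventions below are illustrative assumptions, not the actual Vayesta routine):

import numpy as np

def h1eff_backwards(h_ao, vhf_ao, c_cluster, dm_cluster, eri_cluster):
    # h_ao:        (nao, nao) core Hamiltonian in the AO basis
    # vhf_ao:      (nao, nao) full-system HF potential V_HF, built from the bare ERIs
    # c_cluster:   (nao, ncl) AO -> cluster-orbital coefficients
    # dm_cluster:  (ncl, ncl) cluster block of the (spin-summed) HF density matrix
    # eri_cluster: (ncl, ncl, ncl, ncl) bare within-cluster ERIs, chemist notation
    # Project the full HF potential into the cluster: P_cluster V_HF P_cluster
    h1 = c_cluster.T @ (h_ao + vhf_ao) @ c_cluster
    # Subtract the within-cluster HF potential, also built from the *bare* ERIs,
    # leaving only the effective potential due to the environment (core) electrons.
    j = np.einsum('pqrs,rs->pq', eri_cluster, dm_cluster)
    k = np.einsum('prqs,rs->pq', eri_cluster, dm_cluster)
    return h1 - (j - 0.5 * k)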

cjcscott commented 2 years ago

Yeah, the bare ERIs should definitely be used at all times when performing the heff construction - the cancellation between the two in the construction wouldn't have any physical meaning. I didn't think about our backwards heff approach in this initial implementation, so it could well have been using the screened ERIs by accident - good to notice!

I am pretty sure that we want to build h1eff from unscreened ERIs - the screening we have is between cluster electrons due to the environment, not between cluster and environment electrons due to the environment, right?

Interpretation of the renormalisation as screening via a particular subset of the space isn't this simple, sadly; it represents the effects of the whole system behaving collectively at the level of RPA.

cjcscott commented 2 years ago

I moved the code that deals with seris_ov even further down the line, into the FCI solver.

I do think this is becoming too complicated a structure thanks to our solver interfaces; if we just had the solvers accept the Hamiltonian specification as plain arrays (including cderi, if that's essential to efficiency in large clusters), this could all be a whole lot simpler and avoid a lot of pain and duplicated effort. I'll see what I can do on the REBCC interface...

maxnus commented 2 years ago

Interpretation of the renormalisation as screening via a particular subset of the space isn't this simple, sadly; it represents the effects of the whole system behaving collectively at the level of RPA.

Can you elaborate? What other effects does the renormalization include, apart from screening due to the frozen electrons?

I do think this is becoming too complicated a structure thanks to our solver interfaces; if we just had the solvers accept the Hamiltonian specification as plain arrays (including cderi, if that's essential to efficiency in large clusters), this could all be a whole lot simpler and avoid a lot of pain and duplicated effort. I'll see what I can do on the REBCC interface...

What do you mean? The structure is now very simple actually, even for CCSD. The problem is that it doesn't work for CCSD (it doesn't give the same results as FCI+screening for 2e/2h problems), but I believe one has to modify the cluster-cluster Fock matrix as well in the case of CCSD.

ghb24 commented 2 years ago

Can you elaborate? What other effects does the renormalization include, apart from screening due to the frozen electrons?

So the approach performs RPA on the full system, and then works out the (static) interactions in the cluster such that, if you were to do RPA in just the cluster space, you would reproduce the zeroth and first moments of the dd (density-density) response of the full system projected into the cluster space (you can think of this as essentially a large part of the 2RDM). This can be achieved analytically and efficiently. As the cluster grows to span the whole space, it trivially avoids any double counting, since the bare interactions will be the ones that produce the correct full-system / cluster dd moments. However, the cluster interactions are in general included in the definition of the full-system dd response which is to be reproduced in any subsequent cluster RPA calculation with the screened interactions.
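As a rough formalisation of that condition (the notation here is illustrative, not taken from the thread): writing the n-th spectral moment of the dd response as

$$\eta^{(n)} = \frac{1}{\pi}\int_{0}^{\infty}\mathrm{d}\omega\,\omega^{n}\,\mathrm{Im}\,\chi(\omega),$$

the screened cluster interaction $\tilde{W}$ is chosen such that a cluster-only RPA calculation reproduces the projection of the full-system RPA moments,

$$\eta^{(n)}_{\mathrm{cluster}}[\tilde{W}] = P\,\eta^{(n)}_{\mathrm{full}}[V]\,P^{\dagger}, \qquad n = 0, 1,$$

where $V$ is the bare Coulomb interaction and $P$ projects the full-system particle-hole space onto the cluster.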

I don't think this is double counting things at non-complete bath sizes, though it is a little tricky - you definitely can't consider diagrammatic contributions from different subspaces. This contrasts with the 'normal' approach in DMFT, where you do "constrained RPA", which computes the RPA screened coulomb interaction having explicitly removed the diagrams corresponding to screening by the cluster space, to explicitly avoid double counting of correlated contributions from cluster interactions. I guess you can say this is a little like the DMET vs. DMFT difference, whereby in DMFT, you compute the hybridization/bath space having explicitly removed the cluster (well, fragment in this case) self-energy, while in DMET, you just take a bath space from a decomposition of the DMET mean-field state over the whole system, without removing the correlation potential from the fragment... Does this make any sense?

The problem is that it doesn't work for CCSD (it doesn't give the same results as FCI+screening for 2e/2h problems), but I believe one has to modify the cluster-cluster Fock matrix as well in the case of CCSD.

Yes, this is a sanity check we would want to pass... there are likely Fock matrix contributions we would need to think about here...

maxnus commented 2 years ago

So the approach performs RPA on the full system, and then works out the (static) interactions in the cluster such that, if you were to do RPA in just the cluster space, you would reproduce the zeroth and first moments of the dd (density-density) response of the full system projected into the cluster space (you can think of this as essentially a large part of the 2RDM). This can be achieved analytically and efficiently. As the cluster grows to span the whole space, it trivially avoids any double counting, since the bare interactions will be the ones that produce the correct full-system / cluster dd moments. However, the cluster interactions are in general included in the definition of the full-system dd response which is to be reproduced in any subsequent cluster RPA calculation with the screened interactions.

I don't think this is double counting things at non-complete bath sizes, though it is a little tricky - you definitely can't consider diagrammatic contributions from different subspaces. This contrasts with the 'normal' approach in DMFT, where you do "constrained RPA", which computes the RPA screened coulomb interaction having explicitly removed the diagrams corresponding to screening by the cluster space, to explicitly avoid double counting of correlated contributions from cluster interactions. I guess you can say this is a little like the DMET vs. DMFT difference, whereby in DMFT, you compute the hybridization/bath space having explicitly removed the cluster (well, fragment in this case) self-energy, while in DMET, you just take a bath space from a decomposition of the DMET mean-field state over the whole system, without removing the correlation potential from the fragment... Does this make any sense?

OK, makes sense. I think for the sake of naming things in Vayesta, I would still use "screened ERIs" / "screening", since this is essentially what we are trying to capture and also a word that most people will be familiar with.

ghb24 commented 2 years ago

I think for the sake of naming things in Vayesta, I would still use "screened ERIs" / "screening", since this is essentially what we are trying to capture and also a word that most people will be familiar with.

Agreed