ramess101 / Helium_ab_initio


Tail corrections #3

Closed ramess101 closed 5 years ago

ramess101 commented 6 years ago

I have written a Python script that determines the LJ epsilon and sigma required to reproduce the actual tail corrections of the full potential:

```python
rcut = 14  # [A]

# U_tail and P_tail (defined elsewhere in this project) return the numerical tail
# integrals of the full ab initio potential; see the P_tail snippet later in this issue.
U_tail_rcut = U_tail(rcut)
P_tail_rcut = P_tail(rcut)

print('Cutoff distance: '+str(rcut)+' [A]')

print('Actual tail corrections, U and P:')
print(U_tail_rcut, P_tail_rcut)

eps_LJ = 11.   # [K]
sig_LJ = 2.64  # [A]

# Analytic LJ tail integrals for energy and pressure (virial)
ULJ_tail = lambda eps, sig, rcut: 4.*eps*(1./9.*sig**12./rcut**9. - 1./3.*sig**6./rcut**3.)
PLJ_tail = lambda eps, sig, rcut: -8.*eps*(2./3.*sig**12./rcut**9. - sig**6./rcut**3.)

print('Initial guess LJ tail corrections, U and P:')
print(ULJ_tail(eps_LJ, sig_LJ, rcut), PLJ_tail(eps_LJ, sig_LJ, rcut))

Utol = 1e-10  # [%] allowed deviation in energy tail correction
Ptol = 1e-10  # [%] allowed deviation in pressure tail correction

# Fixed-point iteration: alternately solve the two tail equations for sigma^6 and epsilon
for i in range(1000):

    sigma6 = -rcut**3./(8.*eps_LJ)*(P_tail_rcut + 12.*U_tail_rcut)
    sigma12 = sigma6**2.
    sig_LJ = sigma6**(1./6.)
    eps_LJ = U_tail_rcut/(4./9.*(sigma12/rcut**9.) - 4./3.*(sigma6/rcut**3.))

    if i % 10 == 0:
        devU = (ULJ_tail(eps_LJ, sig_LJ, rcut) - U_tail_rcut)/U_tail_rcut*100.
        devP = (PLJ_tail(eps_LJ, sig_LJ, rcut) - P_tail_rcut)/P_tail_rcut*100.

        # compare magnitudes of the percent deviations
        if abs(devU) < Utol and abs(devP) < Ptol:
            print('Number of iterations required and converged epsilon [K] and sigma [A]')
            print(i, eps_LJ, sig_LJ)
            break

print('Final LJ tail corrections, U and P')
print(ULJ_tail(eps_LJ, sig_LJ, rcut), PLJ_tail(eps_LJ, sig_LJ, rcut))

assert abs(devU) < Utol, 'Not precise enough energy tail correction'
assert abs(devP) < Ptol, 'Not precise enough pressure tail correction'
```
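For reference, the two conditions this iteration enforces (assuming, consistent with the P_tail snippet later in this issue, that U_tail and P_tail are the bare radial integrals of the full potential, with the density-dependent prefactors applied elsewhere) are:

```latex
\int_{r_c}^{\infty} u_{\mathrm{LJ}}(r)\, r^{2}\, dr = U_{\mathrm{tail}}(r_c),
\qquad
\int_{r_c}^{\infty} \frac{d u_{\mathrm{LJ}}}{dr}\, r^{3}\, dr = P_{\mathrm{tail}}(r_c),
```

with u_LJ(r) = 4*eps*((sig/r)^12 - (sig/r)^6); carrying out the integrals analytically gives the ULJ_tail and PLJ_tail expressions in the script.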

This is the output for rcut=14 Angstrom:

```
Cutoff distance: 14 [A]
Actual tail corrections, U and P:
(-1.3594884274740304, 7.7702233663389269)
Initial guess LJ tail corrections, U and P:
(-1.8095240801733516, 10.856981755313758)
Number of iterations required and converged epsilon [K] and sigma [A]
(690, 0.0028662102721911643, 10.037020621578602)
Final LJ tail corrections, U and P
(-1.3594884274740289, 7.7702233663453999)
```

Notice that epsilon is 0.0029 K and sigma is 10 A.

Here it is for rcut = 18:

```
Cutoff distance: 18 [A]
Actual tail corrections, U and P:
(-0.7054775255750203, 3.8236498317381247)
Initial guess LJ tail corrections, U and P:
(-0.8514046675265038, 5.108411055694046)
Number of iterations required and converged epsilon [K] and sigma [A]
(370, 0.00037622243257559964, 14.421128645581341)
Final LJ tail corrections, U and P
(-0.70547752557501986, 3.8236498317398473)
```

Epsilon is now 0.00038 K and sigma is 14.4 A. This is interesting but also expected, since the ab initio potential decays to zero more quickly than the LJ potential does.

ramess101 commented 6 years ago

Some options:

1) Call this Python function whenever starting a job. This seems unnecessary: it takes a few seconds to compute the long-range value in Python, and the value doesn't change for a given rcut.

2) Store the converged parameters for a few values of rcut and use a simple if statement in our script (see the sketch below). This seems easiest.

3) Include a section in the Fortran code that solves for the tail corrections, potentially including an analytical integration (almost certainly more work than needed).

4) Develop an empirical relationship for the tail corrections as a function of rcut. We could also try this for epsilon and sigma.

I think option 2 makes the most sense, considering the degree of accuracy we are looking for.
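A minimal sketch of option 2 (the dict and helper name are placeholders; the values are the converged parameters reported above for rcut = 14 and 18 A, and more cutoffs can be added as they are computed):

```python
# Precomputed LJ parameters that reproduce the full-potential tail corrections.
# rcut [A] : (eps_LJ [K], sig_LJ [A]) -- converged values reported above.
LJ_TAIL_PARAMS = {
    14.0: (0.0028662102721911643, 10.037020621578602),
    18.0: (0.00037622243257559964, 14.421128645581341),
}

def lj_tail_params(rcut):
    """Look up (eps_LJ, sig_LJ) for a supported cutoff (stands in for an if/elif chain)."""
    try:
        return LJ_TAIL_PARAMS[round(rcut, 1)]
    except KeyError:
        raise ValueError('No precomputed LJ tail parameters for rcut = %s A' % rcut)

eps_LJ, sig_LJ = lj_tail_params(14.0)
```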

ramess101 commented 6 years ago

There appears to be a trend with respect to cutoff distance:

```
Cutoff distance: 10 [A]
Actual tail corrections, U and P:
(-3.5447882135811661, 20.999921907378521)
Initial guess LJ tail corrections, U and P:
(-4.964848148587005, 29.785726807676877)
Number of iterations required and converged epsilon [K] and sigma [A]
(2240, 0.071901779536520355, 5.7839926700950608)
Final LJ tail corrections, U and P
(-3.5447882135811639, 20.999921907397173)
```

```
Cutoff distance: 12 [A]
Actual tail corrections, U and P:
(-2.0916496307389787, 12.196410170589445)
Initial guess LJ tail corrections, U and P:
(-2.8733916879137738, 17.23969853310928)
Number of iterations required and converged epsilon [K] and sigma [A]
(1100, 0.011357380012727023, 7.9124906616017823)
Final LJ tail corrections, U and P
(-2.0916496307389778, 12.19641017059905)
```

ramess101 commented 6 years ago

This shows that option 4 is pretty feasible.

It would be pretty easy to fit some polynomial to the energy and pressure tail corrections:

image

image

But I would need to make sure all my units are correct, etc. This might be a long-term plan.

In the short term, it shouldn't be hard to fit a model to sigma and epsilon.

Sigma has a nearly linear dependence on the cutoff:

image

Epsilon is clearly not linear:

image

But if we plot epsilon vs. U/epsilon (a quantity that depends on sigma^6 and rcut^-3) we get something that is linear on a log-log scale:

image
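In that spirit, here is a rough sketch of the short-term fits (variable names are placeholders; the data are the converged values reported in this issue for rcut = 10, 12, 14, and 18 A, truncated for readability):

```python
import numpy as np

rcut   = np.array([10., 12., 14., 18.])                                  # [A]
sig_LJ = np.array([5.78399267, 7.91249066, 10.03702062, 14.42112865])    # [A]
eps_LJ = np.array([7.19017795e-2, 1.13573800e-2, 2.86621027e-3, 3.76222433e-4])  # [K]
U_tail = np.array([-3.54478821, -2.09164963, -1.35948843, -0.70547753])

# sigma is nearly linear in the cutoff distance
sig_slope, sig_int = np.polyfit(rcut, sig_LJ, 1)

# epsilon vs. U_tail/epsilon is roughly linear on a log-log scale
log_x = np.log10(np.abs(U_tail/eps_LJ))
log_y = np.log10(eps_LJ)
eps_slope, eps_int = np.polyfit(log_x, log_y, 1)

print('sigma(rcut) ~ %.4f*rcut + %.4f' % (sig_slope, sig_int))
print('log10(eps)  ~ %.4f*log10(|U_tail/eps|) + %.4f' % (eps_slope, eps_int))
```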

ramess101 commented 6 years ago

At 7 K with a 14 A cut-off, the tail corrections look reasonable (this is after the NVT equilibration):

image

I also verified that this is identical to the 7 K run without tail corrections (with the same random number seed) after the NVT equilibration, because the tail corrections always cancel in the acceptance criterion for NVT moves.

image
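To spell out the cancellation (just the standard Metropolis argument): in NVT the tail correction depends only on N, V, and rcut, none of which change during a translation move, so

```latex
\Delta U = \bigl(U_{\mathrm{new}} + U_{\mathrm{tail}}(N,V,r_c)\bigr)
         - \bigl(U_{\mathrm{old}} + U_{\mathrm{tail}}(N,V,r_c)\bigr)
         = U_{\mathrm{new}} - U_{\mathrm{old}},
```

and the acceptance probability min(1, exp(-beta*dU)) is unaffected.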

ramess101 commented 6 years ago

These computations confirm that the energy corrections between my code and Cassandra are consistent:

image

Compare with the values in the previous comment.

ramess101 commented 6 years ago

I also just verified for 2800 molecules at T = 7 K and T = 8 K (different volumes) that the equations are correct, as I get the same energies as Cassandra.

I am not quite sure why you need to square the number of molecules. I don't think tail corrections are required for each molecule, so there is really no double loop over sites, correct? So where does the N^2 come from?

This is in Cassandra (the double do loop should only come into play if there are different types of atoms):

image

OK, I guess nint_beads is multiplied twice, once for each do loop. So that probably explains it. Although I don't see why that is needed theoretically based on Rowley's equations:

image
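For what it's worth, the N^2 falls out of the standard single-site pair tail correction: each of the N molecules sees a uniform density rho = N/V beyond the cutoff, and the 1/2 corrects for double counting,

```latex
U_{\mathrm{tail}} = \frac{N\rho}{2}\int_{r_c}^{\infty} u(r)\, 4\pi r^{2}\, dr
                  = \frac{2\pi N^{2}}{V}\int_{r_c}^{\infty} u(r)\, r^{2}\, dr,
```

so multiplying the bead count once per do loop reproduces the same N^2 for a single-component system.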

ramess101 commented 6 years ago

I don't know how to validate the pressure corrections, other than going over the math one more time.

image

```python
def P_tail(rc, plot=False):
    r = np.linspace(rc, 10000, 500000)
    dr = r[1] - r[0]
    dU = dV_total_hat(r)
    Pintegrand = dU*r**3*dr
    Pint = Pintegrand.sum()
    PLJ_tail = lambda eps, sig, rcut: -8.*eps*(2./3.*sig**12./rcut**9. - sig**6./rcut**3.)
```

image

So yes, the -8 arises from the ratio between the original 2/3 outside the integral and the final 16/3 outside the integral. The negative sign is because Rowley and Maginn write the r^-9 and r^-3 terms in different orders. I originally had this wrong, with an 8 instead of -8, and my answers were clearly wrong.
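As a sanity check on that coefficient (a sketch, assuming P_tail here is the bare virial integral of du/dr times r^3, with the 2*pi*rho^2/3-type prefactor applied elsewhere), carrying out the LJ integral analytically gives

```latex
\int_{r_c}^{\infty} \frac{d u_{\mathrm{LJ}}}{dr}\, r^{3}\, dr
  = \int_{r_c}^{\infty} 4\epsilon\left(-\frac{12\sigma^{12}}{r^{10}} + \frac{6\sigma^{6}}{r^{4}}\right) dr
  = -8\epsilon\left(\frac{2}{3}\,\frac{\sigma^{12}}{r_c^{9}} - \frac{\sigma^{6}}{r_c^{3}}\right),
```

which is exactly the PLJ_tail lambda, so the -8 (and its sign) comes directly out of the integration.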

ramess101 commented 6 years ago

Actually, I can verify the tail corrections in pressure by taking the difference between the NVT simulations that are identical except for tail corrections. I only need a single configuration too since the volume and temperature are not changing.

Without tail corrections:

Liquid:

image

Vapor:

image

With tail corrections:

Liquid:

image

Vapor:

image

ramess101 commented 6 years ago

So the differences in pressure are:

Liquid:

(0.23591915 - 0.23639137) x 10^4 = -4.7222 bar

Vapor:

0.1060307 - 0.10605659 = -2.589e-5 bar

The ratio between these two is approximately 182,400, which corresponds to:

(Nliq/Nvap)**2 * (Vvap/Vliq)**2 = 182423.97333015918

The (Nliq/Nvap)**2 comes from the double do loop with nint_beads. The Vvap/Vliq ratio is what we had for energy, but pressure has an extra factor of V (or rho).

The equations above were in terms of Z, but Z = P/(rho*R*T), so when you multiply rho over to the other side you get a rho^2.

Maginn's code is for W, which is like the virial, so the additional factor of volume must be included somewhere else.
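In scaling terms (ignoring the common prefactor): for a fixed rcut the pressure tail correction goes as rho^2 = (N/V)^2, so

```latex
\frac{\Delta P_{\mathrm{liq}}}{\Delta P_{\mathrm{vap}}}
  = \frac{(N_{\mathrm{liq}}/V_{\mathrm{liq}})^{2}}{(N_{\mathrm{vap}}/V_{\mathrm{vap}})^{2}}
  = \left(\frac{N_{\mathrm{liq}}}{N_{\mathrm{vap}}}\right)^{2}
    \left(\frac{V_{\mathrm{vap}}}{V_{\mathrm{liq}}}\right)^{2},
```

which is the 182,424 computed above, in good agreement with the observed ratio of roughly 182,400.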

ramess101 commented 6 years ago

I believe I have successfully validated my use of LJ parameters for tail corrections in Cassandra.