shankar1729 / jdftx

JDFTx: software for joint density functional theory
http://jdftx.org

Questions about the usage of the coulomb-truncation-embed tag #282

Closed: miaozhuang68 closed this issue 1 year ago

miaozhuang68 commented 1 year ago

Hi dear Shankar, I have some problems using the tag coulomb-truncation-embed. The output file states:

```
Found 16 space-group symmetries with basis
Applied RMS atom displacement 8.67218e-16 bohrs to make symmetries exact.
...
Coulomb truncation embedding center is not invariant under symmetry matrix:
[ -1  0  0 ]
[  0 -1  0 ]
[  0  0 -1 ]
[ 0.001783 -0.498217 0.441034 ]
Coulomb truncation embedding center is not invariant under symmetries.
End date and time: Tue Jun 6 23:34:16 2023  (Duration: 0-0:00:00.54)
Failed.
```

I'm a beginner with JDFTx, and I know you've answered several questions about choosing a Coulomb truncation embedding center, but I tried (0 0 0), (0 0 0.180184), (0 0 0.5) and (0 0 0.72903399): the default value, the midpoint between the bottom and top atoms, the midpoint of the whole cell, and the midpoint between the top atom and the first-layer atom of the previous cell. I don't know what I'm doing wrong; thank you very much for your answer! My entire input file is attached (I have re-uploaded this file because the original version contained VASP-related files): kptest.zip

Also, in the PBS management system, the load of my task is even higher than the ncpu value; the screenshot is as follows: [screenshot]

shankar1729 commented 1 year ago

The error message points out the symmetry operation not respected by the embedding center: it is an inversion with a shift containing 0.441034 for the third lattice component. This means that half of that, i.e. 0.220517, would be invariant under that operation. Therefore, use 0 0 0.220517 as your embedding center.
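Concretely, as a minimal sketch in plain Python (not part of JDFTx): the operation maps the fractional coordinate z to -z + t3, so the invariant point satisfies z = -z + t3 (mod 1), i.e. z = t3/2.

```
# Fixed point of the reported operation along the third lattice direction:
# z -> -z + t3 implies z_center = t3 / 2 (modulo a lattice translation).
t3 = 0.441034                # third component of the symmetry translation
z_center = (t3 / 2.0) % 1.0
print(z_center)              # 0.220517, the suggested embedding-center z
```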

Your PBS job file seems to be asking for one node with 16 processes, but JDFTx is running with a single process and 20 cores. This is very strange, and likely means there is some issue in your MPI build that is causing several independent JDFTx runs to happen in parallel (each taking up 20 cores?). Make sure the MPI module visible during the run is identical to the one used during the build. Also, pass -c to jdftx to specify the number of threads per process explicitly if things still don't launch with the correct process and thread counts after that.

miaozhuang68 commented 1 year ago

Dear Shankar, thank you so much for your quick reply!

  1. The embedding center you proposed works! And I'm wondering: should I always use half the coordinate of the top atom in a surface system? What happens when I add adsorbed atoms to the surface?
  2. I changed the PBS script according to your instructions. My goal is to use 9 processes with 6 cores each; my understanding is that this suits parallel runs where the number of reduced k-points (9) is an integer multiple of the process count. But in practice it often does not fully utilize all 56 cores. Please tell me if my settings are correct:
    
```
#PBS -q cluster1
#PBS -l nodes=1:ppn=56
#PBS -j oe
#PBS -l walltime=100:00:00

export PATH=/home2/miaozhuang/software/jdftx-1.7.0/build:/home2/miaozhuang/software/jdftx-1.7.0/jdftx/scripts:$PATH
export PATH=/home2/miaozhuang/software/gsl/bin:$PATH
export LD_LIBRARY_PATH=/home2/miaozhuang/software/gsl/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/usr/lib:$LD_LIBRARY_PATH

cd $PBS_O_WORKDIR
NPROC=`cat $PBS_NODEFILE | wc -l`

mpirun -np 9 -hostfile $PBS_NODEFILE jdftx -c 6 -i input.in -o output.out
```

And the bad thing is, I tried using this command directly, without the task management system:
`mpirun -n 4 jdftx -i Si.in | tee Simpi.out`
But the beginning of the .out file still reports only 1 process:

```
Running on hosts (process indices):  node21 (0)
Divided in process groups (process indices):  0 (0)
Resource initialization completed at t[s]:      0.00
Run totals: 1 processes, 20 threads, 0 GPUs
```

Best, Jam

shankar1729 commented 1 year ago

In your current case, that was the center of the inversion symmetry of your slab, since the bare slab had one. If you add an adsorbate, you will break the inversion symmetry anyway, so it won't matter precisely where you put it: anywhere near the geometric center of the slab (+ adsorbate) should be fine.
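As a minimal sketch of that rule of thumb (plain Python; the coordinates are hypothetical), one could simply average the fractional z of all atoms, adsorbate included:

```
import numpy as np

# Hypothetical fractional z coordinates of the slab + adsorbate atoms:
z_frac = np.array([0.08, 0.12, 0.16, 0.20, 0.24, 0.28, 0.33])
center_z = z_frac.mean()   # geometric center along the truncated direction
print("coulomb-truncation-embed 0 0 %.6f" % center_z)
```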

I'm not very familiar with PBS (I've always used SLURM), but that looks correct for 9 processes with 6 threads each, and that should be a good choice for a system with nStates (number of reduced k-points) = 9.
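A minimal sketch of that process/thread reasoning (plain Python; `pick_layout` is a hypothetical helper, not part of JDFTx): choose the largest process count that divides nStates, so the k-points split evenly, then give each process an equal share of the cores.

```
def pick_layout(n_states, n_cores):
    # Largest process count (up to n_cores) that divides n_states evenly:
    n_proc = max(d for d in range(1, min(n_states, n_cores) + 1)
                 if n_states % d == 0)
    n_threads = max(1, n_cores // n_proc)
    return n_proc, n_threads

print(pick_layout(9, 56))  # -> (9, 6): 9 processes x 6 threads = 54 of 56 cores
```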

If you still get 1 process reported, you definitely have an MPI compilation/run issue. Typically this results from a different MPI implementation being used when you compiled the program and when you run it. I'd recommend recompiling JDFTx, paying close attention to which MPI modules are loaded and confirming that the MPI reported by CMake is consistent with what you expect from those modules. Then make sure you use the same module environment when running.

Best, Shankar

miaozhuang68 commented 1 year ago

Dear Shankar, thanks again for your careful reply!

  1. I have successfully solved the parallelization problem with the help of our administrator, and JDFTx now runs with multiple processes on my server!
  2. I want to calculate the surface energy of a magnesium alloy, but the calculated result is about 6 J/m², which is very far from the 0.6 J/m² calculated by VASP. There seems to be something wrong with my input file; please give me some advice! Here is the input file for the slab model (a sketch of the quantity I am comparing follows it). The only difference in the bulk input file is that the k-point folding becomes 13 13 13, because the bulk cell is much smaller than the slab cell. The ionpos file is derived from the VASP-optimized CONTCAR.
    
```
include contcar.lattice
include contcar.ionpos
ion-species GBRV/$ID_pbesol.uspp

elec-cutoff 20 100

elec-ex-corr-compare gga-x-optb88-vdw gga-c-pbe
kpoint-folding 7 7 1

elec-smearing Gauss 0.1
electronic-SCF

dump-name Vacuum.$VAR
dump End State IonicPositions Ecomponents
```
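For reference, a minimal sketch of the surface energy I am comparing, with purely hypothetical numbers (the factor of 2 accounts for the two faces of the slab):

```
# gamma = (E_slab - N * E_bulk_per_atom) / (2 * A), converted to J/m^2.
HARTREE_J = 4.359744e-18    # J per Hartree
BOHR_M = 5.291772e-11       # m per bohr

E_slab = -99.8              # hypothetical slab total energy [Hartree]
E_bulk_per_atom = -2.5      # hypothetical bulk energy per atom [Hartree]
N_atoms = 40                # number of atoms in the slab
A = 146.09                  # hypothetical in-plane cell area [bohr^2]

gamma = (E_slab - N_atoms * E_bulk_per_atom) / (2 * A)  # [Hartree/bohr^2]
print(gamma * HARTREE_J / BOHR_M**2, "J/m^2")           # ~1.07 J/m^2 here
```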


Best,
Jam
shankar1729 commented 1 year ago

Check the lattice vectors and make sure they are in columns of the 3x3 matrix in the lattice command, rather than in rows as VASP expects. That is the most common error in converting VASP to JDFTx geometries.
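A minimal sketch of that conversion (plain Python; the lattice numbers are hypothetical): transpose the VASP rows into columns, and convert Angstroms to bohrs, since JDFTx expects atomic units.

```
import numpy as np

# Hypothetical 3x3 lattice block from a VASP POSCAR/CONTCAR:
# one lattice vector per ROW, in Angstroms.
R_vasp = np.array([
    [ 6.4, 0.0,  0.0 ],
    [ 0.0, 6.4,  0.0 ],
    [ 0.0, 0.0, 36.0 ],
])

ANGSTROM_TO_BOHR = 1.0 / 0.529177   # JDFTx works in bohrs

# JDFTx's lattice command takes the vectors in COLUMNS, so transpose:
R_jdftx = R_vasp.T * ANGSTROM_TO_BOHR
for row in R_jdftx:
    print("%12.6f %12.6f %12.6f" % tuple(row))
```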

Other than that, I don't see any obvious issues in your input. If your lattice is fine, test the surface energy with PBE before trying this less common functional. Also, make sure you used the same functional consistently throughout.

One final thing: your libxc functional has vdw in the name, which suggests it expects some nonlocal vdW functional to be added. If so, that is not happening automatically in JDFTx. If you need vdW corrections, use DFT-D2 or DFT-D3: see the documentation of the command van-der-waals.

Best, Shankar

miaozhuang68 commented 1 year ago

Dear Shankar, my surface energy calculations became correct after replacing the pseudopotentials with PAW ones, thanks for your helpful advice!

  1. My next calculation is the potential of the vacuum surface. I have added the coulomb-truncation-embed tag, but the potential profile fluctuates too much and has no stable value: [screenshot]. My conversion script is as follows, adapted from your script on the JDFTx official website:

```
import numpy as np
import matplotlib.pyplot as plt

S = [ 56, 56, 320 ]                            # charge-density grid dimensions
L = [ 12.0866874, 12.0866874, 68.030135583 ]   # lattice lengths [bohrs]
z = np.arange(S[2]) * L[2] / S[2]              # z grid [bohrs]

def readPlanarlyAveraged(fileName, S):
    out = np.fromfile(fileName, dtype=np.float64)  # binary JDFTx scalar field
    out = np.reshape(out, S)
    out = np.mean(out, axis=1)                 # average over y
    out = np.mean(out, axis=0)                 # average over x
    return out

dVacuum = readPlanarlyAveraged('Vacuum.d_tot', S)
dShift = dVacuum

print("VBMshift =", dShift[0] * 27.2114)       # Hartrees -> eV
plt.plot(z * 0.529, dShift * 27.2114)          # bohrs -> Angstroms
plt.xlabel('z [Angstroms]')
plt.ylabel('Potential [eV]')
plt.xlim([0, L[2] / 2 * 0.529])
plt.show()
```


  2. I use optb88-vdw here for the vdW correction because we used this functional in VASP and got good results. Following your instructions I tried adding `van-der-waals D3` to my input file, but the output is as follows:

```
Initializing DFT-D3 calculator:
DFT-D3 parameterization not available for gga-optb88-vdw functional.

End date and time: Thu Jun 15 15:04:48 2023  (Duration: 0-0:00:00.30)
Failed.
```

shankar1729 commented 1 year ago

PAW pseudopotentials are not supported by JDFTx: make sure you use norm-conserving or ultrasoft ones.

That electrostatic potential profile, with parabolic behavior in the vacuum region instead of exponential decay, indicates that your slab is charged for some reason. Looking back at your input, the likely culprit is the 0.1 Hartree Gaussian smearing: such a large smearing could be causing free states outside the slab to be occupied. Generally, 0.01 Hartrees is the largest smearing that tends to be sensible for metals. Perhaps there was a mix-up between eV input in VASP and Hartrees here?
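A quick numeric check of that suspected mix-up (plain Python; the 0.1 eV value is a hypothetical VASP-style input):

```
eV_per_Hartree = 27.2114
sigma_eV = 0.1                     # a typical smearing one would type in VASP [eV]
print(sigma_eV / eV_per_Hartree)   # ~0.00367 Hartree; typing 0.1 directly into
                                   # elec-smearing is therefore ~27x too large
```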

And for the vdW: yes, you cannot use optb88-vdw with D3; it is meant to be used with a different type of electronic vdW functional that is not implemented in JDFTx. If you need vdW to be included, use PBE+D3, or check the other functionals supported by D3 in the DFT-D3 paper.

Best, Shankar

miaozhuang68 commented 1 year ago

Dear Shankar, sorry for the typo: I was using a norm-conserving pseudopotential. Following your instructions, I reduced the Gaussian smearing and the problem was solved; and with the PBE + van-der-waals D3 combination, the calculations have been very smooth so far. Thank you for your quick and careful replies in this post; I now have some basic concepts for using JDFTx. Thank you again! Best, Jam

shankar1729 commented 1 year ago

Great, glad that fixed the issue!