Hi, I am glad to help you with the problem, but you need to provide some context so that I have enough information to analyze it. For example:
(1) How many orbitals do you have in your system?
(2) How many orbitals are in the active space?
(3) What is your operating system?
(4) How did you install block2?
(5) Have you tried any small systems or the examples in the documentation?
In addition, it will be easier for me to see the problem if you can provide the input and output files or scripts.
Hi Huanchen,
Thank you for your quick reply. Please find the attached tar file with my recent attempt. I installed block2 using pip, and the job runs on a Linux OS.
Thank you, Naveen
I cannot find any attachment. Probably adding the attachment via email does not work. You can try adding the files through the github issue webpage. You may drag your files into the text box.
Please see attached: DMRGSCF-TEST.zip
Thanks for providing the detailed information.
You can fix the problem by setting a proper scratch folder. You can do this by adding the following line at line 3 of your test.py:

lib.param.TMPDIR = '/scratch/..../<username>/<jobid>/...'

Note that you have to look up the specific folder intended for scratch space in the documentation of your supercomputer. Make sure the scratch folder exists before you run the calculation.
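For illustration, a minimal sketch of this setup (the path /scratch/myuser/dmrg_job1 is only a hypothetical placeholder; use the scratch area documented for your cluster):

import os
from pyscf import lib

scratch = '/scratch/myuser/dmrg_job1'  # hypothetical placeholder path
os.makedirs(scratch, exist_ok=True)    # create the folder before the calculation starts
lib.param.TMPDIR = scratch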
Currently, from the output file NPT_100K, I can see that the scratch folder starts with /tmp/, which is not okay. The /tmp directory is not intended for writing a large amount of scratch files, and there is not enough space under /tmp. You have to use a folder whose name does not start with /tmp, and make sure there is enough space (around 1 TB for a medium-size calculation).
For the MKL error, this is often because you are running at least two jobs using the same scratch folder. When you submit more than one job (for testing, etc.), you have to ensure that each job has a different scratch folder. For example, one job writes into /scratch/user/0001 and the second job writes into /scratch/user/0002, so that they will not influence each other.
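One possible way to make the scratch folder unique per job is to derive it from the scheduler's job ID; this is only a sketch, assuming a SLURM-style SLURM_JOB_ID variable and the /scratch/user base path from the example above:

import os
from pyscf import lib

jobid = os.environ.get('SLURM_JOB_ID', 'interactive')   # fallback name if not run under SLURM
lib.param.TMPDIR = os.path.join('/scratch/user', jobid)
os.makedirs(lib.param.TMPDIR, exist_ok=True)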
Thanks for your suggestion; it worked. However, I have now run into a different issue. Please see the attached DMRGSCF-TEST.zip.
Thank you, Naveen
Hi, the problem is that you do not have enough memory for this calculation. It is not related to the number you set in the input file; it is related to the amount of memory available on the node. I noticed that you started 9 processors, each with 4 threads, on one node, which consumes more memory. You may try 1 processor with 36 threads instead, which will reduce the total amount of memory required on the node and may also save you some computational time.
If the memory is still not enough with 1 processor, you may decrease the maximum bond dimension, use a node with a larger amount of memory, or run the job on two or more nodes.
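For illustration only, the relevant changes in the input script might look like the sketch below (it assumes the solvers list and mol object from the attached input script; 36 threads and the memory split are placeholders to adapt to the node):

from pyscf import dmrgscf, lib

dmrgscf.settings.MPIPREFIX = ''              # one process, no 'mpirun -n 9 ...'

for i, mcf in enumerate(solvers):
    mcf.runtimeDir = lib.param.TMPDIR + "/%d" % i
    mcf.scratchDirectory = lib.param.TMPDIR + "/%d" % i
    mcf.threads = 36                         # all 36 cores in a single process
    mcf.memory = int(mol.max_memory / 1000)  # memory per solver in GB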
Thank you so much; it is running with 1 processor and 36 threads without issue. I will get back to you once the job completes.
Hi Huanchen,
I am running into a convergence issue. I am trying to reproduce the results from the paper "International Journal of Quantum Chemistry 2015, 115, 283–299", but I am unable to reach convergence even for the smallest molecule (poly(m-phenylenecarbene)s) with n=1.
Can you please give me any suggestions?
Thank you, Naveen
Hi Naveen,
According to the paper, the smallest molecule (poly(m-phenylenecarbene)s) for n=1 has an active space (14o, 14e), which can be easily handled using FCI as the active space solver. Do you have any problems using the FCI solver (without DMRG) with CASSCF for this system?
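For reference, a minimal sketch of such a check, assuming mf is the converged RHF object for this molecule:

from pyscf import mcscf

# CASSCF(14,14) with the default FCI active-space solver (no DMRG)
mc = mcscf.CASSCF(mf, 14, 14)
mc.kernel()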
Hi Huanchen, yes, I was using the DMRG solvers. This is my input file. Please let me know if I have to make any changes.
from pyscf import gto, scf, lib, dmrgscf, mcscf
import os
lib.param.TMPDIR = '/lcrc/globalscratch/ndandu/gpu3/'
dmrgscf.settings.BLOCKEXE = os.popen("which block2main").read().strip()
dmrgscf.settings.MPIPREFIX = 'mpirun -n 1 --bind-to none'
mol = gto.M(atom='''
C -0.000001000000 1.298599000000 0.000015000000;
C -1.229323000000 0.573451000000 -0.033803000000;
C -1.396884000000 -0.650107000000 -0.733386000000;
C -2.370042000000 1.141507000000 0.586611000000;
C -2.640975000000 -1.256651000000 -0.814972000000;
H -0.544012000000 -1.081947000000 -1.244208000000;
C -3.588123000000 0.486403000000 0.573019000000;
H -2.250672000000 2.095161000000 1.087479000000;
C -3.731888000000 -0.704922000000 -0.143972000000;
H -2.763300000000 -2.171669000000 -1.384270000000;
H -4.442811000000 0.915758000000 1.083610000000;
H -4.699331000000 -1.192317000000 -0.193276000000;
C 1.229282000000 0.573493000000 0.034090000000;
C 1.397020000000 -0.650003000000 0.733691000000;
C 2.369897000000 1.141528000000 -0.586643000000;
C 2.641138000000 -1.256517000000 0.815047000000;
H 0.544253000000 -1.081869000000 1.244675000000;
C 3.587977000000 0.486381000000 -0.573343000000;
H 2.250563000000 2.095219000000 -1.087418000000;
C 3.731909000000 -0.704883000000 0.143704000000;
H 2.763553000000 -2.171432000000 1.384495000000;
H 4.442499000000 0.915734000000 -1.084236000000;
H 4.699338000000 -1.192307000000 0.192796000000
''', basis='ccpvdz', verbose=4, max_memory=270000)  # mem in MB
mf = scf.RHF(mol)
mf.kernel()
from pyscf.mcscf import avas
nactorb, nactelec, coeff = avas.avas(mf, ["C 2p", "C 3p", "C 2s", "C 3s"])
print('CAS = ', nactorb, nactelec)
lib.param.TMPDIR = os.path.abspath(lib.param.TMPDIR)
solvers = [dmrgscf.DMRGCI(mol, maxM=100, tol=1E-4) for _ in range(2)]
weights = [1.0 / len(solvers)] * len(solvers)
solvers[0].spin = 0
solvers[1].spin = 2
for i, mcf in enumerate(solvers):
    mcf.runtimeDir = lib.param.TMPDIR + "/%d" % i
    mcf.scratchDirectory = lib.param.TMPDIR + "/%d" % i
    mcf.threads = 256
    mcf.memory = int(mol.max_memory / 3000)  # mem in GB
mc = mcscf.CASSCF(mf, nactorb, nactelec)
mcscf.state_average_mix_(mc, solvers, weights)
mc.canonicalization = True
mc.natorb = True
mc.kernel(coeff)
Thank you, Naveen
Thanks for providing the input file. It seems that some of the information in this input does not match the paper: (1) the paper uses the C2v geometry; (2) the paper uses a much smaller active space, CAS(16, 16); (3) the paper uses the 6-31g basis set. The following is an example script with these problems fixed:
from pyscf import gto, scf, dmrgscf, lib, mcscf
import numpy as np
import os
dmrgscf.settings.BLOCKEXE = os.popen("which block2main").read().strip()
dmrgscf.settings.MPIPREFIX = ''
lib.param.TMPDIR = os.path.abspath(lib.param.TMPDIR)
mol = gto.M(atom="""
C 0.00000000 0.00000000 0.16532160
C -1.40341205 -0.00002432 0.17348537
C -2.15145391 1.22435369 0.18842388
C -3.55018653 1.21393704 0.18421265
C -4.26282833 -0.00007388 0.17238033
C -3.55014445 -1.21406010 0.18421265
C -2.15141147 -1.22442826 0.18842388
C 1.40341205 0.00002432 0.17348537
C 2.15145391 -1.22435369 0.18842388
C 3.55018653 -1.21393704 0.18421265
C 4.26282833 0.00007388 0.17238033
C 3.55014445 1.21406010 0.18421265
C 2.15141147 1.22442826 0.18842388
H -1.60837670 2.16431556 0.18509150
H -4.09514536 2.15478599 0.18285809
H -5.35309043 -0.00009278 0.16755042
H -4.09507067 -2.15492794 0.18285809
H -1.60830168 -2.16437131 0.18509150
H 1.60837670 -2.16431556 0.18509150
H 4.09514536 -2.15478599 0.18285809
H 5.35309043 0.00009278 0.16755042
H 4.09507067 2.15492794 0.18285809
H 1.60830168 2.16437131 0.18509150
""", basis='6-31g', spin=2, symmetry='c1', verbose=5, max_memory=10000)
mf = scf.RHF(mol)
mf.kernel()
solvers = [dmrgscf.DMRGCI(mol, maxM=750, tol=1E-8) for _ in range(2)]
weights = [1.0 / len(solvers)] * len(solvers)
solvers[0].spin = 0
solvers[1].spin = 2
for i, mcf in enumerate(solvers):
    mcf.runtimeDir = lib.param.TMPDIR + "/%d" % i
    mcf.scratchDirectory = lib.param.TMPDIR + "/%d" % i
    mcf.threads = 28
    mcf.memory = int(mol.max_memory / 1000)  # mem in GB
mc = mcscf.CASSCF(mf, 14, 14)
mcscf.state_average_mix_(mc, solvers, weights)
mc.canonicalization = True
mc.natorb = True
mc.kernel()
In the above script, the HF orbitals are used as the initial guess for the active space, which may not be ideal. If this does not produce a good result, you may need to look at the shapes of the HF/DFT orbitals and then include only the $\pi$ orbitals in the active space, as described in the paper. I am not an expert on defining the active space for these systems, but you can do some tests on selecting the orbitals.
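As an illustration only (not necessarily the paper's procedure), one way to bias the initial active space toward the $\pi$ system is to restrict AVAS to the out-of-plane carbon 2p functions; the sketch below assumes the ring plane is roughly the xy plane, so 'C 2pz' selects the $\pi$-type AOs:

from pyscf.mcscf import avas

# AVAS restricted to the out-of-plane carbon 2p functions (pi-type AOs)
nactorb, nactelec, coeff = avas.avas(mf, ['C 2pz'])
print('pi-type CAS from AVAS:', nactorb, nactelec)

# pass these orbitals as the initial guess to the CASSCF defined above
mc.kernel(coeff)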
Hi Huanchen,
Thanks for your help on my last issue. It worked great for the example. I am trying to run similar calculations on the molecule of interest. I am getting the following error:
1-step CASSCF not converged, 50 macro (982 JK 194 micro) steps
CASSCF canonicalization
FCI vector not available, call CASCI to update wavefunction
Can you please let me know which settings will help increase the number of macro iterations? I did not find a solution for this particular issue on GitHub or in the code.
My current input settings for CASSCF are:

solvers = [dmrgscf.DMRGCI(mol, maxM=750, tol=1E-8) for _ in range(2)]
weights = [1.0 / len(solvers)] * len(solvers)
solvers[0].spin = 0
solvers[1].spin = 2
for i, mcf in enumerate(solvers):
    mcf.runtimeDir = lib.param.TMPDIR + "/%d" % i
    mcf.scratchDirectory = lib.param.TMPDIR + "/%d" % i
    mcf.threads = 28
    mcf.memory = int(mol.max_memory / 1000)  # mem in GB
mc = mcscf.CASSCF(mf, 10, 10)
mcscf.state_average_mix_(mc, solvers, weights)
mc.fcisolver.conv_tol = 1e-5
mc.canonicalization = True
mc.natorb = True
mc.kernel()
Hi, you can find the answer in the pyscf documentation: https://pyscf.org/user/mcscf.html#optimization-settings
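For example, the standard pyscf CASSCF iteration controls are attributes of the CASSCF object; the sketch below assumes the mf object and the mcscf import from the script above, and the specific values are only placeholders:

mc = mcscf.CASSCF(mf, 10, 10)
mc.max_cycle_macro = 100   # maximum number of macro iterations (default 50)
mc.max_cycle_micro = 8     # orbital-rotation micro iterations per macro step (default 4)
mc.conv_tol = 1e-7         # energy convergence threshold
mc.kernel()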
I am trying to run dmrgscf to get CASSCF with DMRG as the active-space solver for a molecule with 9 heavy atoms. While running the code, I am facing several issues: one related to "not finding shared MKL objects" and another related to "running out of scratch space on the /tmp folder". Can you please help me solve these issues? Thank you in advance.