Thanks for your interest in using the block2 package.
Thank you very much for your reply.
H48_2.0A_cc-pVDZ_uhf_frag_uno_asrot2gvb24_s_CASCI_500.zip H64_STO-6G_2.0A_88_uhf_uno_asrot2gvb32_s_CASCI_1800.zip
Thanks for providing the script and output files.
It seems that you are using the dmrgscf interface provided by pyscf, which implicitly invokes block2 or block1.5, but your zip files only contain the output from pyscf; the actual DMRG output is missing. You can find the actual DMRG output under the scratch directory created by pyscf. In the pyscf output file, you can find lines like scratchDirectory = /scratch/xcren/pyscf/165842 and outputFile = 165842/dmrg.out, etc. Please attach the dmrg.out file from these directories (for each case) so that we can have a look at the DMRG output.
It is likely that both the block2 and block1.5 results are inaccurate, since they are far from convergence, but we still need to see dmrg.out to confirm this.
Alternatively, you can run block2 without using the dmrgscf interface and follow https://block2.readthedocs.io/en/latest/tutorial/energy-extrapolation.html to get an estimate of the error for your bond dimension; then you will know how reliable the DMRG result is and whether you need a larger bond dimension.
If you do use the dmrgscf interface, you need to follow the block2 documentation https://block2.readthedocs.io/en/latest/user/dmrg-scf.html to set up the CASCI part of your script. For example, when you are running the calculation on only one node, there is no need to use mpirun -n .... Instead, you can simply set the number of threads equal to the number of cores in your node, so that shared-memory parallelization is used.
Also, in your input python script, we can see
mc.max_memory = 602400 # MB
mc.fcisolver = dmrgscf.DMRGCI(mol, maxM=1800)
mc.fcisolver.memory = 10 # GB
From the documentation https://block2.readthedocs.io/en/latest/user/dmrg-scf.html#dmrgscf-serial it should be clear that the memory for DMRG is set via mc.fcisolver.memory, which here is only 10 GB. For the same reason, it is likely that the 4 TB disk is not used by block2. mc.max_memory is an attribute defined and used by the pyscf package itself.
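For example, if the intention is to give block2 most of the node's memory, the relevant lines would look roughly like the following (the 500 GB is only a placeholder, not a recommendation):
mc.max_memory = 602400                         # MB, memory budget used by pyscf itself
mc.fcisolver = dmrgscf.DMRGCI(mol, maxM=1800)
mc.fcisolver.memory = 500                      # GB, memory actually available to block2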
Thank you very much for your reply.
For an 8 x 8 or larger 2D lattice, is this extrapolation still a reliable method? ... This may require a larger bond dimension for extrapolation. Is my understanding correct?
You can read the following paper to get a better understanding of the extrapolation approach:
Olivares-Amaya, R.; Hu, W.; Nakatani, N.; Sharma, S.; Yang, J.; Chan, G. K.-L. The ab-initio density matrix renormalization group in practice. The Journal of Chemical Physics 2015, 142, 034102. doi: 10.1063/1.4905329
Right now free -h shows 246 GB of memory used; this proportion keeps changing and grows as the calculation proceeds.
To get the best efficiency it is important to read and follow the block2 documentation. Please cancel this calculation, delete the scratch files, and then restart the calculation without mpirun (when you are using just one node, we do not need any MPI parallelization), following the script given in https://block2.readthedocs.io/en/latest/user/dmrg-scf.html#dmrgscf-serial. In particular, for your case set
dmrgscf.settings.MPIPREFIX = ''
mc.fcisolver.threads = 32
mc.fcisolver.memory = 100 # mem in GB
Then the memory and disk cost will greatly decrease. The computational speed will also increase.
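For completeness, here is a minimal single-node sketch in the spirit of that documentation page; the geometry, active space, and maxM are placeholders rather than recommendations, so please check the details against the linked page:
import os
from pyscf import gto, scf, mcscf, dmrgscf

dmrgscf.settings.MPIPREFIX = ''   # single node: no mpirun, shared-memory parallelism only

mol = gto.M(atom=[('H', (0, 0, i * 2.0)) for i in range(8)], basis='sto-6g')  # placeholder chain
mf = scf.RHF(mol).run()

mc = mcscf.CASCI(mf, 8, 8)                    # placeholder active space
mc.fcisolver = dmrgscf.DMRGCI(mol, maxM=1800, tol=1e-10)
mc.fcisolver.threads = 32                     # number of cores on the node
mc.fcisolver.memory = 100                     # GB available to block2
mc.fcisolver.runtimeDir = os.path.abspath('./dmrg_run')        # dmrg.out will be written here
mc.fcisolver.scratchDirectory = os.path.abspath('./dmrg_run')  # DMRG scratch files
mc.kernel()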
For example, could I use bond dimensions such as 250, 500, 750, 1000, 1250, and 1500 to extrapolate to the limit?
For energy extrapolation you need to do the reverse schedule and the smallest bond dimension should not be too small. Please read the above paper and the documentation https://block2.readthedocs.io/en/latest/tutorial/energy-extrapolation.html#The-Reverse-Schedule carefully.
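To make the reverse schedule concrete, here is a rough sketch using the standalone pyblock2 driver in the spirit of the linked tutorial; the bond dimensions, sweep counts, and helper names (e.g. get_rhf_integrals) are assumptions to be checked against the tutorial, not a converged setup:
from pyscf import gto, scf
from pyblock2._pyscf.ao2mo import integrals as itg
from pyblock2.driver.core import DMRGDriver, SymmetryTypes

mol = gto.M(atom=[('H', (0, 0, i * 2.0)) for i in range(8)], basis='sto-6g')  # placeholder chain
mf = scf.RHF(mol).run()
ncas, n_elec, spin, ecore, h1e, g2e, orb_sym = itg.get_rhf_integrals(mf, g2e_symm=8)

driver = DMRGDriver(scratch='./dmrg_tmp', symm_type=SymmetryTypes.SU2, n_threads=32)
driver.initialize_system(n_sites=ncas, n_elec=n_elec, spin=spin, orb_sym=orb_sym)
mpo = driver.get_qc_mpo(h1e=h1e, g2e=g2e, ecore=ecore, iprint=1)
ket = driver.get_random_mps(tag='KET', bond_dim=250, nroots=1)

# forward schedule: increase M with noise, converging at the largest M
driver.dmrg(mpo, ket, n_sweeps=20,
            bond_dims=[250] * 4 + [500] * 4 + [1000] * 4 + [1500] * 8,
            noises=[1e-4] * 12 + [1e-5] * 4 + [0] * 4,
            thrds=[1e-7] * 20, iprint=1)

# reverse schedule: decrease M with zero noise; the sweep energy and the largest
# discarded weight at each M are the data points used in the linear extrapolation
driver.dmrg(mpo, ket, n_sweeps=20, tol=0,
            bond_dims=[1250] * 4 + [1000] * 4 + [750] * 4 + [500] * 4 + [250] * 4,
            noises=[0] * 20, thrds=[1e-7] * 20, iprint=1)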
Thank you very much for your reply and suggestions. I will carefully read the paper and documentation above. I understand what you mean: I am indeed running on a single node and do not need MPI parallelization. I will restart the calculation.
Hi, I read the paper and have a question. For arenes, how do I obtain the split-localized orbitals (RHF followed by PM localization?) and the fully-localized orbitals? I want to test this system using pyscf and block2.
Olivares-Amaya, R.; Hu, W.; Nakatani, N.; Sharma, S.; Yang, J.; Chan, G. K.-L. The ab-initio density matrix renormalization group in practice. The Journal of Chemical Physics 2015, 142, 034102. doi: 10.1063/1.4905329
The orbital localization can be done using pyscf. Please have a look at the pyscf documentation https://pyscf.org/user/lo.html. Example scripts can be found in pyscf issues, such as https://github.com/pyscf/pyscf/issues/1892. If you have further questions regarding the usage of pyscf, you may search and/or post issues in the pyscf repo.
Yes, I know that pyscf has these different localization functions.
Are the split-localized orbitals in the paper obtained only through PM localization of RHF orbitals?
That is, RHF orbitals ---> PM localization ---> split-localized orbitals; is that the process?
Split-localization using PM simply means doing PM localization for the occupied orbitals, then PM localization for the virtual orbitals, and then combining the two sets of localized orbitals, as shown in the script in https://github.com/pyscf/pyscf/issues/1892 that I mentioned previously.
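As a rough sketch of that procedure (the geometry and basis are placeholders, and this is not the exact script from the linked issue):
import numpy
from pyscf import gto, scf, lo

# placeholder benzene geometry (Angstrom)
mol = gto.M(atom='''
C  0.000  1.396  0.000; C  1.209  0.698  0.000; C  1.209 -0.698  0.000;
C  0.000 -1.396  0.000; C -1.209 -0.698  0.000; C -1.209  0.698  0.000;
H  0.000  2.479  0.000; H  2.147  1.240  0.000; H  2.147 -1.240  0.000;
H  0.000 -2.479  0.000; H -2.147 -1.240  0.000; H -2.147  1.240  0.000''',
            basis='cc-pvdz')
mf = scf.RHF(mol).run()

nocc = mol.nelectron // 2
# Pipek-Mezey localization, done separately for the occupied and virtual blocks
loc_occ = lo.PM(mol, mf.mo_coeff[:, :nocc]).kernel()
loc_vir = lo.PM(mol, mf.mo_coeff[:, nocc:]).kernel()

# split-localized orbitals: the two localized sets put back together
split_loc_mo = numpy.hstack((loc_occ, loc_vir))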
I hope this message finds you well. I have been utilizing Block2-MPI for my research in DMRG calculations and have encountered some challenges during the process. I would greatly appreciate your assistance in addressing the following questions:
I appreciate your time and assistance tremendously. I look forward to receiving your valuable insights and suggestions to enhance the efficiency of my Block2 calculations.