Hi @hungpham2017
Be in STO-3G has only 4 orbitals, which is a rather small test case; for a system that size it is better to just use FCI and skip DMRG.
Once DMRG gets stuck in a local minimum, it is hard to get out. Normally the test case is big; in that case the noise added during the sweeps helps the optimization escape.
In your case Be in STO-3G is so small (4 orbitals) that it is very likely to get stuck in the wrong state, and no noise is added, because the virtual dimension needed to represent the FCI solution exactly is only 16 (not taking SU(2) into consideration).
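To make that concrete, here is a tiny sketch (my own illustration, not part of CheMPS2) of the maximal MPS virtual dimension per bond when no symmetry is exploited:
# each spatial orbital is a site with local dimension 4 (empty, up, down, doubly occupied)
def max_bond_dims(norb):
    return [min(4**k, 4**(norb - k)) for k in range(1, norb)]

print(max_bond_dims(4))   # [4, 16, 4] -> D = 16 already encodes the exact (FCI) state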
Given the above explanation, please test with a slightly larger system, for example O (oxygen) in 6-31G:
import numpy as np
from functools import reduce
from pyscf import gto, scf, fci, ao2mo
mol = gto.M(atom='O 0 0 0', basis='6-31G')
m = scf.RHF(mol).run()
norb = m.mo_coeff.shape[1]
nelec = mol.nelec
h1e = reduce(np.dot, (m.mo_coeff.T, m.get_hcore(), m.mo_coeff))
g2e = ao2mo.incore.general(m._eri, (m.mo_coeff,)*4, compact=False).reshape(norb,norb,norb,norb)
fs = fci.addons.fix_spin_(fci.FCI(mol, m.mo_coeff), .5)
fs.nroots = 16
e, fcivec = fs.kernel(verbose=0)
import PyCheMPS2
import ctypes, os, sys
Initializer = PyCheMPS2.PyInitialize()
Initializer.Init()
Group = 0
orbirreps = np.zeros([norb], dtype=ctypes.c_int)
HamCheMPS2 = PyCheMPS2.PyHamiltonian(norb, Group, orbirreps)
# Feed the 1e and 2e integrals (T and V)
for orb1 in range(norb):
    for orb2 in range(norb):
        HamCheMPS2.setTmat(orb1, orb2, h1e[orb1, orb2])
        for orb3 in range(norb):
            for orb4 in range(norb):
                HamCheMPS2.setVmat(orb1, orb2, orb3, orb4, g2e[orb1, orb3, orb2, orb4]) # from chemists' to physicists' notation
TwoS = mol.spin  # 2S = N(alpha) - N(beta)
Nel_up = (mol.nelectron + TwoS) // 2
Nel_down = mol.nelectron - Nel_up
Irrep = 0
maxMemWorkMB = m.max_memory
Prob = PyCheMPS2.PyProblem(HamCheMPS2, TwoS, mol.nelectron, Irrep)
OptScheme = PyCheMPS2.PyConvergenceScheme(4)  # 4 sweep instructions
OptScheme.setInstruction(0, 200, 1e-4, 5, 0.03)  # (instruction, D, energy convergence, max sweeps, noise prefactor)
OptScheme.setInstruction(1, 500, 1e-5, 5, 0.03)
OptScheme.setInstruction(2, 1000, 1e-6, 5, 0.001)
OptScheme.setInstruction(3, 1000, 1e-8, 100, 0.00)
theDMRG = PyCheMPS2.PyDMRG(Prob, OptScheme)
EDMRG0 = theDMRG.Solve()
EDMRG = []
theDMRG.activateExcitations(15)  # reserve room for 15 excited states
for i in range(15):
    theDMRG.newExcitation(20.0)  # shift already-converged states up; see the P.S. below
    EDMRG.append(theDMRG.Solve())
print("PySCF : Root0: %15.8f, Root1: %15.8f, Root2: %15.8f, Root3: %15.8f, Root4: %15.8f, Root5: %15.8f, Root6: %15.8f, Root7: %15.8f, Root8: %15.8f, Root9: %15.8f, Root10: %15.8f, Root11: %15.8f, Root12: %15.8f, Root13: %15.8f, Root14: %15.8f, Root15: %15.8f" % (e[0],e[1],e[2],e[3],e[4],e[5],e[6],e[7],e[8],e[9],e[10],e[11],e[12],e[13],e[14],e[15]))
print("CheMPS2: Root0: %15.8f, Root1: %15.8f, Root2: %15.8f, Root3: %15.8f, Root4: %15.8f, Root5: %15.8f, Root6: %15.8f, Root7: %15.8f, Root8: %15.8f, Root9: %15.8f, Root10: %15.8f, Root11: %15.8f, Root12: %15.8f, Root13: %15.8f, Root14: %15.8f, Root15: %15.8f" % (EDMRG0, EDMRG[0], EDMRG[1], EDMRG[2], EDMRG[3], EDMRG[4], EDMRG[5], EDMRG[6], EDMRG[7], EDMRG[8], EDMRG[9], EDMRG[10], EDMRG[11], EDMRG[12], EDMRG[13], EDMRG[14]))
The results are given here:
Run 1
PySCF : Root0: -74.75708817, Root1: -74.75708817, Root2: -74.75708817, Root3: -74.75708817, Root4: -74.75708817, Root5: -74.69661726, Root6: -73.91627244, Root7: -73.91627239, Root8: -73.91627239, Root9: -73.77153553, Root10: -73.77134495, Root11: -73.76997568, Root12: -73.64050889, Root13: -73.63932580, Root14: -73.63835551, Root15: -73.63829142
CheMPS2: Root0: -74.75708817, Root1: -74.75708817, Root2: -74.75708817, Root3: -74.75708817, Root4: -74.75708817, Root5: -74.69661726, Root6: -73.91627281, Root7: -73.91627281, Root8: -73.91627281, Root9: -73.77134991, Root10: -73.77134991, Root11: -73.77134991, Root12: -73.64080297, Root13: -73.64080297, Root14: -73.64080297, Root15: -73.64080297
Run 2
PySCF : Root0: -78.84326124, Root1: -74.75708817, Root2: -74.75708817, Root3: -74.75708817, Root4: -74.75708817, Root5: -74.75708817, Root6: -74.69661726, Root7: -73.91627281, Root8: -73.91627281, Root9: -73.91627278, Root10: -73.77611243, Root11: -73.77133784, Root12: -73.77125551, Root13: -73.64057296, Root14: -73.64043002, Root15: -73.64035129
CheMPS2: Root0: -74.75708817, Root1: -74.75708817, Root2: -74.75708817, Root3: -74.75708817, Root4: -74.75708817, Root5: -74.69661726, Root6: -73.91627281, Root7: -73.91627281, Root8: -73.91627281, Root9: -73.77134991, Root10: -73.77134991, Root11: -73.77134991, Root12: -73.64080297, Root13: -73.64080297, Root14: -73.64080297, Root15: -73.64080297
Run 3
PySCF : Root0: -74.75709041, Root1: -74.75708820, Root2: -74.75708817, Root3: -74.75708817, Root4: -74.75708816, Root5: -74.69661726, Root6: -73.91627299, Root7: -73.91627283, Root8: -73.91627255, Root9: -73.77134943, Root10: -73.77110371, Root11: -73.77094994, Root12: -73.64067206, Root13: -73.64029826, Root14: -73.63934603, Root15: -73.63610232
CheMPS2: Root0: -74.75708817, Root1: -74.75708817, Root2: -74.75708817, Root3: -74.75708817, Root4: -74.75708817, Root5: -74.69661726, Root6: -73.91627281, Root7: -73.91627281, Root8: -73.91627281, Root9: -73.77134991, Root10: -73.77134991, Root11: -73.77134991, Root12: -73.64080297, Root13: -73.64080297, Root14: -73.64080297, Root15: -73.64080297
As you can observe, it is now PySCF that gives varying and wrong solutions in different runs, while CheMPS2 is consistent over the consecutive runs.
@sunqm: Any idea?
Best regards, Sebastian
P.S.: The magic number "20.0" in the script above should be roughly 10 times the difference between the largest and the smallest energy you target. It is better to make the number too large than too small. In chemistry, it makes sense to set it to fabs(energy_ground_state).
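As a concrete illustration of that rule of thumb (just a sketch; using the ground-state energy from the script above is one reasonable choice, not the only one):
shift = abs(EDMRG0)            # ~ fabs(energy of the ground state)
theDMRG.newExcitation(shift)   # instead of the hard-coded 20.0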
The fix_spin function adds a penalty energy to states of the wrong spin. It can introduce noise in the results since it changes the quadratic region of the Hamiltonian. I tested the system on my desktop: with fix_spin, small fluctuations are always found for the degenerate states. Changing the level-shift value, or removing this function, helps to converge to the right results.
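A minimal sketch of trying that in practice (the shift values below are only examples to scan, not recommended defaults):
# scan a few penalty level shifts, and also run once without fix_spin_ for comparison
for shift in (0.2, 0.5, 1.0):
    fs = fci.addons.fix_spin_(fci.FCI(mol, m.mo_coeff), shift)
    fs.nroots = 16
    e_shift, _ = fs.kernel(verbose=0)
    print(shift, e_shift[:3])

fs_plain = fci.FCI(mol, m.mo_coeff)  # no spin penalty at all
fs_plain.nroots = 16
e_plain, _ = fs_plain.kernel(verbose=0)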
Another possible issue is the race-condition bug in the FCI code in old versions of PySCF (<1.5.4), if your local version is not up to date.
Thank you @SebWouters for the comprehensive explanation and your thesis. I am trying to run your example while simply increasing the number of threads: export OMP_NUM_THREADS=4. The calculation is taking long and seems frozen at the step below for more than 10 hours. Is this the correct way to take advantage of OpenMP parallelization in PyCheMPS2?
CheMPS2: a spin-adapted implementation of DMRG for ab initio quantum chemistry
Copyright (C) 2013-2018 Sebastian Wouters
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
Stats: nIt(DAVIDSON) = 14
Energy at sites (7, 8) is -48.4737074036521
Stats: nIt(DAVIDSON) = 23
Energy at sites (6, 7) is -57.2275246628521
Stats: nIt(DAVIDSON) = 63
Energy at sites (5, 6) is -66.3927895371372
@sunqm I am running the oxygen example with my local version (1.5.4) to see if the fluctuations still occur with FCI. One follow-up question: does the race-condition bug in FCI significantly affect the ground state, and multiple-root calculations using the CASSCF solver?
@hungpham2017 The bug affects all states
It is maybe a bit off topic since it is more about the FCI solver in PySCF, but maybe @SebWouters is also interested. @sunqm: here are a few tests. You're right that without using fix_spin there is no fluctuation, and some states have the wrong spin, as expected. fix_spin with a level shift of 1.0 gave more consistent results but produces a spurious ground-state energy of -78.4276810400, similar to what also showed up in @SebWouters's calculation.
fix_spin with a level shift of 0.5 gave fluctuating results. So the question here is: how should the fix_spin function be used efficiently? What would be the optimal value of the level shift? Thank you very much!
Without using the fix_spin function:
#      Run 1            Run 2            Run 3            Run 4        2S + 1 (Run 5)
1 -74.7570881700 -74.7570881700 -74.7570881700 -74.7570881700 1.0000000000
2 -74.7570881700 -74.7570881700 -74.7570881700 -74.7570881700 1.0000000000
3 -74.7570881700 -74.7570881700 -74.7570881700 -74.7570881700 1.0000000000
4 -74.7570881700 -74.7570881700 -74.7570881700 -74.7570881700 1.0000000000
5 -74.7570881700 -74.7570881700 -74.7570881700 -74.7570881700 1.0000000000
6 -74.6966172600 -74.6966172600 -74.6966172600 -74.6966172600 1.0000000000
7 -73.9162728100 -73.9162728100 -73.9162728100 -73.9162728100 1.0000000000
8 -73.9162728100 -73.9162728100 -73.9162728100 -73.9162728100 1.0000000000
9 -73.9162728100 -73.9162728100 -73.9162728100 -73.9162728100 1.0000000100
10 -73.8568848900 -73.8568848900 -73.8568848900 -73.8568848900 5.0000000000
11 -73.8568848900 -73.8568848900 -73.8568848900 -73.8568848900 5.0000000000
12 -73.8568848900 -73.8568848900 -73.8568848900 -73.8568848900 5.0000000000
13 -73.7713499100 -73.7713499100 -73.7713499100 -73.8001318200 1.0000000000
14 -73.7713499100 -73.7713499100 -73.7713499100 -73.7713499100 1.0000000000
15 -73.7713499100 -73.7713499100 -73.7713499100 -73.7713499100 1.0000000000
16 -73.6408029700 -73.6408029700 -73.6408029700 -73.7713499100 1.0000000000
fix_spin with Eshift = 0.5
#      Run 1            Run 2            Run 3            Run 4        2S + 1 (Run 5)
1 -74.7570881700 -74.7570921300 -74.7570881900 -74.7570881900 1.0000000000
2 -74.7570881700 -74.7570881700 -74.7570881700 -74.7570881700 1.0000000000
3 -74.7570881700 -74.7570881700 -74.7570881700 -74.7570881700 1.0000000000
4 -74.7570881700 -74.7570880600 -74.7570881700 -74.7570881700 1.0000000000
5 -74.7570876800 -74.7570870400 -74.7570881600 -74.7570881600 1.0000000000
6 -74.6966172600 -74.6966172600 -74.6966172600 -74.6966172600 1.0000000000
7 -73.9162715800 -73.9162728100 -73.9162727500 -73.9162727500 1.0000000000
8 -73.9162715800 -73.9162728100 -73.9162727500 -73.9162727500 1.0000000000
9 -73.9162713200 -73.9162728100 -73.9162727100 -73.9162727100 1.0000000000
10 -73.7709858400 -73.7744947700 -73.7713534500 -73.7713534500 1.0000000600
11 -73.7703598500 -73.7721523000 -73.7711208700 -73.7711208700 1.0000001000
12 -73.7701488500 -73.7711647800 -73.7680227400 -73.7680227400 1.0000000800
13 -73.6393650500 -73.6593311300 -73.6411810200 -73.6411810200 1.0000000500
14 -73.6387661600 -73.6403737200 -73.6401056900 -73.6401056900 1.0000001000
15 -73.6379877700 -73.6394735300 -73.6380977400 -73.6380977400 1.0000058400
16 -73.6375239500 -73.5721729600 -73.6350080200 -73.6350080200 1.0000006900
fix_spin with Eshift = 1.0
#      Run 1            Run 2            Run 3            Run 4        2S + 1 (Run 5)
1 -78.4276810400 -78.4276810400 -78.4276810400 -78.4276810400 1.00000000
2 -74.7570883000 -74.7570883000 -74.7570883000 -74.7570883000 1.00000000
3 -74.7570881700 -74.7570881700 -74.7570881700 -74.7570881700 1.00000000
4 -74.7570881700 -74.7570881700 -74.7570881700 -74.7570881700 1.00000000
5 -74.7570881700 -74.7570881700 -74.7570881700 -74.7570881700 1.00000000
6 -74.7570881700 -74.7570881700 -74.7570881700 -74.7570881700 1.00000000
7 -74.6966172600 -74.6966172600 -74.6966172600 -74.6966172600 1.00000063
8 -73.9162704000 -73.9162704000 -73.9162704000 -73.9162704000 1.00000063
9 -73.9162704000 -73.9162704000 -73.9162704000 -73.9162704000 1.00000096
10 -73.9162659800 -73.9162659800 -73.9162659800 -73.9162659800 1.00000573
11 -73.7712955500 -73.7712955500 -73.7712955500 -73.7712955500 1.00000826
12 -73.7712075600 -73.7712075600 -73.7712075600 -73.7712075600 1.00108825
13 -73.7377514600 -73.7377514600 -73.7377514600 -73.7377514600 1.00016501
14 -73.6386089300 -73.6386089300 -73.6386089300 -73.6386089300 1.00004948
15 -73.6376151000 -73.6376151000 -73.6376151000 -73.6376151000 1.00046548
16 -73.6341031500 -73.6341031500 -73.6341031500 -73.6341031500 1.00040077
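For reference, the 2S+1 values in the last column can be obtained per root from <S^2>; a minimal sketch using PySCF's spin_op module (reusing norb, nelec, e and fcivec from the script earlier in this thread):
from pyscf.fci import spin_op
for i, ci in enumerate(fcivec):
    ss, multiplicity = spin_op.spin_square(ci, norb, nelec)  # <S^2> and 2S+1 for this root
    print("%2d  %15.8f  %12.10f" % (i + 1, e[i], multiplicity))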
@hungpham2017
I am trying to run your example while simply increasing the number of threads: export OMP_NUM_THREADS=4. The calculation is taking long and seems frozen at the step below for more than 10 hours. Is this the correct way to take advantage of OpenMP parallelization in PyCheMPS2?
When setting
export OMP_NUM_THREADS=4
on my machine, my original example runs fine. So I cannot tell with certainty.
I guess it has something to do with the Davidson residual tolerance being too strict. You can use
OptScheme.set_instruction(0, 200, 1e-6, 5, 0.03, 1e-3)
OptScheme.set_instruction(1, 500, 1e-6, 5, 0.03, 1e-4)
OptScheme.set_instruction(2, 1000, 1e-6, 5, 0.01, 1e-5)
OptScheme.set_instruction(3, 1000, 1e-10, 100, 0.00, 1e-6)
instead. Note that the Davidson residual tolerance is tightened gradually from 1e-3 down to 1e-6. In the first sweeps the MPS is still far off, and it makes no sense to set the convergence criteria too strictly there. You can play with these entries.
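For clarity, here is the same scheme with the knobs spelled out as comments (my reading of the set_instruction arguments; please double-check against the CheMPS2 documentation):
# set_instruction(instruction, D, energy convergence, max sweeps, noise prefactor, Davidson residual tolerance)
OptScheme = PyCheMPS2.PyConvergenceScheme(4)                # 4 sweep instructions
OptScheme.set_instruction(0,  200, 1e-6,   5, 0.03, 1e-3)   # small D, loose Davidson tolerance
OptScheme.set_instruction(1,  500, 1e-6,   5, 0.03, 1e-4)
OptScheme.set_instruction(2, 1000, 1e-6,   5, 0.01, 1e-5)
OptScheme.set_instruction(3, 1000, 1e-10, 100, 0.00, 1e-6)  # final sweeps: large D, tight tolerances, no noise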
@SebWouters I tried loosening the tolerances as you suggested. The CheMPS2 calculation still seems to take very long (it may even hang) and has not finished yet, while the FCI calculation was very quick (1-2 minutes). Is there possibly an installation problem with my local version? How long did the calculation take on your computer? Do you have any reference that compares the computational time of FCI and DMRG for systems where both methods are affordable? Thank you
@hungpham2017
I don't think there's a problem with your installation per se. The only things I can think of are a different floating-point specification (due to OS or compiler) or numerical instability. How do the CheMPS2 binary tests perform? Test2 should also be prone to "hanging", I guess.
The calculation takes only a couple of minutes for all excited states, i.e. less than a minute per state, on my computer.
Regarding comparison of computational efforts: I think starting around 14 orbitals, DMRG becomes more efficient than FCI. For a smaller number of orbitals, it is always best to use FCI.
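As a rough illustration of where that crossover comes from (my own sketch): the number of determinants in the FCI space grows binomially with the number of orbitals, whereas a DMRG sweep scales polynomially in the bond dimension.
from math import factorial

def binom(n, k):
    return factorial(n) // (factorial(k) * factorial(n - k))

def fci_dim(norb, nalpha, nbeta):
    # number of determinants in the Sz-restricted FCI space
    return binom(norb, nalpha) * binom(norb, nbeta)

for n in (10, 14, 20):  # half filling, as an example
    print(n, fci_dim(n, n // 2, n // 2))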
I'll try to get you timing output a.s.a.p.
Best regards, Sebastian
@hungpham2017
@wpoely86 just let me know that there might be a "hanging" problem in CheMPS2 with OpenMP, as he has encountered a similar thing. Can you provide information on your system (OS, compiler, OpenMP library, ...)?
I installed CheMPS2 in my anaconda environment. I apologize for the long response; I just want to show you all the packages I have installed. In the meantime, I am trying to reinstall CheMPS2 manually. You don't need to give me the detailed timing; if it took only a few minutes for you, then there is probably some problem with my local version.
Here is the configuration:
lsb_release -a
LSB Version: :base-4.0-amd64:base-4.0-ia32:base-4.0-noarch:core-4.0-amd64:core-4.0-ia32:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-ia32:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-ia32:printing-4.0-noarch
Distributor ID: CentOS
Description: CentOS release 6.10 (Final)
Release: 6.10
Codename: Final
my anaconda env:
@labc03 [~] % conda list
# packages in environment at /home/gagliard/phamx494/anaconda:
#
# Name Version Build Channel
_ipyw_jlab_nb_ext_conf 0.1.0 py36_0
_tflow_select 2.3.0 mkl
absl-py 0.5.0 py36_0
alabaster 0.7.12 py36_0
anaconda custom py36hbbc8b67_0
anaconda-client 1.7.2 py36_0
anaconda-project 0.8.2 py36_0
asn1crypto 0.24.0 py36_0
astor 0.7.1 py36_0
astroid 2.0.4 py36_0
atomicwrites 1.2.1 py36_0
attrs 18.2.0 py36h28b3542_0
babel 2.6.0 py36_0
backcall 0.1.0 py36_0
backports 1.0 py36_1
backports.functools_lru_cache 1.4 py36_1 conda-forge
backports.os 0.1.1 py36_0
backports.shutil_get_terminal_size 1.0.0 py36_2
backports.weakref 1.0.post1 py36_0
beautifulsoup4 4.6.3 py36_0
bitarray 0.8.3 py36h14c3975_0
blas 1.0 openblas
bleach 1.5.0 py36_0 conda-forge
blosc 1.14.4 hdbcaa40_0
boto 2.49.0 py36_0
bzip2 1.0.6 h14c3975_5
ca-certificates 2018.03.07 0
cairo 1.14.12 h8948797_3
certifi 2018.10.15 py36_0
cffi 1.11.5 py36he75722e_1
chardet 3.0.4 py36_1
chemps2 1.8.7 h8c3debe_2 psi4/label/dev
click 7.0 py36_0
cloog 0.18.0 0
cloudpickle 0.6.1 py36_0
clyent 1.2.2 py36_1
cmake 3.12.3 h011004d_0 conda-forge
colorama 0.4.0 py36_0
conda 4.5.11 py36_0
conda-build 3.16.1 py36_0
conda-env 2.6.0 1
conda-verify 3.1.1 py36_0
contextlib2 0.5.5 py36_0
cryptography 2.3 py36_0 intel
curl 7.61.0 h84994c4_0
cycler 0.10.0 py36_0
cython 0.29 py36hfc679d8_0 conda-forge
cytoolz 0.9.0.1 py36h14c3975_1
dask-core 0.19.4 py36_0
dbus 1.13.2 h714fa37_1
decorator 4.3.0 py36_0
distributed 1.23.3 py36_0
dkh 1.2 1 psi4/label/dev
docutils 0.14 py36_0
eigen 3.3.5 h2d50403_1 conda-forge
entrypoints 0.2.3 py36_2
erd 3.0.6 1 psi4/label/dev
et_xmlfile 1.0.1 py36_0
expat 2.2.6 he6710b0_0
fastcache 1.0.2 py36h14c3975_2
fftw 3.3.8 h470a237_0 conda-forge
filelock 3.0.9 py36_0
flask 1.0.2 py36_1
flask-cors 3.0.6 py36_0
fontconfig 2.13.0 h9420a91_0
freetype 2.9.1 h6debe1e_4 conda-forge
fribidi 1.0.5 h7b6447c_0
future 0.16.0 py36_0
gast 0.2.0 py36_0
gcc-5 5.2.0 1 psi4
gcc-5-mp 5.2.0 0 psi4
gdma 2.2.6 3 psi4/label/dev
get_terminal_size 1.0.0 haa9412d_0
gettext 0.19.8.1 hd7bead4_3
gevent 1.3.7 py36h7b6447c_0
glib 2.56.2 h464dc38_0 conda-forge
glibc214 2.14.1 ha26e528_0 pwwang
glob2 0.6 py36_1
gmp 6.1.2 h6c8ec71_1
gmpy2 2.0.8 py36hc8893dd_2
graphite2 1.3.12 h23475e2_2
greenlet 0.4.15 py36h7b6447c_0
grpcio 1.12.1 py36hdbcaa40_0
gst-plugins-base 1.14.0 hbbd80ab_1
gstreamer 1.14.0 hb453b48_1
h5py 2.8.0 py36ha1f6525_0
harfbuzz 1.9.0 h04dbb29_1 conda-forge
hdf5 1.10.2 hba1933b_1
heapdict 1.0.0 py36_2
html5lib 0.9999999 py36_0 conda-forge
icu 58.2 h9c2bf20_1
idna 2.7 py36_0
imagesize 1.1.0 py36_0
importlib_metadata 0.6 py36_0
intel-openmp 2019.0 118
intelpython 2019.0 2 intel
iomp5 15.0.3 7 psi4
ipykernel 5.1.0 py36h39e3cac_0
ipython 7.0.1 py36h39e3cac_0
ipython_genutils 0.2.0 py36_0
ipywidgets 7.4.2 py36_0
isl 0.12.2 0
isort 4.3.4 py36_0
itsdangerous 0.24 py36_1
jbig 2.1 hdba287a_0
jdcal 1.4 py36_0
jedi 0.13.1 py36_0
jeepney 0.4 py36_0
jinja2 2.10 py36_0
jpeg 9b h024ee3a_2
jsonschema 2.6.0 py36_0
jupyter_client 5.2.3 py36_0
jupyter_console 6.0.0 py36_0
jupyter_core 4.4.0 py36_0
jupyterlab 0.35.1 py36_0
jupyterlab_launcher 0.13.1 py36_0
jupyterlab_server 0.2.0 py36_0
keyring 15.1.0 py36_0
kiwisolver 1.0.1 py36hf484d3e_0
krb5 1.14.6 0 conda-forge
lapack 3.6.1 1 conda-forge
lawrap 0.1 0 psi4/label/dev
lazy-object-proxy 1.3.1 py36h14c3975_2
libarchive 3.3.2 hb43526a_6
libcurl 7.61.0 h1ad7b7a_0
libedit 3.1.20170329 haf1bffa_1 conda-forge
libffi 3.2.1 hd88cf55_4
libgcc 5.2.0 0 msarahan
libgcc-5 5.4.0 2 ostrokach
libgcc-ng 8.2.0 hdf63c60_1
libgfortran 3.0.0 1 conda-forge
libgfortran-ng 7.3.0 hdf63c60_0
libiconv 1.15 h63c8f33_5
libint 1.2.1 0 psi4
libopenblas 0.3.3 h5a2b251_3
libpng 1.6.34 ha92aebf_2 conda-forge
libprotobuf 3.6.0 hdbcaa40_0
libsodium 1.0.16 h1bed415_0
libssh2 1.8.0 h9cfc8f7_4
libstdcxx-ng 8.2.0 hdf63c60_1
libtiff 4.0.9 he85c1e1_2
libtool 2.4.6 h7b6447c_5
libuuid 1.0.3 h1bed415_2
libuv 1.23.2 h470a237_0 conda-forge
libxc 3.0.0 3 psi4
libxcb 1.13 h1bed415_1
libxml2 2.9.8 h26e45fe_1
libxslt 1.1.32 h1312cb7_0
llvm-meta 7.0.0 0 conda-forge
llvmlite 0.25.0 py36hd408876_0
locket 0.2.0 py36_1
lxml 4.2.5 py36hefd8a0e_0
lz4-c 1.8.1.2 h14c3975_0
lzo 2.10 h49e0be7_2
markdown 3.0.1 py36_0
markupsafe 1.0 py36h14c3975_1
matplotlib 3.0.1 h8a2030e_1 conda-forge
matplotlib-base 3.0.1 py36hc039c98_1 conda-forge
mccabe 0.6.1 py36_1
mistune 0.8.4 py36h7b6447c_0
mkl 2019.0 intel_117 intel
mkl-include 2019.0 intel_117 intel
mkl_fft 1.0.6 py36_0 conda-forge
mkl_random 1.0.1 py36_0 conda-forge
more-itertools 4.3.0 py36_0
mpc 1.0.1 0
mpfr 3.1.2 0
mpi 1.0 mpich conda-forge
mpi4py 3.0.0 py36_mpich_3 conda-forge
mpich 3.2.1 h26a2512_5 conda-forge
mpich2 1.4.1p1 0 anaconda
mpmath 1.0.0 py36_2
msgpack-python 0.5.6 py36h6bb024c_1
multipledispatch 0.6.0 py36_0
nbconvert 5.3.1 py36_0
nbformat 4.4.0 py36_0
ncurses 6.1 hfc679d8_1 conda-forge
networkx 2.2 py36_1
ninja 1.8.2 py36h6bb024c_1
nltk 3.3.0 py36_0
nose 1.3.7 py36_2
notebook 5.7.0 py36_0
numba 0.40.0 py36hf8a1672_0 conda-forge
numexpr 2.6.8 py36hf8a1672_0 conda-forge
numpy 1.15.3 py36h99e49ec_0
numpy-base 1.15.3 py36h2f8d375_0
numpydoc 0.8.0 py36_0
olefile 0.46 py36_0
openblas 0.3.3 ha44fe06_1 conda-forge
openmp 7.0.0 h2d50403_0 conda-forge
openpyxl 2.5.8 py36_0
openssl 1.0.2p h14c3975_0
packaging 18.0 py36_0
pandoc 2.2.3.2 0
pandocfilters 1.4.2 py36_1
pango 1.40.14 he752989_2 conda-forge
parso 0.3.1 py36_0
partd 0.3.9 py36_0
patchelf 0.9 he6710b0_3
path.py 11.5.0 py36_0
pathlib2 2.3.2 py36_0
pcmsolver 1.1.10 py36_1 psi4/label/dev
pcre 8.42 h439df22_0
pep8 1.7.1 py36_0
pexpect 4.6.0 py36_0
pickleshare 0.7.5 py36_0
pillow 5.3.0 py36h34e0f95_0
pip 10.0.1 py36_0
pixman 0.34.0 hceecf20_3
pkginfo 1.4.2 py36_1
pluggy 0.7.1 py36h28b3542_0
ply 3.11 py36_0
prometheus_client 0.4.2 py36_0
prompt_toolkit 2.0.6 py36_0
protobuf 3.6.0 py36hf484d3e_0
psutil 5.4.7 py36h14c3975_0
ptyprocess 0.6.0 py36_0
py 1.7.0 py36_0
pybind11 2.2.4 py36hfd86e86_0
pychemps2 1.8.7 py36ha05f3a8_2 psi4/label/dev
pycodestyle 2.4.0 py36_0
pycosat 0.6.3 py36h14c3975_0
pycparser 2.19 py36_0
pycrypto 2.6.1 py36h14c3975_9
pycurl 7.43.0.2 py36hb7f436b_0
pyflakes 2.0.0 py36_0
pygments 2.2.0 py36_0
pylint 2.1.1 py36_0
pyodbc 4.0.24 py36he6710b0_0
pyopenssl 18.0.0 py36_0
pyparsing 2.2.2 py36_0
pyqt 5.6.0 py36_2
pysocks 1.6.8 py36_0
pytest 3.8.2 py36_0
pytest-openfiles 0.3.0 py36_0
pytest-remotedata 0.3.0 py36_0
python 3.6.6 hc3d631a_0
python-dateutil 2.7.3 py36_0
python-libarchive-c 2.8 py36_6
pytz 2018.5 py36_0
pyyaml 3.13 py36h14c3975_0
pyzmq 17.1.2 py36h14c3975_0
qt 5.6.3 h39df351_1
qt5 5.3.1 1 dsdale24
qtawesome 0.5.1 py36_1
qtpy 1.5.1 py36_0
readline 7.0 haf1bffa_1 conda-forge
requests 2.19.1 py36_0
rhash 1.3.6 hb7f436b_0
rope 0.11.0 py36_0
ruamel_yaml 0.15.46 py36h14c3975_0
scikit-learn 0.20.0 py36h22eb022_1
scipy 1.1.0 py36he2b7bc3_1
secretstorage 3.1.0 py36_0
send2trash 1.5.0 py36_0
setuptools 40.4.3 py36_0
simint 0.7 0 psi4
simplegeneric 0.8.1 py36_2
singledispatch 3.4.0.3 py36_0
sip 4.19.8 py36hf484d3e_0
six 1.11.0 py36_1
snappy 1.1.7 hbae5bb6_3
snowballstemmer 1.2.1 py36_0
sortedcollections 1.0.1 py36_0
sortedcontainers 2.0.5 py36_0
sphinx 1.8.1 py36_0
sphinxcontrib 1.0 py36_1
sphinxcontrib-websupport 1.1.0 py36_1
spyder-kernels 0.2.6 py36_0
sqlalchemy 1.2.12 py36h7b6447c_0
sqlite 3.25.2 hb1c47c0_0 conda-forge
sympy 1.3 py36_0
tbb 2019.1 intel_0 intel
tblib 1.3.2 py36_0
tensorboard 1.10.0 py36_0 conda-forge
tensorflow 1.10.0 py36_0 conda-forge
termcolor 1.1.0 py36_1
terminado 0.8.1 py36_1
testpath 0.4.2 py36_0
tk 8.6.8 hbc83047_0
toolz 0.9.0 py36_0
tornado 5.1.1 py36h7b6447c_0
tqdm 4.26.0 py36h28b3542_0
traitlets 4.3.2 py36_0
typed-ast 1.1.0 py36h14c3975_0
typing 3.6.4 py36_0
unicodecsv 0.14.1 py36_0
unixodbc 2.3.7 h14c3975_0
urllib3 1.23 py36_0
wcwidth 0.1.7 py36_0
webencodings 0.5.1 py36_1
werkzeug 0.14.1 py36_0
wheel 0.32.1 py36_0
widgetsnbextension 3.4.2 py36_0
wrapt 1.10.11 py36h14c3975_2
xlrd 1.1.0 py36_1
xlsxwriter 1.1.1 py36_0
xlwt 1.3.0 py36_0
xz 5.2.4 h14c3975_4
yaml 0.1.7 had09818_2
zeromq 4.2.5 hf484d3e_1
zict 0.1.3 py36_0
zlib 1.2.11 ha838bed_2
zstd 1.3.3 h84994c4_0
@hungpham2017, I spot some really old packages and mixed channels in that conda environment. conda has undergone a substantial upgrade of its underlying toolchain. In particular, the gcc and the iomp5 can be updated, the mkl can be obtained from a more consistent channel, and it is always a bad idea to have both openblas and mkl installed in the same env. Psi4 has seen some weird behaviour with that.
I recommend a new conda env: conda create -n nuchemps2 python=3.6 pychemps2 -c psi4. That should get you gcc 7.2 or 7.3 and mkl from the defaults channel (not intel or conda-forge). Pretty much only chemps2 and pychemps2 should come from a non-default channel.
Thank you very much @loriab and @SebWouters; it actually worked after I installed it in a new environment. Before, I was using gcc-5 because the interactive queue at our HPC doesn't support the newer libc.so.6 required for gcc-7. I guess I need to figure out the best way to keep these environments independent without messing up some codes I have already installed.
Hello Seb,
I am trying to compare the PySCF/FCI and CheMPS2/DMRG solvers for excited-state calculations. When I execute the script over and over, FCI is stable while the DMRG solver gives varying orderings of the excited-state energies, and sometimes even returns an excited-state energy for the ground state (a one-root calculation). What could be the possible reason? Maybe there is something wrong in the way I used it. Thank you, Hung
Here is the script: