Open newplay opened 8 months ago
Yes, those lines in the code ensure that the user isn't trying to evaluate the model on materials that are incompatible with it. The neural network model depends on specific details of the material, namely the atom types, the "spinful" flag, and the angular momenta of the atomic orbitals. Regarding your problem: what material are you trying to evaluate your model on? Please check these three properties and make sure they match those specified in the dataset_info.json under model_path/src/.
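The compatibility check described above can be sketched as follows. This is a minimal illustration, not DeepH-E3's actual code: the key names (atomic_numbers, spinful, orbital_types) are assumptions and may differ from the real dataset_info.json schema.

```python
import json

def check_compatible(model_info_path, eval_info_path):
    """Compare the material properties the model was trained on against
    those of the dataset being evaluated (key names are illustrative)."""
    with open(model_info_path) as f:
        model_info = json.load(f)
    with open(eval_info_path) as f:
        eval_info = json.load(f)
    # The three properties mentioned above: atom types, the spinful flag,
    # and the angular momenta of the atomic orbitals.
    for key in ("atomic_numbers", "spinful", "orbital_types"):
        if model_info.get(key) != eval_info.get(key):
            raise ValueError(
                f"dataset_info mismatch on '{key}': "
                f"{model_info.get(key)!r} vs {eval_info.get(key)!r}"
            )
```

Running this on the model's dataset_info.json and the one for the structure you are evaluating would point at which of the three properties differs.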
I used the Monolayer_MoS2_Demo from your database to compare OpenMX and DeepH-E3. When I ran a simple MoS2 example (unit cell), I encountered this error; however, the example for a 2x2 supercell of MoS2 ran successfully. I initially thought this check was meant to avoid misleadingly accurate results on structures the model had already seen during training (i.e., the correct behavior being overemphasized because it had been trained on), but it seems I was mistaken.
Could you show me the input file for the OpenMX calculation of the unit cell?
Sorry for my delayed response. Here is the content of the 'openmx_in.dat' file:
##################################################
System.Name MoS2
DATA.PATH /home/zjlin/openmx3.9/DFT_DATA19
HS.fileout on
Species.Number 2
<Definition.of.Atomic.Species
Mo Mo7.0-s3p2d2 Mo_PBE19
S S7.0-s2p2d1 S_PBE19
Definition.of.Atomic.Species>
Atoms.Number 3
Atoms.SpeciesAndCoordinates.Unit FRAC
<Atoms.SpeciesAndCoordinates
1 Mo 0.3333329856 0.6666669846 0.3719750881 7.0 7.0 0.0 0.0 0.0 0.0 0
2 S 0.6666669848 0.3333329858 0.4502193077 3.0 3.0 0.0 0.0 0.0 0.0 0
3 S 0.6666669848 0.3333329858 0.2937308686 3.0 3.0 0.0 0.0 0.0 0.0 0
Atoms.SpeciesAndCoordinates>
Atoms.UnitVectors.Unit Ang
<Atoms.UnitVectors
3.1903159618000001 0.0000000000000000 0.0000000000000000
-1.5951581104000001 2.7628940436999998 0.0000000000000000
0.0000000000000000 0.0000000000000000 20.0000000000000000
Atoms.UnitVectors>
scf.XcType GGA-PBE # LDA/LSDA-CA/LSDA-PW/GGA-PBE
scf.ElectronicTemperature 0.0 # default=300 (K) SIGMA in VASP
scf.energycutoff 300 # default=150 (Ry = 13.6eV)
scf.maxIter 2000
scf.EigenvalueSolver Band # DC/DC-LNO/Krylov/ON2/Cluster/Band
scf.Kgrid 6 6 1
scf.criterion 4e-08 # (Hartree = 27.21eV)
scf.partialCoreCorrection on
scf.SpinPolarization off
scf.SpinOrbit.Coupling off
scf.Mixing.Type RMM-DIISK
scf.Init.Mixing.Weight 0.3
scf.Mixing.History 30
scf.Mixing.StartPulay 6
scf.Mixing.EveryPulay 1
1DFFT.EnergyCutoff 3600
1DFFT.NumGridK 900
1DFFT.NumGridR 900
scf.ProExpn.VNA off
MD.Type Nomd # Nomd (SCF) / NVT_NH (MD)
Band.dispersion on
Band.Nkpath 3
<Band.kpath
30 0.000 0.000 0.000 0.500 0.000 0.000 \Gamma M
30 0.500 0.000 0.000 0.333333333 0.333333333 0.000 \M K
30 0.333333333 0.333333333 0.000 0.000 0.000 0.000 \K Gamma
Band.kpath>
### END ###
########################################
Additionally, the training dataset is from the original database you provide (Database 1).

Best regards,
TzuChing
I tried running a test calculation using your input file and didn't find any problem with DeepH-E3, except that I changed the System.Name entry in your input file from MoS2 to openmx. This is because DeepH-E3 parses the openmx.scfout file to get the Hamiltonian and overlap matrices by default. Also, do not forget to run cat openmx.out >> openmx.scfout before running DeepH-E3. If the problem still persists, please let me know.
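The post-processing step above can be sketched as a short shell snippet; the file names follow from setting System.Name to openmx as described, and the comments state assumptions rather than DeepH-E3 internals:

```shell
# After an OpenMX run whose System.Name is "openmx", the run directory is
# assumed to contain openmx.out (text output) and openmx.scfout (binary
# file with the Hamiltonian and overlap matrices).
# DeepH-E3 parses openmx.scfout and expects the text output appended to it:
cat openmx.out >> openmx.scfout
```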
Thank you for your assistance; I will attempt this case again. I believe you are correct: I forgot to execute cat openmx.out >> openmx.scfout before running overlap-matrix and evalue with DeepH-E3.
Dear developer,

I am attempting to replicate the 'Monolayer-MoS2' demo, and during the evaluation process I encountered an error with the message self.dataset_info == net_out_info_o.dataset_info. The complete error traceback is as follows:

Upon inspecting the source code in kernel.py, I noticed that the information comes from the model_path/src/ directory. The content of the dataset_info.json file is as follows:

The target_blocks.json is assumed to be correct, so I haven't provided its details. I identified the lines where the error originates:

So, does this imply that the dataset_info from the model I trained must be the same as that of the dataset I intend to evaluate?
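(For reference, one quick way to see which property differs is to diff the two files directly. The paths below are illustrative: the first follows the model_path/src/ location mentioned above, while the second stands in for wherever the evaluation dataset's dataset_info.json was written.)

```shell
# Any differing line points at the incompatible property
# (atom types, spinful, or orbital angular momenta).
diff model_path/src/dataset_info.json processed_structure/dataset_info.json
```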