Numerical Hessian and NAC vector calculations now distribute the numerical work over N processes, each running with M threads.
For the MPI case:
    export OMP_NUM_THREADS=M
    mpirun -np N openqp input
For the OpenMP case, set [hess]nproc=N in the input file, then:
    export OMP_NUM_THREADS=M
    openqp input
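As a concrete illustration, a run that spreads the displaced-geometry calculations over 4 processes with 2 threads each could look like the following; the file name mol.inp, the values 4 and 2, and the exact layout of the [hess] section are illustrative, not taken from the OpenQP documentation.

    # MPI case: 4 MPI processes, 2 OpenMP threads per process
    export OMP_NUM_THREADS=2
    mpirun -np 4 openqp mol.inp

    # OpenMP case: nproc is set in the [hess] section of the input file, e.g.
    #   [hess]
    #   nproc=4
    export OMP_NUM_THREADS=2
    openqp mol.inp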
Other changes: decouple mpi4py from the molecule class as much as possible.
New MPI decorators were added to mpi_util.py (a minimal sketch follows after this list):
    mpi_get_attr()    - synchronizes the decorated function's return value on all ranks
    mpi_update_attr() - synchronizes mol.data on all ranks
    mpi_write()       - controls MPI logging depending on rank; accepts *args
    mpi_dump()        - controls MPI logging depending on rank; accepts *args and **kwargs
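Below is a minimal sketch of what such decorators could look like with mpi4py. Only the four names and their one-line descriptions come from the change list above; the broadcast-from-rank-0 convention, the wrapper signatures, and the assumption that mpi_update_attr decorates a method of the molecule class are illustrative and not the actual OpenQP implementation.

    # Sketch only: assumes mpi4py and a rank-0-is-source-of-truth convention.
    import functools
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    def mpi_get_attr(func):
        """Synchronize the decorated function's return value on all ranks."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            # every rank returns rank 0's result
            return comm.bcast(result, root=0)
        return wrapper

    def mpi_update_attr(func):
        """Synchronize mol.data on all ranks after the decorated method runs.

        Assumes the decorated function is a method of the molecule class,
        so its first argument is the molecule instance ('mol')."""
        @functools.wraps(func)
        def wrapper(mol, *args, **kwargs):
            result = func(mol, *args, **kwargs)
            # rank 0's mol.data overwrites the copies on all other ranks
            mol.data = comm.bcast(mol.data, root=0)
            return result
        return wrapper

    def mpi_write(func):
        """Run the decorated logging call only on rank 0; forwards *args."""
        @functools.wraps(func)
        def wrapper(*args):
            if rank == 0:
                return func(*args)
        return wrapper

    def mpi_dump(func):
        """Run the decorated dump call only on rank 0; forwards *args and **kwargs."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if rank == 0:
                return func(*args, **kwargs)
        return wrapper

With decorators in this style, a molecule-class method could be wrapped with @mpi_update_attr so that every rank sees the same mol.data after the call, which is one way the molecule class can stay free of direct mpi4py calls.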