Closed appassionate closed 1 year ago
Hi Xiong, thanks for your interest. I have renamed this fix style to pimd/langevin. You can use it by installing the develop branch of LAMMPS from its official repository. The documentation can be found here: https://docs.lammps.org/latest/fix_pimd.html
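For reference, a minimal invocation of this fix style might look like the sketch below. This is a hypothetical example, not taken from DP-PIMD's own inputs; keyword names follow the fix pimd/langevin documentation linked above, so check that page for the exact syntax and defaults before use.

```
# hypothetical sketch: PIMD with a Langevin (PILE_L) thermostat in the NVT ensemble
fix 1 all pimd/langevin method pimd ensemble nvt integrator obabo temp 113.15 thermostat PILE_L 12345 tau 1.0
```

Run it with one partition per bead (e.g. `mpirun -np 4 lmp_mpi -p 4x1 -in in.lammps` for 4 beads).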
If you need further assistance, you can email me.
Best, Yifan
On Wed, Jun 14, 2023 at 5:12 AM xiong ke @.***> wrote:
Hi! I'm trying to compile the fix dp_pimd LAMMPS plugin mentioned in the Overview.md. It seems that the beads branch of your forked lammps has no dp_pimd implementation yet; I found it on the master branch. I have successfully compiled the LAMMPS binary from the master branch, but when I try to execute it on our cluster, I get the following errors...
```
cd ./DP-PIMD/rho=1_nvt
mpirun -np 4 ~/software/lammps_master/src/lmp_mpi -p 4x1 -in in.lj_nvt -log log -screen screen
```
```
Setting up Path-Integral ...
Fatal error in MPI_Allreduce: Invalid buffer pointer, error stack:
MPI_Allreduce(1555): MPI_Allreduce(sbuf=0x3be8898, rbuf=0x3be8898, count=1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD) failed
MPI_Allreduce(1509): Buffers must not be aliased
Fatal error in MPI_Allreduce: Invalid buffer pointer, error stack:
MPI_Allreduce(1555): MPI_Allreduce(sbuf=0x36589e8, rbuf=0x36589e8, count=1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD) failed
MPI_Allreduce(1509): Buffers must not be aliased
```
Is it some error in MPI communication, I guess?
Many thanks! : )
@Yi-FanLi Many thanks for your hint! I have compiled the newest LAMMPS version (March 28) with deepmd-kit 2.2.0, and it runs now.