Closed miroi closed 5 years ago
Hello,
my OpenMPI applications are crashing on our cluster; we do not know whether this is due to an old Linux kernel. Here is the info:
OpenMPI installed as:
Error when running the application:
and on the comp04 node the g++ version is lower:
Well, on the main node the application runs fine (there we have g++ 6.3 thanks to the installed devtoolset):
milias@login.grid.umb.sk:~/Work/open-collection/theoretical_chemistry/software_runs/lammps/runs/melt/.mpirun -np 4 /home/milias/Work/qch/software/lammps/lammps_stable/src/lmp_mpi -in in.melt
LAMMPS (7 Aug 2019)
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
Created orthogonal box = (0 0 0) to (16.796 16.796 16.796)
1 by 2 by 2 MPI processor grid
Created 4000 atoms
create_atoms CPU = 0.000918659 secs
Neighbor list info ...
update every 20 steps, delay 0 steps, check no
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 12 12 12
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair lj/cut, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Setting up Verlet run ...
Unit style : lj
Current step : 0
Time step : 0.005
Per MPI rank memory allocation (min/avg/max) = 2.706 | 2.706 | 2.706 Mbytes
Step Temp E_pair E_mol TotEng Press
0 3 -6.7733681 0 -2.2744931 -3.7033504
50 1.6754119 -4.7947589 0 -2.2822693 5.6615925
100 1.6503357 -4.756014 0 -2.2811293 5.8050524
150 1.6596605 -4.7699432 0 -2.2810749 5.7830138
200 1.6371874 -4.7365462 0 -2.2813789 5.9246674
250 1.6323462 -4.7292021 0 -2.2812949 5.9762238
Loop time of 0.34736 on 4 procs for 250 steps with 4000 atoms
Performance: 310916.549 tau/day, 719.714 timesteps/s
92.4% CPU use with 4 MPI tasks x no OpenMP threads
MPI task timing breakdown:
Section |  min time  |  avg time  |  max time  |%varavg| %total
---------------------------------------------------------------
Pair    | 0.22404    | 0.23568    | 0.25712    |   2.6 | 67.85
Neigh   | 0.027579   | 0.028371   | 0.029572   |   0.5 |  8.17
Comm    | 0.04958    | 0.072869   | 0.084434   |   5.1 | 20.98
Output  | 0.0003319  | 0.00036758 | 0.00042612 |   0.0 |  0.11
Modify  | 0.0057379  | 0.0059653  | 0.006437   |   0.4 |  1.72
Other   |            | 0.004107   |            |       |  1.18
Nlocal: 1000 ave 1010 max 982 min
Histogram: 1 0 0 0 0 0 1 0 0 2
Nghost: 2703.75 ave 2713 max 2689 min
Histogram: 1 0 0 0 0 0 0 2 0 1
Neighs: 37915.5 ave 39239 max 36193 min
Histogram: 1 0 0 0 0 1 1 0 0 1
Total # of neighbors = 151662
Ave neighs/atom = 37.9155
Neighbor list builds = 12
Dangerous builds not checked
Total wall time: 0:00:00
milias@login.grid.umb.sk:~/Work/open-collection/theoretical_chemistry/software_runs/lammps/runs/melt/.mpirun --version
mpirun (Open MPI) 4.0.1
Report bugs to http://www.open-mpi.org/community/help/
milias@login.grid.umb.sk:~/Work/open-collection/theoretical_chemistry/software_runs/lammps/runs/melt/.mpiCC --version
g++ (GCC) 6.3.1 20170216 (Red Hat 6.3.1-3)
Copyright (C) 2016 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
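Since a devtoolset compiler is only visible where the software collection is enabled, one quick way to compare toolchains between the login node and a compute node is a sketch like the following (assuming devtoolset-6, which ships g++ 6.3.1, and ssh access to comp04; the exact collection name is an assumption):

# Default system compiler on the compute node:
ssh comp04 'g++ --version'

# Compiler provided by the devtoolset collection, if it is installed there:
ssh comp04 'scl enable devtoolset-6 "g++ --version"'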
Set PMIX_MCA_gds=hash in your environment - that should fix the problem.
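For example, a minimal sketch of applying the workaround in a bash session (the lmp_mpi path is the one used above; adjust to your setup):

# Force PMIx to use its plain hash gds component instead of the
# shared-memory one, which appears to be what misbehaves here:
export PMIX_MCA_gds=hash

# Then launch as before:
mpirun -np 4 /home/milias/Work/qch/software/lammps/lammps_stable/src/lmp_mpi -in in.melt

To make it stick for batch jobs, the export line can also go into ~/.bashrc or the job script.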
Yes, this helped! Many thanks, I am closing this issue as SOLVED.