uwhpsc-2016 / lectures

Notes, slides, and code from the in-class lectures.

MPI on SMC #24

Open cswiercz opened 8 years ago

cswiercz commented 8 years ago

...from a course announcement.

Number of Cores on SMC

Currently, each student should have access to four CPUs on SMC. These CPUs are dynamically allocated to a student project when parallel code is executed, so if you're interested in somewhat accurate timing results you should run your OpenMP and MPI code multiple times. This is partly because of the overhead of finding available CPUs for your processes on SMC, and partly because many other people are also using SMC and those cores may not be available.

Note that the more traffic there is on SMC, the more likely your code will run slowly during timing tests. Also note that traffic tends to increase as homework deadlines approach.
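For example, one quick way to collect several timings is a small shell loop. (This is just a sketch: `./a.out` and `-n 4` are placeholders for your own executable and process count.)

```shell
# Run the same MPI program five times and record the wall-clock time of
# each run, since any single measurement may be skewed by SMC traffic.
# Substitute your own executable and process count for ./a.out and -n 4.
for i in 1 2 3 4 5; do
    /usr/bin/time -p mpiexec -n 4 ./a.out
done
```

Comparing the `real` times across runs gives a rough sense of how much scheduling noise is in any one measurement.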

The final disclaimer is that 4 CPUs/student * 100 students = 400 CPUs. The SMC server on which our projects live doesn't actually have 400 CPUs, so we'll have to share. The worst-case scenario (and I hope this doesn't happen) is that we'll have to reduce the number of available CPUs per student to two.

Using MPI on SMC

Already some enterprising students are trying out MPI code on SMC. That's great! They also uncovered some minor, now resolved, bugs in the SMC environment that, I swear, were not there several weeks ago when I tested out MPI on SMC.

For now, instead of running

$ mpiexec -n 4 ./a.out  # or whatever number of processes and executable

run

$ mpiexec.mpich -n 4 ./a.out

To briefly explain: Ubuntu's default OpenMPI version has some bugs. We'll instead use Argonne National Lab's MPICH, which is installed on SMC but needs to be explicitly called.
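A full compile-and-run session with MPICH might look like the following. (This is a sketch: it assumes Ubuntu's mpich package, which installs the suffixed `mpicc.mpich` and `mpiexec.mpich` binaries alongside the OpenMPI defaults; `hello.c` is a placeholder for your own source file.)

```shell
# Compile with the MPICH wrapper compiler rather than the default
# (OpenMPI) mpicc, so the binary links against the MPICH libraries.
mpicc.mpich -o hello hello.c

# Launch 4 processes with MPICH's mpiexec.
mpiexec.mpich -n 4 ./hello
```

Mixing the two implementations (compiling with one MPI's wrapper and launching with the other's mpiexec) is a common source of confusing errors, so keep both steps on the `.mpich` variants.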

I was told by the SMC folks that this will be made the default at some point within our projects. I will let you know when that happens.

Chris