Installation on batch systems (e.g. using Torque/PBS)
1. Configure with --with-batch (see the PETSc Installation FAQ).
2. Run make as instructed in the last lines of the configure output.
3. Run the conftest<...> binary on a compute node (see below for how to set up a PBS job).
4. Run the Python reconfigure<...> script produced by the previous step on the master node (i.e. directly, without submitting it to the queueing system).
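The steps above can be sketched as the following shell session. This is a sketch only: the exact conftest/reconfigure file names depend on your PETSC_ARCH and are printed by configure, so the <arch> names below are placeholders, not real paths.

```shell
# On the login/master node: cross-configure without running executables.
./configure --with-batch
make                          # use the exact make line configure prints

# Submit the generated conftest binary so it runs on a compute node,
# e.g. via a PBS job script; "conftest-<arch>" is a placeholder for
# the binary name that configure reports.
#   (inside the job script)  ./conftest-<arch>

# After the job completes, back on the master node, run the generated
# reconfigure script directly, NOT through the queueing system:
./reconfigure-<arch>.py       # placeholder; use the script configure produced
```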
Setting up Torque/PBS batch jobs
(qsub submits batch jobs; qstat shows the job queue and status.)
Submit with qsub test.sh, where the bash script test.sh specifies the PBS options as comments:
#!/bin/sh
#PBS -N test                 # job name
#PBS -l nodes=2:ppn=1        # number of compute nodes and processes per node
#PBS -e test.err             # name of error output file
#PBS -o test.log             # name of output log file
cd $PBS_O_WORKDIR            # change to the directory from which the job was submitted
mpirun <mpi_ready_binary>
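Submitting and monitoring the script above might look like the following (the job id shown by qsub is assigned by the server; <job_id> is a placeholder):

```shell
qsub test.sh        # submit the job; prints the assigned job id
qstat               # list your queued and running jobs
qstat -f <job_id>   # detailed status of one specific job
```

Once the job finishes, its stdout and stderr appear in test.log and test.err (the -o and -e files named in the script).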