azhe825 opened this issue 8 years ago
Create a `run.sh`:

```tcsh
#! /bin/tcsh
rm out/*
rm err/*
foreach VAR (drupal academia apple gamedev rpg english electronics physics tex scifi SE0 SE1 SE2 SE3 SE4 SE5 SE6 SE7 SE8 SE9 SE10 SE11 SE12 SE13 SE14)
    bsub -q standard -W 2400 -n 8 -o ./out/$VAR.out.%J -e ./err/$VAR.err.%J mpiexec -n 8 /share2/zyu9/miniconda/bin/python2.7 HPC_Zhe_smote.py _main $VAR
end
```

Make it executable, then run it:

```shell
chmod 775 run.sh
./run.sh
```
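Before actually submitting, it can help to dry-run the loop and confirm it emits one `bsub` command per dataset. A minimal POSIX-sh sketch (the `sh` port and the `echo` prefix are my additions; the dataset list is copied from `run.sh` above):

```shell
# Dry run: prefixing bsub with echo prints each command instead of submitting it.
DATASETS="drupal academia apple gamedev rpg english electronics physics tex scifi \
SE0 SE1 SE2 SE3 SE4 SE5 SE6 SE7 SE8 SE9 SE10 SE11 SE12 SE13 SE14"
COUNT=0
for VAR in $DATASETS; do
    echo bsub -q standard -W 2400 -n 8 -o ./out/$VAR.out.%J -e ./err/$VAR.err.%J \
        mpiexec -n 8 /share2/zyu9/miniconda/bin/python2.7 HPC_Zhe_smote.py _main $VAR
    COUNT=$((COUNT + 1))
done
echo "$COUNT jobs would be submitted"
```

Once the printed commands look right, drop the `echo` prefix (or just run `run.sh`) to submit for real.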
Upload files to the cluster:

```shell
scp filename zyu9@login01.hpc.ncsu.edu:/share2/zyu9/
scp -r folder/ zyu9@login01.hpc.ncsu.edu:/share2/zyu9/
```
so you were speculating you could get a 64-fold speed up (8 * 8)

i had thought it should be 16 * 8 (16 processes per node) but, hey, i'll take 8 * 8

I was considering 25 (jobs) * 8 (processors), but it seems like the 25 jobs will be queued. Need to check it tonight.

25 times 8 = 200

i'd be happy with "merely" a 100-fold speed up
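The back-of-envelope arithmetic above is just (number of jobs) × (processes per job); a two-line Python check:

```python
# Speedup estimates from the discussion above.
single_job = 8 * 8          # one job: speculated 8 * 8 configuration
all_jobs = 25 * 8           # 25 datasets, 8 processors each
print(single_job, all_jobs)
```

Even if queueing halves the realized throughput, that still leaves roughly the 100-fold speed up mentioned above.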
View the queue:

```shell
bqueues -u zyu9
bqueues -u zyu9 -l
```
Run a shell command from Python. Note that the `> test.nt` redirection in the original one-liner is never seen by the command, because `Popen` without `shell=True` does not invoke a shell; capture stdout and write the file explicitly instead:

```python
import subprocess

bashCommand = "cwm --rdf test.rdf --ntriples"
process = subprocess.Popen(bashCommand.split(), stdout=subprocess.PIPE)
output = process.communicate()[0]
with open("test.nt", "wb") as f:
    f.write(output)
```
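On Python 3, `subprocess.run` does the same job as the `Popen`/`communicate` pattern above with less boilerplate. A runnable sketch, using `echo` as a stand-in for `cwm` (which may not be installed locally):

```python
import subprocess

# Stand-in command: swap the list for the real cwm invocation on the cluster.
result = subprocess.run(["echo", "hello from the cluster"],
                        stdout=subprocess.PIPE, check=True)
output = result.stdout.decode()
print(output.strip())
```

`check=True` raises `CalledProcessError` on a nonzero exit status, which is usually what you want for batch scripts.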
Submit an MPI job on the HPC clusters:

```shell
bsub -W 6000 -n 8 -o ./out/out.%J -e ./err/err.%J mpiexec -n 8 /share2/zyu9/miniconda/bin/python2.7 HPC_Zhe_norm.py
```