
HPC #5

Open azhe825 opened 8 years ago

azhe825 commented 8 years ago

Submit an MPI job on the HPC cluster:

bsub -W 6000 -n 8 -o ./out/out.%J -e ./err/err.%J mpiexec -n 8 /share2/zyu9/miniconda/bin/python2.7 HPC_Zhe_norm.py
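For reference: in LSF, -W takes the run limit in minutes (so -W 6000 is 100 hours), -n 8 requests 8 slots, and %J in the -o/-e paths expands to the job ID. Once submitted, the job can be watched with the standard LSF tools, e.g. (a sketch; <jobid> stands for whatever ID bsub printed back):

bjobs                  # list my pending/running jobs and their status
bpeek <jobid>          # peek at the stdout of a running job
bkill <jobid>          # kill the job if something went wrong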

azhe825 commented 8 years ago

Create a run.sh:

#! /bin/tcsh

# clear logs from previous runs
rm out/*
rm err/*

# submit one 8-way MPI job per dataset
foreach VAR (drupal academia apple gamedev rpg english electronics physics tex scifi SE0 SE1 SE2 SE3 SE4 SE5 SE6 SE7 SE8 SE9 SE10 SE11 SE12 SE13 SE14)
  bsub -q standard -W 2400 -n 8 -o ./out/$VAR.out.%J -e ./err/$VAR.err.%J mpiexec -n 8 /share2/zyu9/miniconda/bin/python2.7 HPC_Zhe_smote.py _main $VAR
end

chmod 775 run.sh
./run.sh
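A quick sanity check that all 25 jobs made it into the queue (a sketch; assumes the standard bjobs output with one header line):

bjobs -u zyu9 | tail -n +2 | wc -l    # should print 25 right after submission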

azhe825 commented 8 years ago

Upload:

scp filename zyu9@login01.hpc.ncsu.edu:/share2/zyu9/
scp -r folder/ zyu9@login01.hpc.ncsu.edu:/share2/zyu9/
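Fetching results back to a local machine is the same command in reverse (a sketch; the out/ directory matches the bsub -o path above, but the single file name is hypothetical):

scp zyu9@login01.hpc.ncsu.edu:/share2/zyu9/out/out.12345 .
scp -r zyu9@login01.hpc.ncsu.edu:/share2/zyu9/out/ ./out/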

timm commented 8 years ago

so you were speculating you could get a 64-fold speedup (8*8)

i had thought it should be 16*8 (16 processes per node) but, hey, i'll take 8*8

azhe825 commented 8 years ago

I was considering 25 (jobs) * 8 (processors), but it seems like the 25 jobs will be queued rather than all running at once. I need to check tonight.

timm commented 8 years ago

25 times 8 = 200

i'd be happy with "merely" a 100-fold speedup

t

azhe825 commented 8 years ago

View queue status:

bqueues -u zyu9
bqueues -u zyu9 -l
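bqueues reports per-queue state; for the per-job view, the matching LSF commands are (standard LSF, shown as a sketch):

bjobs -u zyu9        # one line per job: id, status, queue, submit time
bjobs -u zyu9 -l     # long format with full per-job details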

azhe825 commented 8 years ago

Run a shell command from Python:

import subprocess

# Note: ">" redirection is shell syntax; with Popen(args) and no shell, cwm would
# receive ">" and "test.nt" as plain arguments. Capture stdout and write the file instead.
bashCommand = "cwm --rdf test.rdf --ntriples"
process = subprocess.Popen(bashCommand.split(), stdout=subprocess.PIPE)
output = process.communicate()[0]
with open("test.nt", "wb") as f:
    f.write(output)

azhe825 commented 6 years ago

For ARC:

Login (note: ssh -i takes a private key file, not a password):

ssh -i <keyfile> UnityId@arc.csc.ncsu.edu
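If ARC uses key-based login, a typical OpenSSH key setup would be (generic OpenSSH usage, an assumption rather than anything ARC-specific; the key path is hypothetical):

ssh-keygen -t rsa -f ~/.ssh/arc_key                      # generate the key pair
ssh-copy-id -i ~/.ssh/arc_key UnityId@arc.csc.ncsu.edu   # install the public key
ssh -i ~/.ssh/arc_key UnityId@arc.csc.ncsu.edu           # log in with it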