I might be in the wrong place to ask, so excuse my ignorance; if so, I'll delete my post as soon as possible. We recently got a cluster that uses LSF as its workload manager, and I am struggling with the following issue.
I have 100 files, and I want to parallelise my submissions to save time instead of running the jobs one by one. How can I change this script into a job array in LSF, submitted with bsub?
#BSUB -J ExampleJob1 #Set the job name to "ExampleJob1"
#BSUB -L /bin/bash #Uses the bash login shell to initialize the job's execution environment.
#BSUB -W 2:00 #Set the wall clock limit to 2hr
#BSUB -n 1 #Request 1 core
#BSUB -R "span[ptile=1]" #Request 1 core per node.
#BSUB -R "rusage[mem=5000]" #Request 5000MB per process (CPU) for the job
#BSUB -M 5000 #Set the per process enforceable memory limit to 5000MB.
#BSUB -o Example1Out.%J #Send stdout and stderr to "Example1Out.[jobID]"
path=./home/
for each in *.bam
do
samtools coverage "${each}" -o "${each}_coverage.txt"
done
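For reference, one common pattern is to replace the loop with an LSF job array. Here is a minimal sketch (not a tested, definitive script): it assumes the 100 .bam files sit in the directory you submit from, and relies on LSF setting $LSB_JOBINDEX to this element's index (1..100) and on the %I placeholder expanding to that index in the output file name.

```shell
#!/bin/bash
#BSUB -J coverage[1-100]       # job array with 100 elements, indices 1..100
#BSUB -W 2:00                  # wall clock limit per element
#BSUB -n 1                     # 1 core per element
#BSUB -R "rusage[mem=5000]"    # 5000MB per element
#BSUB -M 5000
#BSUB -o coverage.%J.%I        # %J = job ID, %I = array index

# Collect the .bam files; a bash glob expands in sorted order, so every
# array element sees the same list. Bash arrays are 0-based while
# LSB_JOBINDEX is 1-based, hence the "- 1".
files=(*.bam)
bam=${files[$((LSB_JOBINDEX - 1))]}

samtools coverage "$bam" -o "${bam}_coverage.txt"
```

Each of the 100 elements then processes exactly one file, and they can run concurrently. Note that bsub only reads the #BSUB directives from standard input, so the script is submitted with redirection, e.g. `bsub < myscript.sh`.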
Thank you for your time; any help is appreciated. I am new to LSF and quite confused.