bioinfologics / satsuma2

FFT cross-correlation based synteny aligner, (re)designed to make full use of parallel computing

not shutting down all slaves #29

Open Andrew-N-Black opened 4 years ago

Andrew-N-Black commented 4 years ago

Hello, I'm attempting to identify and remove any sex scaffolds from a reference genome by running the following command through SLURM:

#!/bin/bash
#SBATCH --job-name=cris_scaffold
#SBATCH -A nameless
#SBATCH -t 14-00:00:00
#SBATCH -N 1
#SBATCH --cpus-per-task=10

SatsumaSynteny2 -t ../X.fasta -q ./reference_genome.fa -o chrX -slaves 10 -threads 10

After modifying the "satsuma_run.sh" file to:

# SLURM systems
# (positional parameters are inferred from this call: $2 appears to be the
#  worker command, $3 the CPU count, $4 the memory in GB, $5 the job name)
echo "#!/bin/sh" > slurm_tmp.sh
echo srun $2 >> slurm_tmp.sh
sbatch -A nameless -t 600 -c $3 -J $5 -o ${5}.log --mem ${4}G slurm_tmp.sh
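For what it's worth, the wrapper-generation part of that modification can be sanity-checked locally without a scheduler. This is a minimal sketch, assuming (as the snippet above suggests) that `$2` holds the worker command to be run under `srun`; the `sbatch` submission itself is omitted, and the worker path is a placeholder:

```shell
# Stand in for satsuma_run.sh's $2 (assumption: the worker command).
worker_cmd="/home/satsuma2/bin/HomologyByXCorrSlave --placeholder"

# Reproduce the two echo lines that build the one-shot SLURM wrapper.
echo "#!/bin/sh" > slurm_tmp.sh
echo srun $worker_cmd >> slurm_tmp.sh

# Inspect the generated wrapper before handing it to sbatch.
cat slurm_tmp.sh
```

One thing to watch with this pattern: because the wrapper file is named `slurm_tmp.sh` unconditionally, concurrent slave submissions will overwrite each other's wrapper before `sbatch` reads it, which is worth ruling out when slaves behave oddly.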

I'm running into an issue at the MergeXCorrMatches step:

Merging XCorr matches: 
  /home/satsuma2/bin//MergeXCorrMatches -i chrY/xcorr_aligns.final.out -q ./GCF_000181335.3_Felis_catus_9.0_genomic.fna_10k.fa -t ../Y.fasta -o chrY/satsuma_summary.chained.out > chrY/MergeXCorrMatches.chained.out
**Shutting down all slavesJoining Workqueue thread**

The job then hangs at this step (it was stuck all night).

The final slave log file (SL9) ends with:

worker created, now to work!!!
== Processing finished, waiting for the slaves to die ==
TIME SPENT WORKING: 0

Any advice? I've tried adjusting the number of threads and the memory allocated by SLURM, but I keep running into the same issue. The test data set runs just fine.

Thank you, -Andrew