Closed: colindaven closed this issue 3 years ago
Yeah, that's a really good idea. With lots of big samples it can take quite a long time.
I created a runbatch_bed_to_csv script that starts the run_bed_to_csv script with sbatch for every .calmd.bam file, so all BAM files can be processed in parallel. I don't know why, but when I used srun instead of sbatch I got an error:
slurmstepd-hpc-rc05: error: execve(): run_bed_to_csv.sh: Permission denied
srun: error: hpc-rc05: task 0: Exited with exit code 13
I tried to copy the srun part from the runbatch_wochenende_plot.sh script. Should I push my scripts to dev so you can take a look at them?
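(Side note on the srun error: srun execve()s the named script directly on the compute node, so the script needs the executable bit, whereas sbatch reads and copies the script itself. A `chmod +x run_bed_to_csv.sh`, or invoking it as `srun bash run_bed_to_csv.sh`, usually clears the exit code 13.) For reference, a minimal sketch of such a per-BAM sbatch wrapper, assuming run_bed_to_csv.sh is modified to take the BAM file as its first argument (the glob and argument passing are assumptions, not the actual scripts pushed to dev):

```bash
#!/bin/bash
# Hypothetical runbatch_bed_to_csv.sh sketch: submit one Slurm job
# per .calmd.bam file so all samples are converted in parallel.
# Assumes run_bed_to_csv.sh accepts the BAM file as its first argument.
for bam in *.calmd.bam; do
    echo "Submitting $bam"
    sbatch run_bed_to_csv.sh "$bam"
done
```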
I pushed it now. It's weird, though: I'm now getting the same error I described above when running runbatch_metagen_awk_filter.sh. I don't know why.
I solved these srun (not sbatch) submission issues, but I'm seeing another bug here: the variable is not being read properly.
run_bed_to_csv.sh: line 60: 4122256 Segmentation fault (core dumped) python3 bed_to_pos_csv.py -i "${bed%.unfiltered.bed}".bed -p .
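For context, `${bed%.unfiltered.bed}` is bash suffix removal: it strips a trailing `.unfiltered.bed` from `$bed`. If `$bed` is empty or unset, the expansion silently yields an empty string, so bed_to_pos_csv.py would receive just `.bed` as its input, which could explain a crash like the one above. A minimal defensive sketch, assuming the variable and call from the log (the guards are illustrative, not the actual fix in the PR below):

```bash
# $bed should hold something like sample1.unfiltered.bed.
# ${bed%.unfiltered.bed} strips that suffix, leaving "sample1",
# so the converter input becomes "sample1.bed".
if [[ -z "${bed:-}" ]]; then
    echo "ERROR: \$bed is empty or unset" >&2
    exit 1
fi
input="${bed%.unfiltered.bed}.bed"
if [[ ! -f "$input" ]]; then
    echo "ERROR: expected BED file '$input' not found" >&2
    exit 1
fi
python3 bed_to_pos_csv.py -i "$input" -p .
```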
Now solved with https://github.com/MHH-RCUG/Wochenende/pull/161
Hi @poer-sophia
To speed up the growth rate BAM -> BED calculation, we could specify an input file for run_bed_to_csv.sh, then use a runbatch script wrapper around that with srun to parallelize via Slurm, e.g. along the lines of the sketch below.
What do you think? We would also need to move the link creation scripts.
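A rough sketch of what that srun-based wrapper could look like, assuming run_bed_to_csv.sh is changed to accept an input file as proposed (the resource flags `-c 1` and `--mem` are placeholders, not the project's actual settings):

```bash
#!/bin/bash
# Hypothetical runbatch wrapper: launch run_bed_to_csv.sh once per
# BAM via srun, backgrounding each step so samples run in parallel.
for bam in *.calmd.bam; do
    # "bash script.sh" avoids needing the executable bit on the script.
    srun -c 1 --mem=8G bash run_bed_to_csv.sh "$bam" &
done
# Block until all backgrounded srun steps have finished.
wait
```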