t-neumann / slamdunk

Streamlining SLAM-seq analysis with ultra-high sensitivity
GNU Affero General Public License v3.0
38 stars 23 forks

error when running count dunk #103

Closed jonwruffin closed 3 years ago

jonwruffin commented 3 years ago

I get the following error when running the count dunk (all previous dunks successfully completed):

(slammer2) jonwruffin@DESKTOP-DGJU5CE:/mnt/e/slamseq_run_9_9_2021$ slamdunk count -o /mnt/e/slamseq_run_9_9_2021/count -s /mnt/e/slamseq_run_9_9_2021/snp/ -r /mnt/e/slamseq_run_9_9_2021/hg19_no_alt_analysis_set.fa -b /mnt/e/slamseq_run_9_9_2021/pure_UTR_3_hg19_ensemble.bed /mnt/e/slamseq_run_9_9_2021/filter/*_slamdunk_mapped_filtered.bam

Running slamDunk tcount for 36 files (1 threads)
Traceback (most recent call last):
  File "/home/jonwruffin/anaconda3/envs/slammer2/bin/slamdunk", line 10, in <module>
    sys.exit(run())
  File "/home/jonwruffin/anaconda3/envs/slammer2/lib/python3.9/site-packages/slamdunk/slamdunk.py", line 516, in run
    results = Parallel(n_jobs=n, verbose=verbose)(delayed(runCount)(tid, args.bam[tid], args.ref, args.bed, args.maxLength, args.minQual, args.conversionThreshold, outputDirectory, snpDirectory, vcfFile) for tid in range(0, len(args.bam)))
  File "/home/jonwruffin/anaconda3/envs/slammer2/lib/python3.9/site-packages/joblib/parallel.py", line 1041, in __call__
    if self.dispatch_one_batch(iterator):
  File "/home/jonwruffin/anaconda3/envs/slammer2/lib/python3.9/site-packages/joblib/parallel.py", line 859, in dispatch_one_batch
    self._dispatch(tasks)
  File "/home/jonwruffin/anaconda3/envs/slammer2/lib/python3.9/site-packages/joblib/parallel.py", line 777, in _dispatch
    job = self._backend.apply_async(batch, callback=cb)
  File "/home/jonwruffin/anaconda3/envs/slammer2/lib/python3.9/site-packages/joblib/_parallel_backends.py", line 208, in apply_async
    result = ImmediateResult(func)
  File "/home/jonwruffin/anaconda3/envs/slammer2/lib/python3.9/site-packages/joblib/_parallel_backends.py", line 572, in __init__
    self.results = batch()
  File "/home/jonwruffin/anaconda3/envs/slammer2/lib/python3.9/site-packages/joblib/parallel.py", line 262, in __call__
    return [func(*args, **kwargs)
  File "/home/jonwruffin/anaconda3/envs/slammer2/lib/python3.9/site-packages/joblib/parallel.py", line 262, in <listcomp>
    return [func(*args, **kwargs)
  File "/home/jonwruffin/anaconda3/envs/slammer2/lib/python3.9/site-packages/slamdunk/slamdunk.py", line 202, in runCount
    tcounter.computeTconversions(ref, bed, inputSNP, bam, maxLength, minQual, outputCSV, outputBedgraphPlus, outputBedgraphMinus, conversionThreshold, log)
  File "/home/jonwruffin/anaconda3/envs/slammer2/lib/python3.9/site-packages/slamdunk/dunks/tcounter.py", line 165, in computeTconversions
    for utr in BedIterator(bed):
  File "/home/jonwruffin/anaconda3/envs/slammer2/lib/python3.9/site-packages/slamdunk/utils/BedReader.py", line 81, in __next__
    return self._toBED(self._bedFile.__next__())
  File "/home/jonwruffin/anaconda3/envs/slammer2/lib/python3.9/site-packages/slamdunk/utils/BedReader.py", line 67, in _toBED
    bedEntry.start = int(cols[1])
IndexError: list index out of range

I initially ran "slamdunk all" and it hit this error on the count step; rerunning the count step separately produces the same error message. Any advice on how to proceed? Thanks in advance.
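For context, the last frame of the traceback shows where the parse fails: slamdunk's BedReader splits each line on tabs and does `int(cols[1])`. A minimal sketch (with a hypothetical header line, not slamdunk's actual reader code) of how a non-data line produces this exact `IndexError`:

```python
# Sketch of the failing parse: a header line (e.g. a "track" line) contains
# no tabs, so splitting on "\t" yields a single column and cols[1] is missing.
header = "track name=3UTRs"  # hypothetical header line, not real data
cols = header.split("\t")

caught = False
try:
    start = int(cols[1])  # mirrors bedEntry.start = int(cols[1]) in the traceback
except IndexError:
    caught = True  # "list index out of range", as reported above

# A regular tab-separated BED line parses fine:
data = "chr1\t100\t200\tENSG00000001"
start = int(data.split("\t")[1])
```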

jonwruffin commented 3 years ago

To add to this, I get the same error when running alleyoop utrrates.

t-neumann commented 3 years ago

Hi,

Looks like there's something wrong with the BED file - how does it look?

jonwruffin commented 3 years ago

Fixed. The BED file had a header line; removing it solved the problem.
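For anyone hitting the same error: a quick sketch of filtering out the usual non-data BED lines ("track", "browser", and "#" comments) before rerunning slamdunk count. The filenames and helper name here are hypothetical, not part of slamdunk.

```python
# Sketch: drop common BED header/comment lines so every remaining line
# has the tab-separated columns the count dunk expects.
def strip_bed_header(lines):
    return [l for l in lines
            if not l.lstrip().startswith(("track", "browser", "#"))]

bed_lines = [
    "track name=3UTRs\n",                     # header: would crash int(cols[1])
    "chr1\t100\t200\tENSG00000001\t0\t+\n",   # regular 6-column BED entry
]
clean = strip_bed_header(bed_lines)
```

On disk, something like `grep -v -e '^track' -e '^browser' -e '^#' in.bed > out.bed` should achieve the same before passing the file to slamdunk.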