psathyrella / partis

B- and T-cell receptor sequence annotation, simulation, clonal family and germline inference, and affinity prediction
GNU General Public License v3.0

Assertion error when annotating #309

Closed NikaAb closed 3 years ago

NikaAb commented 4 years ago

Dear partis team, I get an assertion error while running the annotation command on my dataset. This is the output:


# ./partis annotate  --infname /host/home/Documents/FASTA/P7.fa --outfname /host/home/Documents/Partis/P7.yaml --n-procs 8
  parameter dir does not exist, so caching a new set of parameters before running action 'annotate': _output/_host_home_Documents_FASTA_P7
caching parameters
  vsearch: 290190 / 291381 v annotations (1191 failed) with 171 v genes in 37.2 sec
    keeping 37 / 204 v genes
smith-waterman  (new-allele fitting)
  vsearch: 290186 / 291381 v annotations (1195 failed) with 37 v genes in 93.8 sec
    running 8 procs for 291381 seqs
    running 11 procs for 4845 seqs
Traceback (most recent call last):
  File "./partis", line 742, in <module>
    args.func(args)
  File "./partis", line 250, in run_partitiondriver
    parter.run(actions)
  File "/partis/python/partitiondriver.py", line 120, in run
    self.action_fcns[tmpaction]()
  File "/partis/python/partitiondriver.py", line 269, in cache_parameters
    self.run_waterer(dbg_str='new-allele fitting')
  File "/partis/python/partitiondriver.py", line 216, in run_waterer
    waterer.run(cachefname if write_cachefile else None)
  File "/partis/python/waterer.py", line 108, in run
    self.read_output(base_outfname, len(mismatches))
  File "/partis/python/waterer.py", line 487, in read_output
    self.summarize_query(qinfo)  # returns before adding to <self.info> if it thinks we should rerun the query
  File "/partis/python/waterer.py", line 974, in summarize_query
    assert qname in self.vs_indels
AssertionError

Thanks a lot!

psathyrella commented 4 years ago

oh my goodness, that shouldn't be able to happen. If it'd be possible to pass me your input file so I can reproduce the crash, that would make it faster to fix this; but if not, I can probably poke around and figure out why.

NikaAb commented 4 years ago

Dear @psathyrella, Sorry for bothering you. I've sent you my input file. Thank you so much! P7.zip

psathyrella commented 4 years ago

lol, I think it's more like sorry my partis code is broken ;-)

I'll have a look, thanks.

psathyrella commented 4 years ago

ok, should be fixed. If you're pulling from docker hub, give it a half hour or so for the build to finish: https://hub.docker.com/r/psathyrella/partis/builds.
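
(For reference, a minimal sketch of picking up the rebuilt image once the automated build finishes; the image name comes from the Docker Hub link above, and the latest tag is an assumption:)

    # pull the freshly rebuilt partis image from Docker Hub
    docker pull psathyrella/partis:latest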

NikaAb commented 3 years ago

Dear @psathyrella, Thank you so much for your help. It works perfectly for that fasta file. I have another weird dataset, P5.zip, that gives me this error, this time during partitioning (maybe I should ask about it in another issue?):

Traceback (most recent call last):
  File "./partis", line 805, in <module>
    args.func(args)
  File "./partis", line 261, in run_partitiondriver
    parter.run(actions)
  File "/partis/python/partitiondriver.py", line 125, in run
    self.action_fcns[tmpaction]()
  File "/partis/python/partitiondriver.py", line 541, in partition
    cpath = self.cluster_with_bcrham()
  File "/partis/python/partitiondriver.py", line 769, in cluster_with_bcrham
    cpath, _, _ = self.run_hmm('forward', self.sub_param_dir, n_procs=n_procs, partition=cpath.partitions[cpath.i_best_minus_x], shuffle_input=True)  # note that this annihilates the old <cpath>, which is a memory optimization (but we write all of them to the cpath progress dir)
  File "/partis/python/partitiondriver.py", line 1323, in run_hmm
    self.execute(cmd_str, n_procs)
  File "/partis/python/partitiondriver.py", line 1098, in execute
    utils.run_cmds(cmdfos, batch_system=self.args.batch_system, batch_options=self.args.batch_options, batch_config_fname=self.args.batch_config_fname, debug='print' if self.args.debug else None)
  File "/partis/python/utils.py", line 3499, in run_cmds
    status = finish_process(iproc, procs, n_tries_list[iproc], cmdfos[iproc], n_max_tries, dbgfo=cmdfos[iproc].get('dbgfo'), batch_system=batch_system, debug=debug, ignore_stderr=ignore_stderr, clean_on_success=clean_on_success, allow_failure=allow_failure)
  File "/partis/python/utils.py", line 3617, in finish_process
    raise Exception(failstr)
Exception: exceeded max number of tries (1 >= 1) for subprocess with command:
        /partis/packages/ham/bcrham --algorithm forward --hmmdir /partis/bin/_output/_host_home_nika_ab_Documents_FASTA_P5/hmm/hmms --datadir /tmp/partis-work/hmms/250276/germline-sets --infile /tmp/partis-work/hmms/250276/istep-0/hmm-5/hmm_input.csv --outfile /tmp/partis-work/hmms/250276/istep-0/hmm-5/hmm_output.csv --locus igh --random-seed 1605959680 --only-cache-new-vals --input-cachefname /tmp/partis-work/hmms/250276/istep-0/hmm-5/hmm_cached_info.csv --output-cachefname /tmp/partis-work/hmms/250276/istep-0/hmm-5/hmm_cached_info.csv --partition --max-logprob-drop 5.0 --hamming-fraction-bound-lo 0.015 --hamming-fraction-bound-hi 0.080485740646 --logprob-ratio-threshold 18.0 --biggest-naive-seq-cluster-to-calculate 15 --biggest-logprob-cluster-to-calculate 15 --n-partitions-to-write 10 --ambig-base N
        stderr:           /tmp/partis-work/hmms/250276/istep-0/hmm-5/err
            bcrham: _build/state.cc:95: void ham::State::RescaleOverallMuteFreq(double): Assertion `old_mute_freq > 0.' failed.

May the force be with you for answering my questions ( 〃..)

psathyrella commented 3 years ago

haven't managed to run on this yet, but hopefully can get to it tomorrow.

psathyrella commented 3 years ago

Ok well I cached parameters and partitioned that sample without getting the error. So, hmmm. That bit of code hasn't been touched in 3 years, and it's been run on a lot of weird real data in that time, so I'm inclined to suspect it's something like the hmm model files in the parameter dir got corrupted (maybe parameter caching didn't finish properly or there were two procs writing to the same dir or something?). But I also don't remember how that bit of code works at all (and I can't really poke through it without being able to replicate the error), so it's definitely possible that you're finding some weird edge case.
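
(If the cached parameters did get corrupted, one way to rule that out, sketched here rather than taken from the thread, is to delete the auto-generated parameter directory from the earlier log and re-cache into an explicit --parameter-dir; the directory name fresh-parameters is just a placeholder:)

    # remove the possibly-corrupted auto-cached parameters (path taken from the bcrham command above)
    rm -r _output/_host_home_nika_ab_Documents_FASTA_P5
    # re-cache into a clean directory, then partition against it
    ./bin/partis cache-parameters --infname P5.fa --parameter-dir fresh-parameters --n-procs 8
    ./bin/partis partition --infname P5.fa --parameter-dir fresh-parameters --n-procs 8 --outfname partition.yaml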

So I ran these commands

bd=/fh/fast/matsen_e/dralph/partis/issue-309/P5

./bin/partis cache-parameters --infname $bd/P5.fa --parameter-dir $bd/parameters --n-procs 20
./bin/partis partition --infname $bd/P5.fa --parameter-dir $bd/parameters --n-procs 25 --outfname $bd/partition.yaml >/fh/fast/matsen_e/dralph/partis/issue-309/P5/partition-2.log

with commit caad68bc9cca8ccf82beb58b2b869ee34588848e

and I've copied the input + parameters + output + logs to here (will probably delete after a week or two).

If it still happens for you and you can pass the exact commands + stdout + version, it would be great to figure out what's going on tho.
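
(A small sketch of how one might capture that information, assuming a git checkout of partis; the file names run-info.txt and partition.log are placeholders:)

    cd /path/to/partis                      # wherever partis is checked out
    git rev-parse HEAD > run-info.txt       # record the exact commit ("version")
    ./bin/partis partition --infname P5.fa --parameter-dir parameters --n-procs 8 \
        --outfname partition.yaml > partition.log 2>&1   # keep full stdout/stderr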

NikaAb commented 3 years ago

Dear @psathyrella, Thanks, it works now on all of my weird real datasets. I will probably receive more challenging datasets soon, and I'll let you know if there is an error. Thank you again for your help. I wish you an excellent new year, with more fun and fewer GitHub issues :)

psathyrella commented 3 years ago

woooooo great!

And to you as well!

And issues are great, I figure for each person that bothers to make an issue about something, there were probably three that got annoyed, cursed my name, then gave up.