Closed krdav closed 8 years ago
Funny. The vsearch partitioning seems to be working fine, though I find the use of "err:" in the screen output slightly confusing when there does not seem to be any error.
```
out: vsearch v1.1.3_linux_x86_64, 2.0GB RAM, 1 cores https://github.com/torognes/vsearch
err: Reading file /tmp/root/hmms/550327/simu.fasta 0% .... .... ....
     Writing clusters 100%
     Clusters: 294 Size min 1, max 605, avg 17.0
     Singletons: 90, 1.8% of seqs, 30.6% of clusters
vsearch/swarm time: 121.7
total time: 1061.1
```
ah, great, thanks for submitting.
So what the exception above is saying is that it couldn't parse the info it needs from one of the jobs, so it prints all of that job's stdout and stderr -- which say the job was killed. Without knowing what kind of system you're on I can't be sure what killed it, but if you're on a batch system there are a lot of possibilities -- maybe it was killed by hand? Maybe it ran out of memory?
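One generic way to tell those cases apart (not partis's actual code, just a sketch): on Linux, `subprocess` reports a process killed by a signal with a negative return code, and SIGKILL (9) is what the OOM killer sends, so a memory-hungry job that vanishes with -9 is a strong out-of-memory hint.

```python
import signal
import subprocess

def describe_exit(cmd):
    """Run a command and report whether it exited normally or was killed.

    A negative returncode means the child died from a signal; the Linux
    OOM killer uses SIGKILL, so "killed by signal SIGKILL" on a large
    job is a strong (though not conclusive) out-of-memory hint.
    """
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode < 0:
        sig = signal.Signals(-proc.returncode).name
        return f"killed by signal {sig}"
    return f"exited with status {proc.returncode}"
```

For example, `describe_exit(["sh", "-c", "kill -9 $$"])` reports `killed by signal SIGKILL`, while a clean run reports `exited with status 0`.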
Hm, yeah, maybe I should change the label to "stderr:" instead of "err:"?
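For context, the prefix just marks which stream a line came from -- tools like vsearch write progress meters to stderr by convention so that stdout stays clean for piping, which is why an "err:" line need not be an error. A minimal sketch of that kind of labeled capture (an illustration, not partis's actual code):

```python
import subprocess

def label_streams(cmd):
    """Run a command, capture stdout and stderr separately, and prefix
    each line with the stream it came from, in the style of the
    "out:"/"err:" labels in the output above."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    lines = [f"out: {line}" for line in proc.stdout.splitlines()]
    lines += [f"err: {line}" for line in proc.stderr.splitlines()]
    return lines
```

So a command whose progress goes to stderr, e.g. `sh -c 'echo hello; echo progress >&2'`, comes back as `["out: hello", "err: progress"]` even though nothing went wrong.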
This was run with the docker container on my PC, and indeed I think you are right about the memory. I was simply not aware that partis uses so much memory.
Now I will try on a 32-core node with 1TB RAM; hopefully memory will not be an issue then...
Okay, given enough memory, partis runs to completion with no problems to report. So I guess the lesson is not to run jobs with a large number of sequences on a PC.
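A related trick when you do have to run on a PC (a hypothetical wrapper, not something partis provides): cap the child's address space with the Unix-only `resource` module, so a memory-hungry job fails fast with a nonzero exit status instead of dragging the whole machine into swap or getting OOM-killed.

```python
import resource
import subprocess

def run_with_memory_cap(cmd, max_bytes):
    """Run cmd with its address space limited to max_bytes (Linux).

    The limit is applied in the child just before exec, so a job that
    tries to allocate past the cap gets a MemoryError / allocation
    failure and exits nonzero, rather than thrashing the host.
    """
    def limit():
        resource.setrlimit(resource.RLIMIT_AS, (max_bytes, max_bytes))
    return subprocess.run(cmd, preexec_fn=limit).returncode
```

With a 512MB cap, a child that tries to allocate 2GB exits nonzero, while small commands run normally.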
So I was running some VH sequences to test both the annotation and the partitioning. With 500 sequences, both annotation and partitioning work just fine. With 5000 sequences, the annotation still works fine, but the partitioning crashes with the following error:
The file I ran this on is attached: some_seqs2.fa.zip