knights-lab / SHOGUN

SHallow shOtGUN profiler
GNU Affero General Public License v3.0

Running SHOGUN according to documentation fails with large datasets #27

Open SJohnsonMayo opened 5 years ago

SJohnsonMayo commented 5 years ago

Hi all,

I'm trying to run SHOGUN on ~340 shallow shotgun samples, and running it as described in the documentation doesn't seem to be working. As far as I can tell, BURST segfaults immediately when I give it the combined_seqs.fna (667GB) that I get from shi7. Is there any reason why SHOGUN can't be run on the individual samples instead, with the resulting tables joined at the end?
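
For concreteness, the per-sample route I have in mind is roughly the sketch below: split combined_seqs.fna back into one FASTA per sample and then run SHOGUN on each file separately. (This assumes shi7's QIIME-style ">SampleName_readNumber" headers; the function name and paths are just illustrative, not part of SHOGUN.)

```python
# Rough sketch (not part of SHOGUN) for splitting shi7's combined_seqs.fna back
# into one FASTA per sample. Assumes QIIME-style headers of the form
# ">SampleName_readNumber"; adjust the parsing if your sample IDs differ.
import os

def split_combined_fasta(combined_path, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    handles = {}  # sample name -> open output file
    try:
        with open(combined_path) as fin:
            sample = None
            for line in fin:
                if line.startswith(">"):
                    read_id = line[1:].split()[0]       # e.g. "Sample1_1234"
                    sample = read_id.rsplit("_", 1)[0]  # e.g. "Sample1"
                    if sample not in handles:
                        handles[sample] = open(
                            os.path.join(out_dir, sample + ".fna"), "w")
                handles[sample].write(line)
    finally:
        for fh in handles.values():
            fh.close()

split_combined_fasta("combined_seqs.fna", "split_samples")
```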

Thanks,

Stephen

GabeAl commented 5 years ago

That will also work! You can combine the OTU tables afterwards. Typically this yields a slight decrease in quality because the capitalist redistribution algorithm won't be able to see all the queries at once to pick the minimal reference set, but with so many samples things should wash out in most cases.
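
The join itself is straightforward; something along the lines of the rough sketch below would do it (this assumes the tab-separated taxatable layout with taxa as rows and one sample column per run, and the directory layout and file names are just placeholders):

```python
# Rough sketch for joining per-sample SHOGUN taxatables afterwards. Assumes the
# tab-separated taxatable layout (taxa as rows, one sample column per run); the
# directory layout and file names are placeholders.
import glob
import pandas as pd

tables = [
    pd.read_csv(path, sep="\t", index_col=0)  # first column = taxonomy string
    for path in glob.glob("per_sample_out/*/taxatable.txt")
]

# Outer join on taxonomy so taxa absent from a sample get zero counts.
combined = pd.concat(tables, axis=1, join="outer").fillna(0)
combined.to_csv("combined_taxatable.txt", sep="\t")
```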

Cheerio, Gabe

SJohnsonMayo commented 5 years ago

Makes sense, thanks so much for the fast response!