a4000 opened 1 year ago
I would add it into `modules/nf-core`, then either use `nf-core modules patch` to remove the meta map or, probably even easier, just add a fake meta map to the channel before entering the unmodified blastn module.
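For illustration, attaching a fake meta map on the fly might look roughly like this; a minimal sketch in which the channel names (`ch_asv_fasta`, `ch_blast_db`) and the `BLASTN` alias are hypothetical, not the pipeline's actual ones:

```nextflow
// Wrap each FASTA in a minimal meta map so the unmodified nf-core
// blastn module (which expects tuple val(meta), path(fasta)) accepts it.
ch_asv_fasta
    .map { fasta -> [ [ id: fasta.baseName ], fasta ] }
    .set { ch_blast_input }

BLASTN ( ch_blast_input, ch_blast_db )
```

This avoids patching the module at all, at the cost of carrying a dummy meta map through the blastn outputs.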
About the issues:
I can't see why one would have to patch the module to remove the meta map. One can just create a meta map on the fly -- usually only the `id` entry is needed -- before calling the module. (Edit: sorry, hadn't seen that you already proposed this, @d4straub.)
Otherwise, I agree with @d4straub, including his comments for point 3: I think it's a matter of properly staging the directory in which the BLAST database is located or will be created. I see there is already a module for makeblastdb, so I suppose it's just a matter of calling it the correct way.
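Chaining the two modules could be sketched as below. This assumes the nf-core `blast/makeblastdb` and `blast/blastn` modules; the exact input shapes depend on the module version, and the channel names and `blast_ref_fasta` parameter are made up for illustration:

```nextflow
include { BLAST_MAKEBLASTDB } from '../modules/nf-core/blast/makeblastdb/main'
include { BLAST_BLASTN      } from '../modules/nf-core/blast/blastn/main'

workflow {
    // Hypothetical parameter pointing at the reference FASTA.
    ch_ref = Channel.fromPath( params.blast_ref_fasta )

    // Build the database once; makeblastdb emits the staged db directory,
    // which blastn then receives as a directory input -- no manual
    // path handling needed.
    BLAST_MAKEBLASTDB ( ch_ref )
    BLAST_BLASTN ( ch_asv_fasta_with_meta, BLAST_MAKEBLASTDB.out.db )
}
```

Because the database is passed between processes as a staged directory, Nextflow takes care of making it visible inside the container.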
Come to think of it: We're already using VSEARCH, which is basically doing the same thing as BLASTN but at least an order of magnitude faster -- can't that be used instead?
I haven't tried using VSEARCH for classification, but I'll give it a try tomorrow. If it does the same thing, but faster, then the only argument I still have for adding blastn is for cases when someone wants to use an existing BLAST database. For example, if someone is working on a supercomputer that stores and manages a BLAST nt database.
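A rough sketch of what a VSEARCH-based classification step might look like; `--usearch_global` with `--top_hits_only` roughly mirrors a blastn top-hit search, and the process name, identity threshold, and file names here are illustrative only:

```nextflow
process VSEARCH_CLASSIFY {
    input:
    path asv_fasta
    path ref_fasta

    output:
    path 'hits.tsv'

    script:
    """
    # Global alignment of ASVs against the reference; blast6out writes
    # tabular output in the familiar BLAST outfmt 6 layout.
    vsearch --usearch_global ${asv_fasta} \\
        --db ${ref_fasta} \\
        --id 0.9 \\
        --top_hits_only \\
        --blast6out hits.tsv
    """
}
```

One caveat of this substitution: VSEARCH wants a plain FASTA reference, so it would not help users who only have a preformatted BLAST database.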
Hi, I think a strong case can be made for incorporating blast as an option rather than only keeping VSEARCH, as @a4000 has suggested.
Increasingly there will be a need for re-analysis of data, especially when longitudinal bio-monitoring projects will become more standard practice in e.g. marine environments. A lot of data sets already created had taxonomic annotation performed via blast. One would at least want to be able to recreate those results in a backward compatible manner. As there is already an nf-core module available, incorporation should not require a huge amount of work?
Imho, re-analysis of the relevant data from scratch might be more helpful than attempting to use old methods to make data comparable ("old" here doesn't refer to blast in particular, but to all steps involved in analysing raw data). And ampliseq supports large data batches.
> As there is already an nf-core module available, incorporation should not require a huge amount of work?
The implementation effort just to run blast on ASVs should indeed be relatively small. But the module covers only a small part of what's needed (and the easiest part, imho). I would be happy to be proven wrong, though!
Description of feature
There is already an nf-core module for blastn. It just needs to be modified to be more compatible with Ampliseq (mostly by removing the part of the module that expects a meta map). If we do modify an nf-core module, I'm not sure whether we should put it in the `modules/nf-core` or `modules/local` sub-directory. There are three main obstacles I can see with adding this module to Ampliseq:

1. There would also need to be a makeblastdb step for cases when the user doesn't already have a local BLAST database.
2. I'm not sure how compatible the blastn output files are with the other downstream steps of Ampliseq. If that is an issue, we may need a post-blastn step to modify the output files.
3. When using a local BLAST database in Nextflow, depending on the local directory structure, the `docker run` command may need to bind a directory to a volume. My way of getting around this is to have a `bind_dir` parameter with `null` as the default; in the config file I then pass the `-v` option to Docker if `bind_dir` is not null.