Closed FrankLee-1987 closed 9 years ago
Thanks, Sergey, for your clarification. Can I use the same library size for my data?
It's pretty tolerant of incorrect insert sizes because most assemblers and metAMOS will re-estimate the insert size. However, I would recommend giving as close an estimate as you know for your data. Most Illumina inserts are 500-800bp so I'd recommend 200:800 as the range if you have Illumina data.
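For example, the suggested Illumina range would be passed to `initPipeline` via the `-i` flag, mirroring the flags used in the test command later in this thread (the read file and project directory names here are placeholders, not from an actual run):

```shell
# Initialize a MetAMOS project with the recommended Illumina
# insert-size range of 200-800 bp. 'my_reads.fna' and 'my_project'
# are hypothetical names for illustration.
initPipeline -f -m my_reads.fna -d my_project -i 200:800
```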
Thanks. Do we need to execute just run_pipeline_test.sh, or all of the *test.sh files? I am getting errors for some of the *test.sh files.
The run_pipeline_test.sh script tests the core functionality. Some of the other tests require optional components and will give errors if those are not installed. Depending on what you installed when you ran python INSTALL.py (i.e. if you included iMetAMOS), you can also run run_sra.sh and run_ima.sh as tests of your installation.
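Concretely, the core test can be run on its own, and the optional tests attempted afterwards; whether the optional ones succeed depends on which components were selected during installation (script names are as given above):

```shell
# Core functionality test; this is the one that should pass
# on any complete MetAMOS installation.
bash run_pipeline_test.sh

# Optional tests; these may fail if iMetAMOS or other optional
# components were not included when running 'python INSTALL.py'.
bash run_sra.sh
bash run_ima.sh
```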
I need help understanding the following:

1) What does the `500:3500` in this command refer to? `../initPipeline -f -m carsonella_pe_filt.fna -d test1 -i 500:3500`
2) I want to customize the software for my dataset: Assemble (MetaVelvet, SOAPdenovo2), FindORFS (MetaGeneMark), Validate (QUAST), Annotate (FCP). I hope these are the runPipeline parameters I need to provide for the above: `runPipeline -a MetaVelvet,soap -c FCP -g MetaGeneMark -X quast -p 15 -d test1 -k 55 -f Assemble,MapReads,FindORFS,Annotate,FunctionalAnnotation,Propagate,Classify,Abundance,FindScaffoldORFS -n FunctionalAnnotation`
3) What is the significance of the workflows like core, iMetAMOS, optional, and deprecated? When should these workflows be used?
4) So what I understood is that run_pipeline_test.sh does not invoke any workflow; it is a customized analysis.