I needed to specify --reads "data/*_L001_R{1,2}_001.fastq.gz" instead of --reads "data".
--> reopened #8
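For reference, a minimal sketch of the corrected invocation (only the --reads pattern is from above; the profile is an assumption):

```bash
nextflow run nf-core/rrna-ampliseq \
    --reads "data/*_L001_R{1,2}_001.fastq.gz" \
    -profile singularity
```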
Two warning messages appeared:
WARN: Access to undefined parameter readPaths
-- Initialise it to a default value eg. params.readPaths = some_value
WARN: Singularity cache directory has not been defined -- Remote image will be stored in the path: /beegfs/work/bcgsd01/M653/Martyna/work/singularity
Both warnings can be ignored. I will add an initialization step for readPaths, which should get rid of the first one. The second one can be ignored in general :-)
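To silence the second warning, a persistent cache directory can be set; a minimal sketch, assuming the Nextflow version in use supports the NXF_SINGULARITY_CACHEDIR environment variable (the path is hypothetical):

```bash
# hypothetical persistent location; any writable directory works
export NXF_SINGULARITY_CACHEDIR=/beegfs/work/bcgsd01/singularity_cache
```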
Process make_SILVA_132_16S_classifier (which we usually don't test) raises an error: .command.sh: line 2: unzip: command not found :(
Skipping that step for now by using an existing classifier with --classifier and -resume, roughly as sketched below.
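A sketch of that invocation (the classifier path and file extension are hypothetical; the flags are the ones named above):

```bash
nextflow run nf-core/rrna-ampliseq \
    --reads "data/*_L001_R{1,2}_001.fastq.gz" \
    --classifier "path/to/existing-classifier.qza" \
    -resume
```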
Process output_documentation terminated with an error exit status (1)

Command error:
Loading required package: markdown
Error in readLines(con) : cannot open the connection
Calls: markdownToHTML ... renderMarkdown -> tryCatch -> tryCatchList -> readLines
In addition: Warning message:
In readLines(con) : cannot open file 'output.md': No such file or directory
Execution halted
unzip should be present in the container now; I updated the dev branch accordingly.
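Once the rebuilt image is pulled, a quick check would be (a sketch; the image filename is an assumption):

```bash
singularity exec nfcore-rrna-ampliseq-dev.img which unzip
```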
Process output_documentation terminated with an error exit status (1)

Command error:
Loading required package: markdown
Error in readLines(con) : cannot open the connection
Calls: markdownToHTML ... renderMarkdown -> tryCatch -> tryCatchList -> readLines
In addition: Warning message:
In readLines(con) : cannot open file 'output.md': No such file or directory
Execution halted
That might be a missing-file issue. We'd need to log in to binac then and check what happens when we try to access the path to your work directory from inside the Singularity container. Might also be an issue with mount paths etc.... Which Singularity version are we using there?
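For example, to rule out a bind-path problem (a sketch; -B is standard Singularity syntax, paths and image name are taken from the run above):

```bash
# is the host path visible inside the container by default?
singularity exec work/singularity/nfcore-rrna-ampliseq-latest.img \
    ls /home/tu/bcgsd01/.nextflow/assets
# if not, bind the directory explicitly and retry
singularity exec -B /home/tu/bcgsd01 \
    work/singularity/nfcore-rrna-ampliseq-latest.img \
    ls /home/tu/bcgsd01/.nextflow/assets
```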
mv /home/tu/bcgsd01/.nextflow/assets/nf-core/rrna-ampliseq/docs/output.md output.md

works, but
module load devel/singularity/3.0.1
singularity shell work/singularity/nfcore-rrna-ampliseq-latest.img
mv /home/tu/bcgsd01/.nextflow/assets/nf-core/rrna-ampliseq/docs/output.md output.md
mv: cannot stat '/home/tu/bcgsd01/.nextflow/assets/nf-core/rrna-ampliseq/docs/output.md': No such file or directory
also
module unload devel/singularity/3.0.1
module load devel/singularity/2.6.0
singularity pull docker://nfcore/rrna-ampliseq:latest
singularity shell rrna-ampliseq-latest.simg
mv /home/tu/bcgsd01/.nextflow/assets/nf-core/rrna-ampliseq/docs/output.md output.md
mv: cannot stat '/home/tu/bcgsd01/.nextflow/assets/nf-core/rrna-ampliseq/docs/output.md': No such file or directory
is that what you wanted to know?
Exactly!!
This is on BinAC right?
Yeah, I assume it's a bit weird in the sense that we can't map these directories...
Yeah. Try it on our cluster, @d4straub
I started the analysis on the new cfc; I don't really expect complete results before Monday.
What does Monday say? 🥇
ERROR ~ Error executing process > 'metadata_category_all (1)'
Caused by:
Process metadata_category_all (1) terminated with an error exit status (127)
Command error:
singularity: error while loading shared libraries: libseccomp.so.2: cannot open shared object file: No such file or directory
Edit: this occurred using module qbic/singularity_slurm/3.0.1; new trial using qbic/singularity_slurm/2.6.
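i.e. roughly (module names taken from the edit above):

```bash
module unload qbic/singularity_slurm/3.0.1
module load qbic/singularity_slurm/2.6
```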
Why did this happen?!
I don't even know where this is coming from...
Could very well be a Singularity 3.0.1 issue unfortunately :-(
Same with Singularity 2.6; fastqc is failing as well... weird. terminal.txt
Started it again with Singularity 2.5.2:
ERROR ~ Error executing process > 'output_documentation'
Caused by:
Process output_documentation terminated with an error exit status (127)
Command executed:
markdown_to_html.r output.md results_description.html
Command error:
singularity: error while loading shared libraries: libseccomp.so.2: cannot open shared object file: No such file or directory
Edit: terminal output attached terminal.txt
Edit2: Seems to be independent of the Singularity version, and multiple processes are affected...
WTF, we tested locally using -profile test,docker multiple times.
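For the record, a sketch of that local smoke test (the profiles are the ones named above):

```bash
nextflow run nf-core/rrna-ampliseq -profile test,docker
```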
We should test on a workstation again; I fear binac has some module issues again...
This is on our cfc...
It sounds/looks a bit like a Singularity error; copying in @mseybold for clarification.
It seems libseccomp is missing on our servers. Weird that this never happened before.
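A quick check on the affected nodes (a sketch; both are standard tools):

```bash
# does the dynamic linker know about libseccomp?
ldconfig -p | grep libseccomp
# or inspect the singularity binary's shared-library dependencies directly
ldd "$(command -v singularity)" | grep seccomp
```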
[b1/06b254] Submitted process > dada_trunc_parameter
[71/815a21] Submitted process > dada_single (230,229)
[b6/19d627] Submitted process > classifier (1)
[70/872706] Submitted process > filter_taxa (mitochondria,chloroplast)
[fe/3ecb36] Submitted process > RelativeAbundanceReducedTaxa (1)
[28/1b919f] Submitted process > prepare_ancom (usethis,Filter_Pore_Size,Date_Collected)
[02/b7b9e0] Submitted process > barplot (1)
[87/6572ac] Submitted process > export_filtered_dada_output (1)
[59/8e41e1] Submitted process > RelativeAbundanceASV (1)
[c7/082b3d] Submitted process > tree (1)
[6f/8700e6] Submitted process > ancom_asv (Filter_Pore_Size)
[73/3b9fe1] Submitted process > ancom_asv (Date_Collected)
[49/b2dc39] Submitted process > ancom_asv (usethis)
[ad/2e7b1f] Submitted process > ancom_tax (Date_Collected-level6)
[07/adf7af] Submitted process > ancom_tax (Date_Collected-level2)
[4f/2571f0] Submitted process > ancom_tax (Filter_Pore_Size-level3)
[b1/698df2] Submitted process > ancom_tax (Filter_Pore_Size-level2)
[0c/0de9d3] Submitted process > ancom_tax (Date_Collected-level3)
[45/1a709b] Submitted process > ancom_tax (Filter_Pore_Size-level5)
[af/d0f615] Submitted process > ancom_tax (usethis-level6)
[5d/a5d821] Submitted process > ancom_tax (Date_Collected-level5)
[81/ad7260] Submitted process > ancom_tax (usethis-level3)
[4d/935129] Submitted process > ancom_tax (Filter_Pore_Size-level4)
[46/3024e4] Submitted process > ancom_tax (Filter_Pore_Size-level6)
[72/7db1ed] Submitted process > ancom_tax (Date_Collected-level4)
[b7/2b633b] Submitted process > ancom_tax (usethis-level5)
[2b/e66948] Submitted process > ancom_tax (usethis-level4)
[a8/247281] Submitted process > ancom_tax (usethis-level2)
[b1/997a4c] Submitted process > report_filter_stats (1)
[d3/17ee0d] Submitted process > combinetable (1)
[cb/041b64] Submitted process > alpha_rarefaction (1)
[5c/46fdb3] Submitted process > diversity_core (1)
Use the sampling depth of 51275 for rarefaction
[04/39c34a] Submitted process > alpha_diversity (observed_otus_vector)
[75/abcfd2] Submitted process > beta_diversity_ordination (bray_curtis_pcoa_results)
[5a/89d7b3] Submitted process > beta_diversity_ordination (unweighted_unifrac_pcoa_results)
[9b/8ac1a8] Submitted process > beta_diversity_ordination (jaccard_pcoa_results)
[24/cd637f] Submitted process > alpha_diversity (faith_pd_vector)
[3d/1ac9e4] Submitted process > alpha_diversity (shannon_vector)
[e5/277db2] Submitted process > alpha_diversity (evenness_vector)
[b9/5082e5] Submitted process > beta_diversity (weighted_unifrac_distance_matrix)
[f9/d74058] Submitted process > beta_diversity (unweighted_unifrac_distance_matrix)
[91/dee73f] Submitted process > beta_diversity (jaccard_distance_matrix)
[a6/6a3bd4] Submitted process > beta_diversity (bray_curtis_distance_matrix)
[64/8c0f4a] Submitted process > beta_diversity_ordination (weighted_unifrac_pcoa_results)
Looks to me as if it's running pretty well on a bigger test dataset!
Ran through on the test system thor. How about your test on cfc, @d4straub?
Processes dada_single & make_SILVA_132_16S_classifier have been queuing since yesterday, but the processes that were failing before (output_documentation, metadata_category_all) succeeded this time.
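Assuming cfc schedules via Slurm (the qbic/singularity_slurm module name suggests so), the queued jobs can be watched with something like:

```bash
squeue -u "$USER"
```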
I guess we can assume this is resolved, then?
I assume it is. ;)
Test with real data on binac.