**Closed** · ksenia-krasheninnikova closed this 9 months ago
`nf-core lint` overall result: Passed :white_check_mark: :warning:

Posted for pipeline commit c9d0d05

```diff
+| ✅ 129 tests passed       |+
#| ❔  19 tests were ignored |#
!| ❗   5 tests had warnings |!
```
For the record, the nf-core guidelines for reviewing a pipeline release (for which this PR is a precursor) are https://nf-co.re/docs/contributing/pipeline_release_review_guidelines
I have tried to run the test and got some warning messages:

1. `outdir` in `nextflow.config` should be set like in other pipelines: `outdir = "./results"`
2. `params.genomes` is used in several Groovy files; the related parts should be deleted, as this pipeline does not need such a parameter.
3. `params.enable_conda` is used in a local module, `modules/local/gfa_to_fasta.nf`. This line should be updated like the other modules.
4. In `modules.config`, some configs should be guarded by the same conditions as in the workflow. Can we add the conditions here as well? Lots of warning messages may confuse people:
```
WARN: There's no process matching config selector: .*ORGANELLES_READS:MITOHIFI_MITOHIFI
WARN: There's no process matching config selector: .*ORGANELLES_CONTIGS:MITOHIFI_MITOHIFI
WARN: There's no process matching config selector: .*GENOME_STATISTICS_POLISHED:GFASTATS_PRI
WARN: There's no process matching config selector: .*GENOME_STATISTICS_POLISHED:GFASTATS_HAP
WARN: There's no process matching config selector: .*GENOME_STATISTICS_POLISHED:BUSCO
WARN: There's no process matching config selector: .*GENOME_STATISTICS_POLISHED:MERQURYFK_MERQURYFK
```
5. I had problems running the test profile on my laptop and on the farm (with the `sanger` profile): the jobs kept being killed because of memory. Is there any way we can configure more memory for the processes?
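One way to work around the memory kills could be a small extra config passed with `-c`. This is only a sketch: the file name is hypothetical, the selector is modelled on the `RAW_ASSEMBLY:HIFIASM_*` process names that appear in the logs later in this thread, and the `12.GB` figure is an illustrative guess, not a tested requirement.

```groovy
// extra_memory.config -- hypothetical file name
process {
    // Selector pattern based on the HIFIASM process names seen in the logs
    withName: '.*RAW_ASSEMBLY:HIFIASM_.*' {
        memory = 12.GB   // illustrative value; tune to what the machine has free
    }
}
```

It would be used as, for example, `nextflow run genomeassembly -profile test_github,docker -c extra_memory.config`.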
I can confirm that, with the extra config added to the test profile, the pipeline runs successfully on my laptop with Docker and on the farm with Singularity.
> 4. In `modules.config`, some configs should be guarded by the same conditions as in the workflow. Can we add the conditions here as well? Lots of warning messages may confuse people.

@gq1 Do you know of any example of it in other pipelines?
> In `modules.config`, some configs should be guarded by the same conditions as in the workflow. Can we add the conditions here as well? Lots of warning messages may confuse people. @gq1 Do you know of any example of it in other pipelines?

I don't know the correct way to do it, but I tried something like this and it works:
```groovy
if (params.align) {
    withName: '.*:PROCESS_NAME' {
        // ...
    }
}
```
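For comparison, other nf-core pipelines often put the conditional around a whole `process` scope in `conf/modules.config`. A minimal sketch of that shape, where the flag and process names are placeholders rather than names from this pipeline:

```groovy
// conf/modules.config -- placeholder names for illustration
if (params.align) {
    process {
        withName: '.*:PROCESS_NAME' {
            ext.args = '--example-flag'   // placeholder module options
        }
    }
}
```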
@gq1 @muffato thank you for the review! Could you please specify which job was killed on the farm with the default resource settings, and how you had to change the config to get it to work? That's an interesting finding, as the test profile works fine for me on the farm and, as far as I know, for @muffato too. Otherwise the pipeline should be ready for review now.
I don't have problems running it like this:

```shell
nextflow run genomeassembly/ -profile test_github -c genomeassembly/conf/hifiasm_test.config
```

But I have problems running it like this:

```shell
nextflow run genomeassembly/ -profile test_github
```
Just tried on the farm again; it seems fine now, but it still failed on my laptop even after 5 retries:
```
[53/a10456] NOTE: Process `SANGERTOL_GENOMEASSEMBLY:GENOMEASSEMBLY:RAW_ASSEMBLY:HIFIASM_HIC (baUndUnlc1)` terminated with an error exit status (137) -- Execution is retried (5)
[af/76220f] NOTE: Process `SANGERTOL_GENOMEASSEMBLY:GENOMEASSEMBLY:RAW_ASSEMBLY:HIFIASM_PRI (baUndUnlc1)` terminated with an error exit status (137) -- Execution is retried (5)
```
@gq1 hifiasm is quite a heavy piece of software to run. Do you think there are 6 GB of free memory on your laptop, as specified in `conf/test.config`? On my laptop, 14 GB is currently in use, leaving 2 GB free.
I have 32 GB of memory on my laptop, with 18 GB in use. It is fine now anyway, because the run is successful on the farm.
Summary of my runs of the different tests:

- `nextflow run genomeassembly -profile test_github,docker -c conf/hifiasm_test.config`: OK
- `nextflow run genomeassembly -profile test_github,docker`: exit code 137 everywhere (laptop, VM, and a larger GitHub runner with up to 128 GB of memory)
- `nextflow run genomeassembly -profile test_github,docker -c conf/hifiasm_test.config --organelles_on`: the GitHub runner may get an HTTP 429 error; an NCBI API key may be needed. Sometimes other HTTP 4xx errors occur.
- `nextflow run genomeassembly -profile test_github,sanger,singularity`: OK
- `nextflow run genomeassembly -profile test,sanger,singularity --organelles_on`: OK, but one job only succeeded on the 4th try
- `nextflow run genomeassembly -profile test_github,sanger,singularity --organelles_on`: failed both on the farm and on GitHub runners:
```shell
mitohifi.py -r baUndUnlc1.fasta \
    -f NC_065463.1.fasta \
    -g NC_065463.1.gb \
    -o 5 \
    -t 2
```
Command error:

```
No gbk.HiFiMapped.bam.filtered.assembled.[a/p]_ctg.gfa file(s).
An error may have occurred when assembling reads with HiFiasm.
```
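For the transient HTTP 429 errors seen on GitHub runners, one generic mitigation (independent of an NCBI API key) could be letting Nextflow retry the affected process. This is only a sketch: the file name is hypothetical, the selector follows the `MITOHIFI_MITOHIFI` process names from the warnings earlier in this thread, and the retry count is illustrative.

```groovy
// retry_http.config -- hypothetical extra config passed with -c
process {
    withName: '.*:MITOHIFI_MITOHIFI' {
        errorStrategy = 'retry'   // re-submit on failure, e.g. a transient HTTP 429
        maxRetries    = 3         // illustrative retry count
    }
}
```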
PR checklist

- Make sure your code lints (`nf-core lint`).
- Ensure the test suite passes (`nextflow run . -profile test,docker --outdir <OUTDIR>`).
- `docs/usage.md` is updated.
- `docs/output.md` is updated.
- `CHANGELOG.md` is updated.
- `README.md` is updated (including new tool citations and authors/contributors).