Closed: elizabethmcd closed this 1 year ago
nf-core lint
Overall result: Failed ❌ (posted for pipeline commit 32bb083)
| ✅ 130 tests passed |
| ❔ 15 tests were ignored |
| ❗ 15 tests had warnings |
| ❌ 9 tests failed |
Do you want me to review this even though tests are failing?
The big issue with not including profiles for Singularity is that many university/government HPC systems do not allow Docker and only support Singularity. On those systems you can use Singularity tooling to convert Docker images to Singularity images, but that would require a bit of work on the user's end if they are trying to use our workflows. I don't know how to reconcile providing those profiles in our workflows without testing them. I'm also not sure how nf-core handles this: whether they go through and test that each profile works, or whether it is handled at the module level where you declare the image/environment used.
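For reference, a Singularity profile in a pipeline's config is a fairly small amount of config. A minimal sketch (option names follow Nextflow's documented `singularity` scope; the exact layout in any given pipeline may differ):

```nextflow
// nextflow.config -- sketch of a singularity profile (assumed layout)
profiles {
    singularity {
        singularity.enabled    = true   // run containers with Singularity instead of Docker
        singularity.autoMounts = true   // auto-mount host paths inside the container
        docker.enabled         = false
    }
}
```

On an HPC system that forbids Docker, a user would then launch with `nextflow run <pipeline> -profile singularity`, and Nextflow converts the Docker image declared by each module on the fly.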
My interpretation of the profiles is that especially if we're mostly building pipelines from nf-core modules, it's a light lift to maintain all of the profiles. Is this wrong @elizabethmcd?
That's my understanding as well: if we build from nf-core modules and follow their practices, it's a light lift to maintain those profiles. The question becomes whether, when we need a new module, we first go through the nf-core modules repo and PR process before implementing it in one of our workflows. I think this becomes relevant with the new `-with-wave` functionality, where the container can be built, deployed, and tested when launching the workflow; in that case we would only be testing Docker images and the docker profile.
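If I understand the Wave feature correctly, enabling it is a config-level switch. A sketch (setting names follow Nextflow's `wave` config scope; treat the specifics as assumptions):

```nextflow
// nextflow.config -- sketch of enabling Wave container provisioning (assumed settings)
wave {
    enabled  = true                        // build/augment containers at launch time
    strategy = ['conda', 'container']      // prefer building from the module's conda recipe
}
docker.enabled = true                      // Wave-built images would be exercised via the docker profile
```

This is roughly equivalent to launching with `nextflow run <pipeline> -with-wave`, which is why only the Docker path would get tested in practice.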
> The question becomes if we need to make a new module do we first go through the nf-core modules repo and PR process before then implementing that in one of our workflows.
What does this mean? Do you mean that we should first publish our submodules to nf-core before incorporating them into our own pipelines? If so, that would be crazy; it seems unnecessarily convoluted unless we are doing something with the explicit goal of contributing to the nf-core repository.
> it's a light lift to maintain all of the profiles
I don't think this is true, especially if we end up building a bunch of new modules as part of our pipelines (most likely the case). For pipelines that we pull from nf-core, we don't have a lot of visibility into how the containers nf-core provides work. For Docker containers we'll most likely catch any issues, because we'll be actively using them. But I worry that we won't catch issues with unused parts of the profiles (since there are so many!), and the added profiles will only create friction and tech debt.
Yes, I meant publishing to nf-core before incorporating into our workflows, and I agree this is convoluted. I think if we have a new module, we develop it internally; once it is stable and could be useful for the community, it can go through the process of being contributed to nf-core. This way we avoid lag times with PRs through nf-core.
As for the nf-core profiles: should we keep those extra profiles, or rip them out regardless? Another point to consider: if we create containers, should we be contributing them back to repos like BioContainers? My vote is yes, but this requires some extra work and guidance.
This pull request adds modules, subworkflows, and a workflow for QC, assembly, and mapping of short metagenomic reads.
nf-core modules:
Local modules (the nf-core versions of these don't work for downstream purposes):
Subworkflows:
This is implemented in the metagenomic-sr ("short read") workflow.
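As a rough sketch of how these pieces could fit together in a DSL2 entry workflow (all module/subworkflow names below are hypothetical placeholders, not the actual ones in this PR):

```nextflow
// main.nf -- sketch of the metagenomic-sr wiring (names are placeholders)
nextflow.enable.dsl = 2

include { READ_QC  } from './subworkflows/read_qc'   // hypothetical QC subworkflow
include { ASSEMBLY } from './subworkflows/assembly'  // hypothetical assembly subworkflow
include { MAPPING  } from './subworkflows/mapping'   // hypothetical mapping subworkflow

workflow METAGENOMIC_SR {
    take:
    reads   // channel: [ meta, [ fastq_1, fastq_2 ] ]

    main:
    READ_QC(reads)                                       // trim/filter short reads
    ASSEMBLY(READ_QC.out.reads)                          // assemble cleaned reads into contigs
    MAPPING(READ_QC.out.reads, ASSEMBLY.out.contigs)     // map reads back to the assembly
}
```

The point of the sketch is just that each stage is a subworkflow composed of (nf-core or local) modules, which is where the profile-maintenance question above actually bites.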