Closed skose82 closed 1 year ago
Please update your Nextflow version; that might help. I assume the invalid process definition is caused by the old Nextflow version. I did think that ampliseq was supposed to raise a warning/error when run with such an old Nextflow version, because here version 21.10.3 is the minimum.
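A quick way to sanity-check an installed version against the pipeline's minimum is `sort -V`, which orders version strings numerically. A minimal sketch (the 21.10.3 minimum is from the reply above; the `current` value is a placeholder you would take from `nextflow -version`):

```shell
# minimum Nextflow version required by ampliseq (see reply above)
required="21.10.3"
# placeholder: in practice, extract this from `nextflow -version`
current="22.10.6"
# sort -V orders version strings; the older of the two comes first
oldest=$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)
if [ "$oldest" = "$required" ]; then
    echo "OK: Nextflow $current meets the $required minimum"
else
    echo "Nextflow $current is too old; please update"
fi
```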
Thank you d4straub, this did the trick; we had to update the module on our HPC instance.
The pipeline ran fine until the very end, where it failed while dumping software versions. Would you know what this could be about?
-[nf-core/ampliseq] Pipeline completed with errors-
Error executing process > 'NFCORE_AMPLISEQ:AMPLISEQ:CUSTOM_DUMPSOFTWAREVERSIONS (1)'
Caused by:
Process `NFCORE_AMPLISEQ:AMPLISEQ:CUSTOM_DUMPSOFTWAREVERSIONS (1)` terminated with an error exit status (1)
Command executed [/home/user1/.nextflow/assets/nf-core/ampliseq/./workflows/../modules/nf-core/custom/dumpsoftwareversions/templates/dumpsoftwareversions.py]:
#!/usr/bin/env python
import yaml
import platform
from textwrap import dedent


def _make_versions_html(versions):
    html = [
        dedent(
            """\
            <style>
            #nf-core-versions tbody:nth-child(even) {
                background-color: #f2f2f2;
            }
            </style>
            <table class="table" style="width:100%" id="nf-core-versions">
                <thead>
                    <tr>
                        <th> Process Name </th>
                        <th> Software </th>
                        <th> Version </th>
                    </tr>
                </thead>
            """
        )
    ]
    for process, tmp_versions in sorted(versions.items()):
        html.append("<tbody>")
        for i, (tool, version) in enumerate(sorted(tmp_versions.items())):
            html.append(
                dedent(
                    f"""\
                    <tr>
                        <td><samp>{process if (i == 0) else ''}</samp></td>
                        <td><samp>{tool}</samp></td>
                        <td><samp>{version}</samp></td>
                    </tr>
                    """
                )
            )
        html.append("</tbody>")
    html.append("</table>")
    return "\n".join(html)


versions_this_module = {}
versions_this_module["NFCORE_AMPLISEQ:AMPLISEQ:CUSTOM_DUMPSOFTWAREVERSIONS"] = {
    "python": platform.python_version(),
    "yaml": yaml.__version__,
}

with open("collated_versions.yml") as f:
    versions_by_process = yaml.load(f, Loader=yaml.BaseLoader) | versions_this_module

# aggregate versions by the module name (derived from fully-qualified process name)
versions_by_module = {}
for process, process_versions in versions_by_process.items():
    module = process.split(":")[-1]
    try:
        assert versions_by_module[module] == process_versions, (
            "We assume that software versions are the same between all modules. "
            "If you see this error-message it means you discovered an edge-case "
            "and should open an issue in nf-core/tools. "
        )
    except KeyError:
        versions_by_module[module] = process_versions

versions_by_module["Workflow"] = {
    "Nextflow": "22.10.6",
    "nf-core/ampliseq": "2.4.1",
}

versions_mqc = {
    "id": "software_versions",
    "section_name": "nf-core/ampliseq Software Versions",
    "section_href": "https://github.com/nf-core/ampliseq",
    "plot_type": "html",
    "description": "are collected at run time from the software output.",
    "data": _make_versions_html(versions_by_module),
}

with open("software_versions.yml", "w") as f:
    yaml.dump(versions_by_module, f, default_flow_style=False)
with open("software_versions_mqc.yml", "w") as f:
    yaml.dump(versions_mqc, f, default_flow_style=False)
with open("versions.yml", "w") as f:
    yaml.dump(versions_this_module, f, default_flow_style=False)
Command exit status:
1
Command output:
(empty)
Command error:
Traceback (most recent call last):
File ".command.sh", line 54, in <module>
versions_by_process = yaml.load(f, Loader=yaml.BaseLoader) | versions_this_module
TypeError: unsupported operand type(s) for |: 'dict' and 'dict'
Work dir:
/data/user1/amp/work/86/7e8d57277695cbf8fa5ffee5dc7cad
Tip: you can try to figure out what's wrong by changing to the process work dir and showing the script file named `.command.sh`
Unfortunately not, this particular process is part of the nf-core template. TypeError: unsupported operand type(s) for |: 'dict' and 'dict'
again seems to indicate some sort of software problem, i.e. the environment you use to execute the process is inappropriate. I guess this might again be a conda problem (consider using Singularity or another real container system; conda is a last resort only!). You can experiment by adding -c new_env.config
to your command, where new_env.config
contains:
process {
    withName: CUSTOM_DUMPSOFTWAREVERSIONS { conda = "bioconda::multiqc=1.14" }
}
This will change the conda env from 1.13 (see current here) to 1.14, potentially helping with the problem. Disclaimer: not tested, just speculating.
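For context on the traceback above: the `|` operator for merging dicts was only added in Python 3.9 (PEP 584), so this `TypeError` is what an older interpreter in the process environment raises on that line. A minimal sketch (not the template's code) of a version check and a backwards-compatible merge:

```python
import sys

a = {"python": "3.8.5"}
b = {"yaml": "5.4"}

if sys.version_info >= (3, 9):
    # dict | dict requires Python >= 3.9 (PEP 584)
    merged = a | b
else:
    # backwards-compatible equivalent on older interpreters
    merged = {**a, **b}

print(merged)  # {'python': '3.8.5', 'yaml': '5.4'}
```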
Thank you for your reply. I have tried with singularity; it has so far gone through most of the steps but stalled on the qiime2 section, where it pulls a possibly outdated version of QIIME 2 (the current release is 2022.11, while the pull is 2022.8). I'm not sure if that's the issue, but it has been building the SIF file for quite a long time. System params are 24 CPUs at 8 GB per CPU.
Error executing process > 'NFCORE_AMPLISEQ:AMPLISEQ:QIIME2_INTAX (ASV_tax_species.tsv)'
Caused by:
Failed to pull singularity image
command: singularity pull --name quay.io-qiime2-core-2022.8.img.pulling.1676562590618 docker://quay.io/qiime2/core:2022.8 > /dev/null
status : 143
message:
time="2023-02-16T15:49:50Z" level=warning msg="\"/run/user/19153\" directory set by $XDG_RUNTIME_DIR does not exist. Either create the directory or unset $XDG_RUNTIME_DIR.: stat /run/user/19153: no such file or directory: Trying to pull image in the event that it is a public image."
INFO: Converting OCI blobs to SIF format
INFO: Starting build...
time="2023-02-16T15:49:51Z" level=warning msg="\"/run/user/19153\" directory set by $XDG_RUNTIME_DIR does not exist. Either create the directory or unset $XDG_RUNTIME_DIR.: stat /run/user/19153: no such file or directory: Trying to pull image in the event that it is a public image."
time="2023-02-16T15:49:52Z" level=warning msg="\"/run/user/19153\" directory set by $XDG_RUNTIME_DIR does not exist. Either create the directory or unset $XDG_RUNTIME_DIR.: stat /run/user/19153: no such file or directory: Trying to pull image in the event that it is a public image."
Getting image source signatures
Copying blob sha256:42c077c10790d51b6f75c4eb895cbd4da37558f7215b39cbf64c46b288f89bda
Copying blob sha256:1a23c9d790a34c5bb13dbaf42e0ea2a555e089aefed7fdfa980654f773b39b39
Copying blob sha256:22a6fc63b9b529f00082379be512f0ca1c7a491872396994cf59b47e794c5e09
Copying blob sha256:42b7f294ddbda82da5a69b0675429a15dba0766bd64bafb23d78f809c5de8b5a
Copying blob sha256:1ee3d7358a92f1712f27fc911035fac4651ad6b3f7c97da8cc38a3b78f5b074c
Copying blob sha256:e6062fa5f610cc620655ed8b2fb29958b3727f948528bc6a402e9de3922a92a1
Copying blob sha256:97eeee145658c1d01efaf2797bf58fa5a2ff10a93e12f000545da61332b491dd
Copying blob sha256:b5ca682aa46ae8c65f085739ab2b482f712bb8394c428774f8fa8eca86ee8cd3
Copying blob sha256:f243d33467c7dccdc960f779c896627b806c24930e555c031a50b4d0f7e2fab9
Copying blob sha256:6a4d753ac330f9bc7ecf4e77b9c4e44a4b93c4aaa1fe37fd585c1b419fbd0ad8
Copying blob sha256:1ad759e143f36f80d4ea718efc85b40a7d80b75818d9869e027263682c6e89c8
Copying blob sha256:83ab021118e2a67cf71929bea0b9cec8c0008705406ded76519f703876b35b01
Copying blob sha256:6c22f43930cb8d2bfa59b408c25d67f0ac8f9c803d2bc4b38393195c6c006157
Copying blob sha256:f8eac0b5854d0fc2929ca318afc25a7501c4fd3463ba0d36ed5242e1f3d34aff
Copying blob sha256:206e727c2a9c92d5417ea7191e25da7ff36d884a864027ed57e11c858319c372
Copying blob sha256:3d51d16b3fd67df4d938c7514279ebd51b62d17abc3aee75ca2e36e3fa87341b
Copying config sha256:636582997d9636e249957f5de4a5d4acc17863d030c99da8c1f3a0664455e773
Writing manifest to image destination
Storing signatures
2023/02/16 15:52:52 info unpack layer: sha256:42c077c10790d51b6f75c4eb895cbd4da37558f7215b39cbf64c46b288f89bda
2023/02/16 15:52:55 info unpack layer: sha256:1a23c9d790a34c5bb13dbaf42e0ea2a555e089aefed7fdfa980654f773b39b39
2023/02/16 15:53:00 info unpack layer: sha256:22a6fc63b9b529f00082379be512f0ca1c7a491872396994cf59b47e794c5e09
2023/02/16 15:53:07 info unpack layer: sha256:42b7f294ddbda82da5a69b0675429a15dba0766bd64bafb23d78f809c5de8b5a
2023/02/16 15:53:07 info unpack layer: sha256:1ee3d7358a92f1712f27fc911035fac4651ad6b3f7c97da8cc38a3b78f5b074c
2023/02/16 15:53:22 info unpack layer: sha256:e6062fa5f610cc620655ed8b2fb29958b3727f948528bc6a402e9de3922a92a1
2023/02/16 15:53:22 info unpack layer: sha256:97eeee145658c1d01efaf2797bf58fa5a2ff10a93e12f000545da61332b491dd
2023/02/16 15:53:22 info unpack layer: sha256:b5ca682aa46ae8c65f085739ab2b482f712bb8394c428774f8fa8eca86ee8cd3
2023/02/16 15:53:22 info unpack layer: sha256:f243d33467c7dccdc960f779c896627b806c24930e555c031a50b4d0f7e2fab9
2023/02/16 15:57:21 info unpack layer: sha256:6a4d753ac330f9bc7ecf4e77b9c4e44a4b93c4aaa1fe37fd585c1b419fbd0ad8
2023/02/16 15:57:21 info unpack layer: sha256:1ad759e143f36f80d4ea718efc85b40a7d80b75818d9869e027263682c6e89c8
2023/02/16 15:57:21 info unpack layer: sha256:83ab021118e2a67cf71929bea0b9cec8c0008705406ded76519f703876b35b01
2023/02/16 15:57:21 info unpack layer: sha256:6c22f43930cb8d2bfa59b408c25d67f0ac8f9c803d2bc4b38393195c6c006157
2023/02/16 15:57:21 info unpack layer: sha256:f8eac0b5854d0fc2929ca318afc25a7501c4fd3463ba0d36ed5242e1f3d34aff
2023/02/16 15:57:21 info unpack layer: sha256:206e727c2a9c92d5417ea7191e25da7ff36d884a864027ed57e11c858319c372
2023/02/16 16:05:37 info unpack layer: sha256:3d51d16b3fd67df4d938c7514279ebd51b62d17abc3aee75ca2e36e3fa87341b
INFO: Creating SIF file...
stalled on the qiime2 section.
The container is a few GB, so pulling and converting it takes time.
Where it pulls a possibly outdated version of qiime2
It pulls the correct version (which is not the latest).
Your error message includes time="2023-02-16T15:49:51Z" level=warning msg="\"/run/user/19153\" directory set by $XDG_RUNTIME_DIR does not exist.
This might point to an improperly set up Singularity installation; that could be the reason, though I am not sure.
The best first test is probably to pull the container manually into the folder work/singularity
(if you didn't override the Singularity cache dir) with singularity pull --name quay.io-qiime2-core-2022.8.img docker://quay.io/qiime2/core:2022.8
as indicated in your error message. That will provide the Singularity image for the pipeline, so it won't be downloaded any more (because it's there). Hence the pipeline would use the existing image and hopefully succeed.
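Put together, the manual pull might look like the sketch below. The work/singularity location is the default cache; adjust it if NXF_SINGULARITY_CACHEDIR is set. The singularity call is guarded so the snippet is safe to paste on a machine without Singularity:

```shell
# assumed default cache location; adjust if NXF_SINGULARITY_CACHEDIR is set
mkdir -p work/singularity && cd work/singularity

# pull the exact image filename the pipeline expects
if command -v singularity >/dev/null 2>&1; then
    singularity pull --name quay.io-qiime2-core-2022.8.img docker://quay.io/qiime2/core:2022.8
else
    echo "singularity not found on PATH"
fi
```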
Since the original issue (compilation error) was solved, I am closing this issue. But you are welcome to open new issues and/or join the nf-core Slack.
Thank you very much - that fixed the problem.
Description of the bug
Hi there,
I'm attempting to run ampliseq on my HPC (CentOS) instance but get a module compilation issue; this issue does not arise on an Ubuntu instance.
fastqc works fine outside the pipeline, along with the other dependencies. Perhaps I'm missing something really obvious? Metadata files and sample IDs match, etc. Any help would be greatly appreciated.
conda version: conda 4.9.2
conda list:
Command used and terminal output