nf-core / proteomicslfq

Proteomics label-free quantification (LFQ) analysis pipeline
https://nf-co.re/proteomicslfq
MIT License

Some errors while processing a QC HeLa file #100

Open rolivella opened 3 years ago

rolivella commented 3 years ago

Hi,

Congratulations on the 1.0.0 release! I'm testing it with a QC HeLa sample, but I'm getting some errors. Could you help me? Thanks!

N E X T F L O W  ~  version 20.07.1
Launching `nf-core/proteomicslfq` [nauseous_perlman] - revision: eb5f7a004c [dev]
NOTE: Your local project version looks outdated - a different revision is available in the remote repository [43c77e50c9]
----------------------------------------------------
                                        ,--./,-.
        ___     __   __   __   ___     /,-._.--~'
  |\ | |__  __ /  ` /  \ |__) |__         }  {
  | \| |       \__, \__/ |  \ |___     \`-._,-`-,
                                        `._,._,'
  nf-core/proteomicslfq v1.0.0
----------------------------------------------------

Pipeline Release  : dev
Run Name          : nauseous_perlman
Max Resources     : 30 memory, 4 cpus, 10h time per job
Container         : singularity - nfcore/proteomicslfq:1.0.0
Output dir        : ./results
Launch dir        : /nfs/users/pr/qsample/test/proteomicslfq
Working dir       : /nfs/users/pr/qsample/test/proteomicslfq/work
Script dir        : /users/pr/qsample/.nextflow/assets/nf-core/proteomicslfq
User              : qsample
Config Profile    : singularity
Config Files      : /users/pr/qsample/.nextflow/assets/nf-core/proteomicslfq/nextflow.config, /nfs/users/pr/qsample/test/proteomicslfq/nextflow.config
----------------------------------------------------
[ca/d4dfbb] Submitted process > raw_file_conversion (1)
[bb/ba5b58] Submitted process > output_documentation
[be/5ec9e1] Submitted process > get_software_versions
Error executing process > 'output_documentation'

Caused by:
  Process `output_documentation` terminated with an error exit status (255)

Command executed:

  markdown_to_html.py output.md -o results_description.html

Command exit status:
  255

Command output:
  (empty)

Command wrapper:
  /usr/bin/id: error while loading shared libraries: libdl.so.2: failed to map segment from shared object: Cannot allocate memory
  /usr/bin/id: error while loading shared libraries: libdl.so.2: failed to map segment from shared object: Cannot allocate memory
  /usr/bin/id: error while loading shared libraries: libdl.so.2: failed to map segment from shared object: Cannot allocate memory
  /bin/ps: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory
  /bin/basename: missing operand
  Try '/bin/basename --help' for more information.
  id: error while loading shared libraries: libdl.so.2: failed to map segment from shared object: Cannot allocate memory
  /usr/bin/id: error while loading shared libraries: libdl.so.2: failed to map segment from shared object: Cannot allocate memory
  /etc/profile.d/vim.sh: line 4: [: : integer expression expected
  /usr/bin/lua: error while loading shared libraries: libm.so.6: failed to map segment from shared object: Cannot allocate memory
  /usr/bin/lua: error while loading shared libraries: libm.so.6: failed to map segment from shared object: Cannot allocate memory
  /usr/bin/lua: error while loading shared libraries: libm.so.6: failed to map segment from shared object: Cannot allocate memory
  /usr/bin/ps: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory
  /usr/bin/basename: missing operand
  Try '/usr/bin/basename --help' for more information.
  mkfifo: error while loading shared libraries: libdl.so.2: failed to map segment from shared object: Cannot allocate memory
  mkfifo: error while loading shared libraries: libdl.so.2: failed to map segment from shared object: Cannot allocate memory
  /usr/share/univage/soldierantcluster/spool/node-hp0304/job_scripts/64482065: line 305: /dev/shm/nxf.5lNQ9ZrURa/.command.out: No such file or directory
  /usr/share/univage/soldierantcluster/spool/node-hp0304/job_scripts/64482065: line 307: /dev/shm/nxf.5lNQ9ZrURa/.command.err: No such file or directory

Work dir:
  /nfs/users/pr/qsample/test/proteomicslfq/work/bb/ba5b58cd1a499ded63b9395ebfd8ae

Tip: you can replicate the issue by changing to the process work dir and entering the command `bash .command.run`
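The tip above is the right starting point. Since the failures in the log are shared-library `mmap` errors ("failed to map segment from shared object: Cannot allocate memory") rather than errors from the pipeline's own tools, it is also worth checking the resource limits in effect on the compute node. A minimal sketch using only standard shell built-ins (nothing pipeline-specific is assumed):

```shell
# "Cannot allocate memory" while loading libc/libdl usually means the
# process hit its virtual-memory limit, not that the tool needs much RAM.
# Inspect the limits in effect in the job's shell:
ulimit -v    # virtual-memory cap in kB ("unlimited" on most interactive shells)
ulimit -l    # max locked memory
ulimit -a    # full listing, to compare against the cluster's batch-job defaults
```

If `ulimit -v` inside the batch job is much lower than on the login node, the scheduler is likely enforcing a per-job virtual-memory cap that the Singularity container runs into.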
jpfeuffer commented 3 years ago

Hi! And thank you ;)

It is unfortunate that it did not work out of the box, although we tried hard to design it that way. However, your specific problem sounds more like a conflict between the containerization software and your user's virtual memory limits. It fails in a step that uses almost no memory at all and that is present in all nf-core pipelines. Did the test data or other data work? Things you could try are:
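[Editor's note: an illustrative sketch, not from the thread. One common workaround for scheduler-enforced memory caps is to raise the request for the failing process via a custom Nextflow config. The `h_vmem` resource name is a Univa/Grid Engine convention (the job-script path in the log suggests Univa Grid Engine) and may differ on other clusters:]

```nextflow
// custom.config -- illustrative only; values are assumptions.
process {
    withName: output_documentation {
        memory = '4 GB'
        // 'h_vmem' is a Grid Engine resource; adjust for your scheduler.
        clusterOptions = '-l h_vmem=8G'
    }
}
```

Pass it alongside your usual options with `-c custom.config` when launching `nextflow run nf-core/proteomicslfq`.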