nf-core / proteomicslfq

Proteomics label-free quantification (LFQ) analysis pipeline
https://nf-co.re/proteomicslfq
MIT License

manifest unknown #97

Closed: rolivella closed this issue 3 years ago

rolivella commented 4 years ago

Hi! I'm trying to pull the Singularity image, but I get this error:

singularity pull  --name nfcore-proteomicslfq-1.0.0.img docker://nfcore/proteomicslfq:1.0.0

WARNING: pull for Docker Hub is not guaranteed to produce the
WARNING: same image on repeated pull. Use Singularity Registry
WARNING: (shub://) to pull exactly equivalent images.
Docker image path: index.docker.io/nfcore/proteomicslfq:1.0.0
ERROR MANIFEST_UNKNOWN: manifest unknown
Cleaning up...
ERROR: pulling container failed!

Any idea?

Thanks!

jpfeuffer commented 4 years ago

Hi!

I think that since the PR for 1.0.0 has not been merged into master yet, there is no official 1.0.0 tag. You will need to use :dev for now. I am still waiting on a final review from the core team.

rolivella commented 4 years ago

@jpfeuffer where do I have to change the tag from 1.0.0 to dev? I mean, in which config file? Thanks!

Zethson commented 4 years ago

@rolivella The 1.0.0 release was just approved and will happen very very soon.

Then you will not have to worry about this :)

We can update you when it is ready.

Zethson commented 4 years ago

But to answer your question briefly:

https://github.com/nf-core/proteomicslfq/blob/dev/nextflow.config#L135

This is where the Docker container, and therefore also the Singularity container (which is built from the Docker image), is defined.

nf-core uses a dev specifier for non-release versions. This is what @jpfeuffer was hinting at.
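The relevant line looks roughly like this (a sketch of the usual nf-core template, so check the linked file for the exact wording in your revision):

process.container = 'nfcore/proteomicslfq:dev'   // swap the tag here, e.g. :dev or :1.0.0

If you launch the pipeline through Nextflow rather than pulling the image by hand, running the dev revision (nextflow run nf-core/proteomicslfq -r dev ...) should already point at the matching :dev container, so no manual edit is needed.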

Zethson commented 4 years ago

@rolivella please try again.

https://hub.docker.com/layers/nfcore/proteomicslfq/1.0.0/images/sha256-5fc60f80febe6fb0ce91fcc5198998105ba41d3c3a7df0332ffd441ee8ce98bd?context=explore

It should be available now!

rolivella commented 4 years ago

@Zethson now it works, thanks!

rolivella commented 3 years ago

@Zethson while processing a HeLa QC file (827 MB) I got this error:

N E X T F L O W  ~  version 20.07.1
Launching `nf-core/proteomicslfq` [elegant_fermat] - revision: eb5f7a004c [dev]
NOTE: Your local project version looks outdated - a different revision is available in the remote repository [43c77e50c9]
----------------------------------------------------
                                        ,--./,-.
        ___     __   __   __   ___     /,-._.--~'
  |\ | |__  __ /  ` /  \ |__) |__         }  {
  | \| |       \__, \__/ |  \ |___     \`-._,-`-,
                                        `._,._,'
  nf-core/proteomicslfq v1.0.0
----------------------------------------------------

Pipeline Release  : dev
Run Name          : elegant_fermat
Max Resources     : 30 memory, 4 cpus, 10h time per job
Container         : singularity - nfcore/proteomicslfq:1.0.0
Output dir        : ./results
Launch dir        : /nfs/users/pr/qsample/test/proteomicslfq
Working dir       : /nfs/users/pr/qsample/test/proteomicslfq/work
Script dir        : /users/pr/qsample/.nextflow/assets/nf-core/proteomicslfq
User              : qsample
Config Profile    : singularity
Config Files      : /users/pr/qsample/.nextflow/assets/nf-core/proteomicslfq/nextflow.config, /nfs/users/pr/qsample/test/proteomicslfq/nextflow.config
----------------------------------------------------
[ef/ac5298] Submitted process > raw_file_conversion (1)
[43/bd3f33] Submitted process > output_documentation
[b5/9a2514] Submitted process > get_software_versions
Error executing process > 'raw_file_conversion (1)'

Caused by:
  Process `raw_file_conversion (1)` terminated with an error exit status (255)

Command executed:

  ThermoRawFileParser.sh -i=190219_Q_QC02_01_01_100ng.raw -f=2 -o=./ > 190219_Q_QC02_01_01_100ng.raw_conversion.log

Command exit status:
  255

Command output:
  (empty)

Command wrapper:
  /usr/bin/id: error while loading shared libraries: libdl.so.2: failed to map segment from shared object: Cannot allocate memory
  /usr/bin/id: error while loading shared libraries: libdl.so.2: failed to map segment from shared object: Cannot allocate memory
  /usr/bin/id: error while loading shared libraries: libdl.so.2: failed to map segment from shared object: Cannot allocate memory
  /bin/ps: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory
  /bin/basename: missing operand
  Try '/bin/basename --help' for more information.
  id: error while loading shared libraries: libdl.so.2: failed to map segment from shared object: Cannot allocate memory
  /usr/bin/id: error while loading shared libraries: libdl.so.2: failed to map segment from shared object: Cannot allocate memory
  /etc/profile.d/vim.sh: line 4: [: : integer expression expected
  /usr/bin/lua: error while loading shared libraries: libm.so.6: failed to map segment from shared object: Cannot allocate memory
  /usr/bin/lua: error while loading shared libraries: libm.so.6: failed to map segment from shared object: Cannot allocate memory
  /usr/bin/lua: error while loading shared libraries: libm.so.6: failed to map segment from shared object: Cannot allocate memory
  /usr/bin/ps: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory
  /usr/bin/basename: missing operand
  Try '/usr/bin/basename --help' for more information.
  mkfifo: error while loading shared libraries: libdl.so.2: failed to map segment from shared object: Cannot allocate memory
  mkfifo: error while loading shared libraries: libdl.so.2: failed to map segment from shared object: Cannot allocate memory

Work dir:
  /nfs/users/pr/qsample/test/proteomicslfq/work/ef/ac52984b68e9d89ab65e736487f2b3

Tip: when you have fixed the problem you can continue the execution adding the option `-resume` to the run command line

Execution cancelled -- Finishing pending tasks before exit
-[nf-core/proteomicslfq] Pipeline completed with errors-

I presume that I have to allocate more memory, so I changed the base.config file in this way:

params {
  // Defaults only, expecting to be overwritten
  max_memory = 30
  max_cpus = 4
  max_time = 10.h
}

But I still got the same error... What can I do?

Thanks!

jpfeuffer commented 3 years ago

Yes, I think you need to increase the allocated memory. Some things:
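One thing worth double-checking is the unit on the memory value: in Nextflow configs, memory and time values need a unit suffix, which is why your run summary printed "30 memory" instead of "30 GB memory". A rough sketch of the same caps with suffixes (same parameter names as in your snippet above; adjust the numbers to what your cluster offers):

params {
  // Defaults only, expecting to be overwritten
  max_memory = 30.GB   // without the .GB suffix the value is not read as a memory amount
  max_cpus   = 4
  max_time   = 10.h
}

Also keep in mind that these params are only upper limits; the per-process requests come from base.config, so the failing process may additionally need a larger request there.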

rolivella commented 3 years ago

Thanks a lot @jpfeuffer , now I see how it works!

rolivella commented 3 years ago

Sorry to bother you again, but I get another execution error:

N E X T F L O W  ~  version 20.07.1
Launching `nf-core/proteomicslfq` [nauseous_lamarck] - revision: eb5f7a004c [dev]
NOTE: Your local project version looks outdated - a different revision is available in the remote repository [43c77e50c9]
----------------------------------------------------
                                        ,--./,-.
        ___     __   __   __   ___     /,-._.--~'
  |\ | |__  __ /  ` /  \ |__) |__         }  {
  | \| |       \__, \__/ |  \ |___     \`-._,-`-,
                                        `._,._,'
  nf-core/proteomicslfq v1.0.0
----------------------------------------------------

Pipeline Release  : dev
Run Name          : nauseous_lamarck
Max Resources     : 30 GB memory, 4 cpus, 10h time per job
Container         : singularity - nfcore/proteomicslfq:1.0.0
Output dir        : ./results
Launch dir        : /nfs/users/pr/qsample/test/proteomicslfq
Working dir       : /nfs/users/pr/qsample/test/proteomicslfq/work
Script dir        : /users/pr/qsample/.nextflow/assets/nf-core/proteomicslfq
User              : qsample
Config Profile    : singularity
Config Files      : /users/pr/qsample/.nextflow/assets/nf-core/proteomicslfq/nextflow.config, /nfs/users/pr/qsample/test/proteomicslfq/nextflow.config
----------------------------------------------------
[a8/77d645] Submitted process > raw_file_conversion (1)
[9f/74b618] Submitted process > get_software_versions
[5a/c3431d] Submitted process > output_documentation
[4f/3f23cb] Submitted process > search_engine_comet (1)
[2a/d455a5] Submitted process > index_peptides (1)
[4e/1feb5a] Submitted process > extract_percolator_features (1)
[93/31b0e8] Submitted process > percolator (1)
[37/b71c29] Submitted process > idscoreswitcher_to_qval (1)
[ba/d87053] Submitted process > idfilter (1)
[d8/2944e1] Submitted process > proteomicslfq (1)
Error executing process > 'proteomicslfq (1)'

Caused by:
  Process `proteomicslfq (1)` terminated with an error exit status (3)

Command executed:

  ProteomicsLFQ -in 190219_Q_QC02_01_01_100ng.mzML \
                -ids 190219_Q_QC02_01_01_100ng_comet_idx_feat_perc_switched_filter.idXML \
                -design test.tsv \
                -fasta shotgun_hela.fasta \
                -protein_inference aggregation \
                -quantification_method feature_intensity \
                -targeted_only true \
                -mass_recalibration false \
                -transfer_ids false \
                -protein_quantification unique_peptides \
                -out out.mzTab \
                -threads 4 \
                -out_msstats out.csv \
                -out_cxml out.consensusXML \
                -proteinFDR 0.05 \
                -debug 0 \
                > proteomicslfq.log

Command exit status:
  3

Command output:
  (empty)

Work dir:
  /nfs/users/pr/qsample/test/proteomicslfq/work/d8/2944e18d344b13ca3951b593247cf0

Tip: you can try to figure out what's wrong by changing to the process work dir and showing the script file named `.command.sh`

Execution cancelled -- Finishing pending tasks before exit
-[nf-core/proteomicslfq] Pipeline completed with errors-

I checked the work dir, but all the .log, .err, etc. files are empty. Any idea?

Thanks!

jpfeuffer commented 3 years ago

Hmm, all empty sounds strange. If the exit code comes from our tool, it means INPUT_FILE_CORRUPT. Maybe those two facts, plus the use of a shared file system, hint at problems with the NFS? At least our NFS sometimes has hiccups. Did you try to run the command again? You can use "-resume", which just resumes from the last failed process, but in this case it might be better to regenerate the intermediate files.

By the way, did you intend to pass only one file? In case you tried to use wildcards, you need to wrap them in quotes ('--input /folder/foo*.mzML') so that they do not get pre-expanded by your shell.

rolivella commented 3 years ago

@jpfeuffer there's an error with the TSV file. In the ProteomicsLFQ process work folder there is this message:

Error: Unable to read file (Error: Missing column header: Fraction in: test.tsv)

In my test.tsv I have only:

190219_Q_QC02_01_01_100ng.raw

That is the file I want to process, and I'm not using wildcards. Which headers do I have to add? I cannot find this information in the documentation: https://nf-co.re/proteomicslfq/1.0.0/parameters#main-parameters-spectra-files

Thank you.

jpfeuffer commented 3 years ago

Hi! OK, if you only have one file, you don't need wildcards. The documentation for the experimental design parameter has already been updated in the development branch. You always need a full experimental design, in the format shown in the next comment.

jpfeuffer commented 3 years ago

Fraction_Group  Fraction  Spectra_Filepath                Label  MSstats_Condition  MSstats_BioReplicate
1               1         190219_Q_QC02_01_01_100ng.mzML  1      QC                 1

jpfeuffer commented 3 years ago

Remember that MSstats should probably be disabled, since it does not make sense for a single file.
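In case it helps, a minimal sketch of switching it off via a config (this assumes the skip_post_msstats parameter, which can equally be passed on the command line as --skip_post_msstats):

params {
  skip_post_msstats = true   // skip the downstream MSstats step; not meaningful for a single file
}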

rolivella commented 3 years ago

@jpfeuffer it works, thanks! Just one last question: where can I find the total numbers of identified peptides and proteins? I've seen the list of IDs at results/proteomics_lfq/out.csv, but does the pipeline report the totals? There's also the msstats_results.csv file, but it gave me a wrong result because it only found:

# of Protein                 3
# of Peptides/Protein     1-15 

My nextflow command is:

nextflow run nf-core/proteomicslfq -with-trace --skip_post_msstats -bg --expdesign test.tsv -profile singularity --input 'incoming/*.raw' --database '/users/pr/qsample/qcmass/blastdb/shotgun_hela.fasta' -resume -r dev > proteomicslfq.log

I disabled MSstats since, as you mentioned in a previous comment, it does not make sense for one file.

jpfeuffer commented 3 years ago

I don't think there is a prominent number reported anywhere. Your best option is probably to enable the QC report, if you do not want to parse the mzTab or CSV output yourself.
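If you do end up parsing the mzTab, here is a very rough sketch of counting the protein and peptide rows by their mzTab line prefixes (PRT and PEP). The file path is assumed from your -out setting, and the counts may include protein groups and decoys depending on your settings:

// sketch: count protein (PRT) and peptide (PEP) rows in the mzTab output
def mzTab = new File('results/proteomics_lfq/out.mzTab')  // assumed output location
def proteins = 0
def peptides = 0
mzTab.eachLine { line ->
    if (line.startsWith('PRT\t')) proteins++
    else if (line.startsWith('PEP\t')) peptides++
}
println "protein rows: ${proteins}"
println "peptide rows: ${peptides}"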

ypriverol commented 3 years ago

Can we close this issue, @rolivella?

rolivella commented 3 years ago

@ypriverol yes, thanks.