compomics / peptide-shaker-2.0-issue-tracker

Issue tracker for the beta release of PeptideShaker 2.0
Apache License 2.0

Error double insertion #53

Closed CarlaCristinaUranga closed 4 years ago

CarlaCristinaUranga commented 4 years ago

Hi, I got this error with the new PeptideShaker 2.0 beta version; perhaps it is also a known error? Thanks for your attention!

[Screenshot attached: Screen Shot 2020-04-12 at 11.01.02 PM]
CarlaCristinaUranga commented 4 years ago

Hi, so the program has now passed the aforementioned hurdle, but it seems to be getting stuck. Is this because of the large size of the database and .mgf file? My files are 2 GB each, and I am using a large UniProt database consisting of all bacterial sequences. I am processing each file individually because of the large size. I am not sure how long would be considered normal for completion of the peptide inference step. Thank you!

[Screenshot attached: Screen Shot 2020-04-13 at 8.18.30 AM]
hbarsnes commented 4 years ago

Hi Carla,

I've been able to reproduce this error using Tide as well. I will look into it and get back to you. In the meantime you can perhaps leave Tide out when loading the search results?

Best regards, Harald

hbarsnes commented 4 years ago

PS: Note that I moved the issue to the PeptideShaker 2.0 issue tracker: https://github.com/compomics/peptide-shaker-2.0-issue-tracker.

hbarsnes commented 4 years ago

@CarlaCristinaUranga The Tide issue has now been resolved. I will release a new beta version as soon as I've sorted out some other issues as well.

hbarsnes commented 4 years ago

the program has now passed the aforementioned hurdle, but it seems to be getting stuck. Is this because of the large size of the database and .mgf file? My files are 2 GB each, and I am using a large UniProt database consisting of all bacterial sequences. I am processing each file individually because of the large size. I am not sure how long would be considered normal for completion of the peptide inference step.

If you are using big files, this step can take some time to complete. The only way to make it go faster is to give PeptideShaker more memory. Please see: https://github.com/compomics/compomics-utilities/wiki/JavaTroubleShooting#memory-issues.
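For reference, the heap available to a Java tool such as PeptideShaker is capped by the standard JVM -Xmx flag set when the JVM is launched. The sketch below is only a hypothetical illustration of that flag (the jar file name and the 60 GB value are placeholders, not the actual distribution or a recommended setting):

```java
// Hypothetical sketch only: the jar name and heap size below are placeholders.
// The point is the standard JVM -Xmx flag, which caps the heap a Java process
// such as PeptideShaker is allowed to use.
import java.io.IOException;

public class LaunchWithMoreMemory {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "java", "-Xmx60g",                   // request a 60 GB heap (adjust to your machine)
                "-jar", "PeptideShaker-X.Y.Z.jar");  // placeholder jar name
        pb.inheritIO();                              // show the launched process's output in this console
        int exit = pb.start().waitFor();
        System.out.println("Process finished with exit code " + exit);
    }
}
```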

If the process does not complete, please send us the PeptideShaker log file.

CarlaCristinaUranga commented 4 years ago

Fantastic! Is there a way to troubleshoot protein inference issues? All of my files are getting stuck at this stage and not proceeding. How long should this step take?

Best, Carla

CarlaCristinaUranga commented 4 years ago

Hi, I am running it with 15 GB of memory already. I will send over the log file if it doesn't proceed in a couple of hours. Thanks!

hbarsnes commented 4 years ago

How long should this step take?

This depends heavily on the size of your spectrum files and the protein sequence database used, and to some extent on the search settings.

I am running it with 15 GB of memory already.

The new beta version should require less memory than the current release, but we have not yet tested this in detail. For the old version we often used 60+ GB of memory when processing large datasets. Note that it is possible to process large datasets with a lot less memory; it will just take longer. I'd recommend letting it run overnight.
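As a quick sanity check (generic JVM behavior, not anything PeptideShaker-specific), a small snippet like the following prints the maximum heap a running JVM was actually granted, which should roughly match the memory setting you configured:

```java
// Generic JVM check, not part of PeptideShaker: prints the maximum heap
// size this JVM may grow to, i.e. the effective -Xmx value.
public class MaxHeapCheck {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("Max heap available: %.1f GB%n",
                maxBytes / (1024.0 * 1024.0 * 1024.0));
    }
}
```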

hbarsnes commented 4 years ago

The issue has been closed given that the initial double insertion error was solved. A new beta version will be released as soon as possible.