Using import filters should not be necessary and would not have that big an
effect on the memory usage anyway. And 16 GB should be enough to load your
data. If PeptideShaker gets close to running out of memory it should simply
start using the disk more, which will be slower but should work.
At which stage is PeptideShaker when it runs out of memory? And could you
send me the PeptideShaker log file? (The issue attachment storage quota has
been exceeded. Trying to get it extended. But for now just send the log file to
me directly.)
Original comment by harald.b...@gmail.com
on 9 Mar 2014 at 11:50
I sent the log file to your gmail.
Please have a check.
Thanks.
Original comment by andy...@gmail.com
on 9 Mar 2014 at 11:56
Ok, got the file. So you are running out of memory in the post-processing of
the search results. This should not happen. How much memory do you have
available on the machine in total?
I would not recommend setting the amount available for PeptideShaker equal to
the total amount of memory on the machine. So if you have 16 GB of memory on
the machine I'd recommend giving maximum 14 to PeptideShaker. This should
reduce the chance of PeptideShaker running out of memory as it will become
aware of the memory issues sooner and start using the disk more.
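To make the headroom advice above concrete, here is a minimal sketch of launching PeptideShaker with an explicit JVM heap limit via `-Xmx`, leaving about 2 GB for the operating system. The jar filename and version are illustrative assumptions; PeptideShaker can also be started through its own launcher, which manages memory settings for you.

```shell
# Assumed: PeptideShaker is a Java jar; filename/version are illustrative.
TOTAL_GB=16              # total RAM on the machine
HEAP_GB=$((TOTAL_GB - 2))  # leave ~2 GB headroom for the OS

# Print the launch command with the computed heap limit (-Xmx14G here).
echo "java -Xmx${HEAP_GB}G -jar PeptideShaker-0.27.0.jar"
```

The point of the headroom is that the JVM hits its memory ceiling and starts spilling to disk before the operating system itself runs short of RAM.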
How big are the mgf files you are searching with? And are they peak picked? And
what is the database used?
Original comment by harald.b...@gmail.com
on 9 Mar 2014 at 2:04
I actually have 16 GB of memory on the machine.
I have searched 22 mgf files, which are 80 MB on average. They have
already been peak picked, and I used the mouse UniProt database, about 58
MB including decoy entries.
Original comment by andy...@gmail.com
on 9 Mar 2014 at 2:19
I gave 14 GB to PeptideShaker, but it still ran out of memory.
Attached is the log file.
Original comment by andy...@gmail.com
on 9 Mar 2014 at 4:20
Just wanted to confirm that I've been able to reproduce the memory issues. I
will do some tests and see if I'm able to track down the problem and hopefully
fix it.
BTW, how did it go with loading just the X!Tandem results?
Original comment by harald.b...@gmail.com
on 14 Mar 2014 at 1:00
I have loaded only the X!Tandem and Mascot results, and I want to abandon OMSSA,
whose searches are time-consuming and whose results are hard for
PeptideShaker to read.
Anyway, I hope you can fix this bug.
I much appreciate that your team developed such robust and easy-to-use
software as SearchGUI and PeptideShaker.
Best,
Chuan-Qi
Original comment by andy...@gmail.com
on 14 Mar 2014 at 1:54
So just to be clear, you are able to load the X!Tandem and Mascot results in
PeptideShaker without memory problems then?
Original comment by harald.b...@gmail.com
on 14 Mar 2014 at 2:15
Yes.
Original comment by andy...@gmail.com
on 14 Mar 2014 at 2:25
With the release of PeptideShaker v0.27.0 the memory problems should be solved.
Loading data when all the memory is used up can be time-consuming, though
we are trying to speed this up.
But I can confirm that with the new version I am able to load all your OMSSA
files when giving PeptideShaker 15 GB of memory, which was not possible in the
previous version.
Note that the new SearchGUI and PeptideShaker versions now also
support MS-GF+, so you could try replacing OMSSA with MS-GF+.
Original comment by harald.b...@gmail.com
on 26 Mar 2014 at 8:14
Original issue reported on code.google.com by
andy...@gmail.com
on 9 Mar 2014 at 11:07