Closed. GoogleCodeExporter closed this issue 9 years ago.
Thank you for reporting your problem starting PeptideShaker.
First, if the startup fails, a file called PeptideShaker.log should have been
created. You'll find this file in the PeptideShaker/conf folder. Its content
should tell us why the tool failed to start, i.e., whether it is indeed a
standard memory issue (which I find rather strange given your setup) or
something else. Could you provide the content of this file?
PeptideShaker does not require 64-bit Java, but if you want to run it on bigger
files you'll need the 64-bit version in order to use more than 2 GB of memory.
Could you upload the files you are trying to open in PeptideShaker? I'll then
verify if I'm able to make them work on our side.
Original comment by harald.b...@gmail.com
on 13 Jul 2011 at 9:44
Regarding the files you're trying to open in PeptideShaker:
I assume that the mgf file is the test_spectra.mgf file found in the
X!Tandem bin folder?
And that the result file is the one created running 'tandem input.xml'
with the default settings?
In that case, which FASTA file do you use? The FASTA files provided in
the X!Tandem download are not true FASTA files but rather fasta.pro files,
and cannot be used as PeptideShaker input.
BTW, if you want to make sure that your X!Tandem output is compatible with
PeptideShaker I'd recommend using SearchGUI (http://searchgui.googlecode.com)
to set up the search.
Original comment by harald.b...@gmail.com
on 13 Jul 2011 at 10:03
I actually tried using an mgf file created from a 5500 QTrap run on a
6-protein digest from Dionex, with a derived UniProt database in
straight FASTA format, through your SearchGUI. When I did get
PeptideShaker to load and pointed it to this file, the program still
hung. I looked in Task Manager and a Java process was slowly
consuming memory all the way to ~1.6 GB and then just sitting there. I
let it go for about 45 minutes with no result. In the process, my
computer slowed to a crawl even though Task Manager showed little to no
CPU activity. Does PeptideShaker try to load the FASTA file into
memory? The X!Tandem result file is only 20 MB, so I was not sure why
Java was allocating so much memory. I want to try again tomorrow
using a different mgf and just the UniProt reviewed FASTA file, and
also on a different computer running just Win XP (32-bit).
Original comment by mreed1...@gmail.com
on 13 Jul 2011 at 11:53
Sounds like you are having some strange memory issues. How big is your FASTA file?
It will be interesting to see if it works better for you on standard Win XP.
In the meantime, I found and fixed some minor issues related to loading the
X!Tandem sample mgf file. I am now able to load the default search result
from X!Tandem using these files, with all of UniProt as the FASTA file, without
any issues.
I'll release a new version with these minor bug fixes shortly.
Original comment by harald.b...@gmail.com
on 14 Jul 2011 at 12:08
PeptideShaker version 0.9.1 has just been released.
It should fix some of the issues related to the use of the X!Tandem example file.
It does not solve the strange memory issues though...
See http://code.google.com/p/peptide-shaker/wiki/ReleaseNotes for the list of
changes.
Original comment by harald.b...@gmail.com
on 14 Jul 2011 at 12:43
The fasta file was a concatenated (real + decoy) file of about 900 MB.
Original comment by mreed1...@gmail.com
on 14 Jul 2011 at 11:06
[deleted comment]
Loading the FASTA file might be the issue then. Considering that the human
UniProt sequences only need 26 MB (target + decoy), it seems that you are
searching a very, very wide space. For the sake of search time, search engine
compatibility, and memory usage, I would really suggest limiting your FASTA
file to the species needed.
I would thus suggest using a simpler database as a first try (the desired
species' complement of UniProt, for instance). Then, if you really
need a larger database, we will adapt the code so it does not run into memory
issues.
Finally, in order to identify decoy sequences properly we need the accessions
to contain "REVERSED". Please make sure that your FASTA file abides by this
simple rule. This can be achieved by processing your target database using
SearchGUI or DBToolKit.
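As an illustration of the rule above, here is a minimal sketch (not the actual SearchGUI or DBToolKit code) of building a concatenated target+decoy FASTA in which every decoy accession contains "REVERSED". The exact header layout is an assumption for demonstration only; file names are placeholders:

```python
def reverse_fasta(target_path, output_path):
    """Write each target entry followed by a reversed-sequence decoy copy
    whose accession is tagged with 'REVERSED' (hypothetical tag placement)."""

    def entries(path):
        # Stream (header, sequence) pairs from a possibly multi-line FASTA.
        header, seq = None, []
        with open(path) as fh:
            for line in fh:
                line = line.rstrip("\n")
                if line.startswith(">"):
                    if header is not None:
                        yield header, "".join(seq)
                    header, seq = line, []
                else:
                    seq.append(line)
        if header is not None:
            yield header, "".join(seq)

    with open(output_path, "w") as out:
        for header, seq in entries(target_path):
            out.write(f"{header}\n{seq}\n")
            # Tag the decoy accession so the decoy hits can be recognized.
            accession = header[1:].split()[0]
            out.write(f">{accession}_REVERSED decoy\n{seq[::-1]}\n")
```

For example, `reverse_fasta("target.fasta", "target_decoy.fasta")` would double the entry count, appending a `_REVERSED` decoy after each target sequence.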
Original comment by mvau...@gmail.com
on 14 Jul 2011 at 12:04
Okay. I do try to target my databases to the species needed, but some of
the samples involve multiple species or underrepresented species, so
I have to use, for example, unreviewed UniProt/TrEMBL entries, making for
some large FASTA files. I use ProteinScape as my main data repository
and it requires the rnd prefix in order to recognize decoy sequences,
so that is what I normally use. I sometimes even end up using NCBInr
because we don't know exactly what is in the sample; it is a fishing
expedition, and I just accept that I can't trust some of the low-scoring
IDs. Sorry, I work in a core facility so we see a wide variety of samples.
On the Server 2008 R2 (64-bit) system, I am attaching a screenshot of
the error message, and the log file contains the following:
<ERROR>
Could not create the Java virtual machine.
The command line executed:
C:\Program Files (x86)\Java\jre6\bin\java -Xms128M -Xmx1600M -cp
"D:\PeptideShaker-0.9\PeptideShaker-0.9\PeptideShaker-0.9.jar"
eu.isas.peptideshaker.gui.PeptideShakerGUI
</ERROR>
<ERROR>
Could not create the Java virtual machine.
The command line executed:
C:\Program Files (x86)\Java\jre6\bin\java -Xms128M -Xmx1600M -cp
"D:\PeptideShaker-0.9\PeptideShaker-0.9\PeptideShaker-0.9.jar"
eu.isas.peptideshaker.gui.PeptideShakerGUI
</ERROR>
If I set -Xmx to 1500M or below, it runs fine.
Original comment by mreed1...@gmail.com
on 14 Jul 2011 at 12:28
Ok, so we have two related issues here: 1) not being able to set the memory
limit high enough (even though there's plenty of memory), and 2) the huge
FASTA files.
The use of FASTA files this big will require some changes to how we handle
these files, as the files we have used have never been of this size. We'll see
what we can come up with and get back to you on that. It should be possible
to fix.
As for the setting of the upper memory limit when starting the tool, this seems
to be a general issue with your Java installation and not with PeptideShaker
itself. If you have 32 GB of memory available you should be able to use more
than 1500 MB of them...
I'd recommend installing the 64-bit Java version and seeing if that helps.
After installing it, try running the command above from the command line, but
make sure that you are referring to the new 64-bit Java version, as Windows
sometimes gets confused when both 32- and 64-bit Java are installed. See
http://code.google.com/p/peptide-shaker/#Troubleshooting (Memory Issues II -
Advanced).
It might also be that your machine has an upper limit for how much virtual
memory Java can use. This can be changed though. See:
http://code.google.com/p/pride-converter/#Troubleshooting (Memory Issues IV)
for help.
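A quick way to confirm that the limit comes from the Java installation rather than from PeptideShaker is to start a bare JVM with increasing -Xmx values and see where it stops launching. A minimal sketch (assumes `java` is on the PATH; the size list is arbitrary):

```python
import subprocess

def heap_probe(sizes, run=subprocess.run):
    """Return {size: bool} indicating whether 'java -Xmx<size> -version'
    manages to create a JVM. 'run' is injectable to allow testing."""
    results = {}
    for mx in sizes:
        proc = run(["java", f"-Xmx{mx}", "-version"], capture_output=True)
        # A non-zero exit code corresponds to errors such as
        # "Could not create the Java virtual machine."
        results[mx] = proc.returncode == 0
    return results
```

For example, `heap_probe(["1500M", "1600M", "2G", "4G"])` on the machine described above should show the cutoff between 1500M and 1600M, independent of any PeptideShaker code.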
Let us know if any of this helps at all.
BTW, I cannot find the screenshot of the error message you mention...
Original comment by harald.b...@gmail.com
on 14 Jul 2011 at 9:18
As mentioned, we are encountering several issues which are not necessarily
PeptideShaker issues:
1) It is not possible to create a JVM with more than 1600 MB of memory on a
32 GB RAM machine: you should check your Java installation.
2) The FASTA file is huge. On our side, we will make it possible to work with
such files in the following releases. On your side, we would suggest
considering reducing the search space: (1) MS/MS searches with such databases
take ages, (2) the probability of finding degenerate peptides increases with
the database size, and the protein inference quickly becomes extremely
complex, (3) when analyzing a sample you will find isoforms of your proteins
across all species, so you will need to know what kind of sample it was
anyway. In conclusion, we will do our best, but still strongly recommend
making realistic searches; and always start simple :)
3) The decoy tag is not compatible with your databases: I will correct that
very soon. However, we have verified the performance of PeptideShaker only
with concatenated forward/reversed sequences: if you use another kind of
database, it is at your own risk!
Original comment by mvau...@gmail.com
on 15 Jul 2011 at 9:26
Original comment by harald.b...@gmail.com
on 9 Oct 2011 at 7:29
PeptideShaker v0.10.0 has just been released. Please let us know if this solves
your issues or not.
Original comment by harald.b...@gmail.com
on 19 Oct 2011 at 3:29
Original comment by harald.b...@gmail.com
on 19 Oct 2011 at 3:44
Original issue reported on code.google.com by
mreed1...@gmail.com
on 13 Jul 2011 at 5:50