vdemichev / DiaNN

DIA-NN - a universal automated software suite for DIA proteomics data analysis.

Issue while processing Thermo RAW for phospho analysis #978


ChintanBhavsar93 commented 5 months ago

Hi there,

Relatively new here!

I encountered this error while processing .raw files acquired on a Thermo QE HF for phosphoproteomics. Note: DIA-NN works perfectly fine for standard proteomics for us; however, phospho processing gives us this error. Any leads would be highly appreciated.

```
diann.exe --f "C:\Users\uqcbhavs\Desktop\MS\20240319_tcbhavsa_HF_TRI_MS_Run_47\20240319_tcbhavsa_HF_TRI_MS_Run_47_GO569_Control_phos_S_Sample.raw " --f "C:\Users\uqcbhavs\Desktop\MS\20240319_tcbhavsa_HF_TRI_MS_Run_47\20240319_tcbhavsa_HF_TRI_MS_Run_47_GO569_Treated_phos_S_Sample.raw " --lib "" --threads 12 --verbose 1 --out "C:\Users\uqcbhavs\Desktop\MS\Phospho_3\report.tsv" --qvalue 0.01 --matrices --out-lib "C:\DIA-NN\1.8.1\report-lib.tsv" --gen-spec-lib --predictor --fasta "C:\Users\uqcbhavs\Desktop\FASTA\UP000005640_9606_additional.fasta\UP000005640_9606_additional.fasta" --fasta "C:\Users\uqcbhavs\Desktop\FASTA\UP000005640_9606.fasta\UP000005640_9606.fasta" --fasta-search --min-fr-mz 100 --max-fr-mz 1800 --met-excision --cut K,R --missed-cleavages 1 --min-pep-len 7 --max-pep-len 30 --min-pr-mz 300 --max-pr-mz 1800 --min-pr-charge 2 --max-pr-charge 4 --unimod4 --var-mods 2 --var-mod UniMod:21,79.966331,STY --monitor-mod UniMod:21 --no-prot-inf --reanalyse --relaxed-prot-inf --smart-profiling --peak-center --no-ifs-removal

DIA-NN 1.8.1 (Data-Independent Acquisition by Neural Networks)
Compiled on Apr 14 2022 15:31:19
Current date and time: Mon Mar 25 14:18:47 2024
CPU: GenuineIntel 12th Gen Intel(R) Core(TM) i5-12500
SIMD instructions: AVX AVX2 FMA SSE4.1 SSE4.2
Logical CPU cores: 12
Thread number set to 12
Output will be filtered at 0.01 FDR
Precursor/protein x samples expression level matrices will be saved along with the main report
A spectral library will be generated
Deep learning will be used to generate a new in silico spectral library from peptides provided
Library-free search enabled
Min fragment m/z set to 100
Max fragment m/z set to 1800
N-terminal methionine excision enabled
In silico digest will involve cuts at K,R
Maximum number of missed cleavages set to 1
Min peptide length set to 7
Max peptide length set to 30
Min precursor m/z set to 300
Max precursor m/z set to 1800
Min precursor charge set to 2
Max precursor charge set to 4
Cysteine carbamidomethylation enabled as a fixed modification
Maximum number of variable modifications set to 2
Modification UniMod:21 with mass delta 79.9663 at STY will be considered as variable
Protein inference will not be performed
A spectral library will be created from the DIA runs and used to reanalyse them; .quant files will only be saved to disk during the first step
Highly heuristic protein grouping will be used, to reduce the number of protein groups obtained; this mode is recommended for benchmarking protein ID numbers; use with caution for anything else
When generating a spectral library, in silico predicted spectra will be retained if deemed more reliable than experimental ones
Fixed-width center of each elution peak will be used for quantification
Interference removal from fragment elution curves disabled
DIA-NN will optimise the mass accuracy automatically using the first run in the experiment. This is useful primarily for quick initial analyses, when it is not yet known which mass accuracy setting works best for a particular acquisition scheme.
Exclusion of fragments shared between heavy and light peptides from quantification is not supported in FASTA digest mode - disabled; to enable, generate an in silico predicted spectral library and analyse with this library
The following variable modifications will be scored: UniMod:21
```

```
2 files will be processed
[0:00] Loading FASTA C:\Users\uqcbhavs\Desktop\FASTA\UP000005640_9606_additional.fasta\UP000005640_9606_additional.fasta
[0:06] Loading FASTA C:\Users\uqcbhavs\Desktop\FASTA\UP000005640_9606.fasta\UP000005640_9606.fasta
[0:13] Processing FASTA
[2:32] Assembling elution groups
[6:49] 32749237 precursors generated
[6:54] Gene names missing for some isoforms
[6:54] Library contains 82078 proteins, and 20541 genes
[7:08] Encoding peptides for spectra and RTs prediction
[12:54] Predicting spectra and IMs

Libtorch error:
The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
  File "code/__torch__/___torch_mangle_7.py", line 50, in forward
    hidden = torch.zeros([_9, _10, hidden_size], dtype=None, layout=None, device=torch.device(device0))
    rnn = self.rnn
    output, hidden0, = (rnn).forward__0(conv1, hidden, )
                        ~~~~~~~~~~~~~~~ <--- HERE
    _11 = torch.slice(torch.slice(output), 1, None, 1)
    output0 = torch.slice(_11, 2)
  File "code/__torch__/torch/nn/modules/rnn/___torch_mangle_4.py", line 42, in forward__0
    _flat_weights = self._flat_weights
    training = self.training
    _3, _4 = torch.gru(input, hx0, _flat_weights, True, 1, 0.20000000000000001, training, True, True)
             ~~~~~~~~~ <--- HERE
    _5 = (_3, (self).permute_hidden(_4, None, ))
    return _5

Traceback of TorchScript, original code (most recent call last):
  File "<ipython-input-213-c8b72e7c9584>", line 31, in forward

        hidden = torch.zeros(self.directions * self.layers,input.size(0),self.hidden_size,device=self.device)
        output, hidden = self.rnn(conv, hidden)
                         ~~~~~~~~ <--- HERE
        output = output[:,:1,:]
        prepare = torch.rrelu(self.prepare(output))
  File "C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\rnn.py", line 849, in forward__0
        self.check_forward_args(input, hx, batch_sizes)
        if batch_sizes is None:
            result = _VF.gru(input, hx, self._flat_weights, self.bias, self.num_layers,
                     ~~~~~~~ <--- HERE
                             self.dropout, self.training, self.bidirectional, self.batch_first)
        else:
RuntimeError: [enforce fail at ..\..\c10\core\CPUAllocator.cpp:76] data. DefaultCPUAllocator: not enough memory: you tried to allocate 3145728 bytes.
```
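For context on the precursor count in the log above: with UniMod:21 allowed as a variable modification on S/T/Y and up to two variable mods per peptide (`--var-mods 2`), every peptide fans out into one peptidoform per subset of up to two phospho-acceptor sites, and each peptidoform is then considered at charges 2-4. A minimal Python sketch of that combinatorics (the example sequence is purely illustrative):

```python
from math import comb

def phospho_forms(peptide: str, max_var_mods: int = 2) -> int:
    """Count peptidoforms when up to max_var_mods phospho groups
    may occupy any S/T/Y residue (mirrors --var-mods 2)."""
    sites = sum(peptide.count(aa) for aa in "STY")
    return sum(comb(sites, k) for k in range(min(sites, max_var_mods) + 1))

# A peptide with 5 S/T/Y sites yields 1 + 5 + 10 = 16 peptidoforms,
# each then searched at precursor charges 2, 3 and 4.
print(phospho_forms("SGTASSYVLK"))  # -> 16
```

Multiplied over a whole-proteome tryptic digest with one missed cleavage, this is how a library-free phospho search arrives at ~32.7 million precursors, far more than the equivalent unmodified search.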

```
Libtorch error:
The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
  File "code/__torch__/___torch_mangle_2.py", line 51, in forward
    hidden = torch.zeros([_9, _10, hidden_size], dtype=None, layout=None, device=torch.device(device0))
    rnn = self.rnn
    output, hidden0, = (rnn).forward__0(conv1, hidden, )
                        ~~~~~~~~~~~~~~~ <--- HERE
    prepare = self.prepare
    prepare0 = torch.rrelu((prepare).forward(output, ))
  File "code/__torch__/torch/nn/modules/rnn.py", line 50, in forward__0
    _flat_weights = self._flat_weights
    training = self.training
    _3, _4 = torch.gru(input, hx0, _flat_weights, True, 2, 0.29999999999999999, training, True, True)
             ~~~~~~~~~ <--- HERE
    _5 = (_3, (self).permute_hidden(_4, None, ))
    return _5

Traceback of TorchScript, original code (most recent call last):
  File "<ipython-input-44-0dd9b29dfda4>", line 32, in forward

        hidden = torch.zeros(self.directions * self.layers, input.size(0), self.hidden_size, device = self.device)
        output, hidden = self.rnn(conv, hidden)
                         ~~~~~~~~ <--- HERE
        prepare = torch.rrelu(self.prepare(output))
        out = self.out(prepare)
  File "C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\rnn.py", line 849, in forward__0
        self.check_forward_args(input, hx, batch_sizes)
        if batch_sizes is None:
            result = _VF.gru(input, hx, self._flat_weights, self.bias, self.num_layers,
                     ~~~~~~~ <--- HERE
                             self.dropout, self.training, self.bidirectional, self.batch_first)
        else:
RuntimeError: [enforce fail at ..\..\c10\core\CPUAllocator.cpp:76] data. DefaultCPUAllocator: not enough memory: you tried to allocate 6291456 bytes.
```

[The identical Libtorch error and traceback then repeat four more times, with failed allocations of 18874368, 6291456, 18874368, and 6291456 bytes respectively.]
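Note the pattern in these failures: the allocations that fail are tiny (3-19 MB), which indicates the process had already exhausted available RAM before these requests, rather than any single request being unreasonably large. A rough back-of-envelope for the library above (the per-precursor byte count is an assumption for illustration, not DIA-NN's actual internal layout):

```python
# Rough RAM estimate for holding predictor inputs/outputs for the whole library.
precursors = 32_749_237   # from the log: "32749237 precursors generated"
bytes_each = 400          # assumed: encoded sequence plus predicted fragment
                          # intensities and RT, stored as float32
print(f"~{precursors * bytes_each / 1024**3:.1f} GB")  # ~12.2 GB, before any
                                                       # search/quant structures
```

Even under conservative assumptions, prediction for a library this size can plausibly exceed the RAM of a typical desktop once DIA-NN's other data structures are added on top.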

```
DIA-NN exited
DIA-NN-plotter.exe "C:\Users\uqcbhavs\Desktop\MS\Phospho_3\report.stats.tsv" "C:\Users\uqcbhavs\Desktop\MS\Phospho_3\report.tsv" "C:\Users\uqcbhavs\Desktop\MS\Phospho_3\report.pdf"
PDF report will be generated in the background
```

![Screenshot 2024-03-25 115614](https://github.com/vdemichev/DiaNN/assets/101312379/acf631d9-497c-444b-beb4-f78146430478)
vdemichev commented 5 months ago

This is not related to Thermo. Most likely there is not enough RAM in the system to generate a predicted library for 32 million precursors. Also, please try 1.8.2 beta 27 for this; it has a much improved predictor.
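For readers hitting the same wall on limited hardware, one purely illustrative way to shrink the in-silico library so prediction fits in RAM is to tighten the digest and precursor-space flags already present in the command above. The values below are assumptions, not recommendations, and whether such restrictions are acceptable depends on the experiment and acquisition scheme:

```
diann.exe [same --f / --fasta / --out / library flags as above] --var-mods 1 --max-pep-len 25 --max-pr-charge 3 --min-pr-mz 350 --max-pr-mz 1200
```

Allowing one phospho per peptide instead of two, and matching `--min-pr-mz`/`--max-pr-mz` to the method's actual DIA isolation window range, cuts the number of peptidoforms the predictor must handle; running on a machine with more RAM avoids the trade-off entirely.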