Closed lgatto closed 5 years ago
@lgatto have you tried the conda package? conda install -c conda-forge -c bioconda moff
Yes, I did, but the conda version seems to be outdated (it doesn't have the latest features I was interested in): the moff.py
from conda is substantially smaller than the one on GitHub and doesn't have the code that needs to be modified, as documented in the README file.
Hi @lgatto ,
it was my mistake when I wrote the documentation.
The right command is :
python moff_all.py --config_file absense_peak_data/configuration_iRT.ini
Thanks for reporting this, I will update the README asap. Let me know if it does not work.
The conda version has been recently updated to version 2.0.1.
Thanks, will try it out tomorrow and report back.
Some progress... but not quite successful yet.
Running
python moff_all.py --config_file absence_peak_data/configuration_iRT.ini
(note that you have a typo above: absens[c]e_peak_data)
with
$cat absence_peak_data/configuration_iRT.ini
[moFF_parameters]
loc_in= absence_peak_data/
raw_repo= absence_peak_data/iRt_peptide_dataset/
xic_length= 4
rt_peak_win= 1
rt_peak_win_match= 1.1
tol= 5
cpu= 0
peptide_summary=
loc_out= absence_peak_data/output
sample=
ext=txt
log_label = moFF
w_filt = 1.5
out_flag= True
w_comb=
mbr = off
match_filter = True
ptm_file = ptm_setting_mq.json
quantile_thr_filtering = 0.85
sample_size = 0.10
I get
/usr/lib/python3.7/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning:
the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
moff_all.py:35: DeprecationWarning:
The SafeConfigParser class has been renamed to ConfigParser in Python 3.2. This alias will be removed in future versions. Use ConfigParser directly instead.
mbr peptide not detect in the input file, filtering of mbr peptides is not possible. Please set --match_filter to 0 and run again.
I tried to set match_filter = False in the config file, which leads to the same error. Adding the argument on the command line (either as False or 0) fails:
$ python moff_all.py --match_filter False --config_file absence_peak_data/configuration_iRT.ini
/usr/lib/python3.7/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning:
the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
moff_all.py:35: DeprecationWarning:
The SafeConfigParser class has been renamed to ConfigParser in Python 3.2. This alias will be removed in future versions. Use ConfigParser directly instead.
usage: moff_all.py [-h] [--loc_in LOC_IN]
[--tsv_list [TSV_LIST [TSV_LIST ...]]]
[--raw_list [RAW_LIST [RAW_LIST ...]]] [--sample SAMPLE]
[--ext EXT] [--log_label LOG_LABEL] [--w_filt W_FILT]
[--out_flag] [--w_comb] [--tol TOL]
[--xic_length XIC_LENGTH] [--rt_peak_win RT_PEAK_WIN]
[--rt_peak_win_match RT_PEAK_WIN_MATCH]
[--raw_repo RAW_REPO] [--loc_out LOC_OUT]
[--rt_feat_file RT_FEAT_FILE] [--peptide_summary]
[--tag_pepsum TAG_PEPSUM] [--match_filter]
[--ptm_file PTM_FILE]
[--quantile_thr_filtering QUANTILE_THR_FILTERING]
[--sample_size SAMPLE_SIZE] [--mbr MBR] [--cpu CPU_NUM]
moff_all.py: error: unrecognized arguments: False
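The `unrecognized arguments: False` message is what argparse produces when a flag is declared with `action='store_true'`, so that it takes no value at all. A minimal sketch reproducing the behaviour (a hypothetical parser for illustration, not moFF's actual code):

```python
import argparse

# Hypothetical parser: a store_true flag takes no value; its mere
# presence on the command line sets it to True.
parser = argparse.ArgumentParser()
parser.add_argument('--match_filter', action='store_true')

args = parser.parse_args(['--match_filter'])
print(args.match_filter)  # True

# An explicit value leaves "False" behind as an unexpected positional
# argument, which is exactly the "unrecognized arguments: False"
# failure above (argparse exits via SystemExit).
try:
    parser.parse_args(['--match_filter', 'False'])
    failed = False
except SystemExit:
    failed = True
print(failed)  # True
```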
Hi,
To set it to false, you leave the field empty, like:
match_filter=
In your case, it complained because mbr was set to off and it should not be. To run it correctly you should set mbr and match_filter in the configuration file to:
mbr = on
match_filter = True
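The empty-field convention makes sense if the option is read back as a raw string: configparser returns `''` for an empty value, and an empty string is falsy in Python. A small sketch of that behaviour (an assumption about how moFF interprets the field, not its actual code):

```python
import configparser

# Minimal reproduction of the convention: an option left empty reads
# back as '' (an empty string), which is falsy in Python.
cfg = configparser.ConfigParser()
cfg.read_string("""\
[moFF_parameters]
mbr = on
match_filter =
""")

section = cfg['moFF_parameters']
print(repr(section['match_filter']))  # ''
print(bool(section['match_filter']))  # False
print(section['mbr'])                 # 'on'
```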
I pushed a new version of the configuration file. This should work and it runs mbr + apex + filtering.
Regarding the line of code to change to run the absence-of-peak sample data: this is necessary because search engines use different ways to tag modifications. I will try to change, in the input file, the original MaxQuant tagging to the one used in PeptideShaker. Anyway, this discussion opens onto a broader question: 'how to deal with modifications in a standard way across different search engines'.
That is a tricky one indeed.
I have now set mbr = on
as per your instructions. I now get
$ python moff_all.py --config_file absence_peak_data/configuration_iRT.ini
/usr/lib/python3.7/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning:
the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
moff_all.py:35: DeprecationWarning:
The SafeConfigParser class has been renamed to ConfigParser in Python 3.2. This alias will be removed in future versions. Use ConfigParser directly instead.
Matching between run module (mbr)
Created MBR output folder in : /home/lgatto/tmp/moFF/absence_peak_data/mbr_output
B002421_Ap_22cm_iRT_PRC-Hans_equimolar_100fmol.txt
B002419_Ap_22cm_iRT_PRC-Hans_equimolar_100fmol_inYeast.txt
B002417_Ap_22cm_iRT_PRC-Hans_equimolar_100fmol.txt
B002413_Ap_22cm_Yeast_171215184201.txt
Reading file: absence_peak_data/B002421_Ap_22cm_iRT_PRC-Hans_equimolar_100fmol.txt
Reading file: absence_peak_data/B002419_Ap_22cm_iRT_PRC-Hans_equimolar_100fmol_inYeast.txt
Reading file: absence_peak_data/B002417_Ap_22cm_iRT_PRC-Hans_equimolar_100fmol.txt
Reading file: absence_peak_data/B002413_Ap_22cm_Yeast_171215184201.txt
Read input --> done
Traceback (most recent call last):
File "moff_all.py", line 201, in <module>
res_state, output_list_loc = moff_mbr.run_mbr(args)
File "/home/lgatto/tmp/moFF/moff_mbr.py", line 515, in run_mbr
lambda x: combine_model(x, model_save, model_err, args.w_comb),axis=1)
File "/usr/lib/python3.7/site-packages/pandas/core/frame.py", line 6014, in apply
return op.get_result()
File "/usr/lib/python3.7/site-packages/pandas/core/apply.py", line 142, in get_result
return self.apply_standard()
File "/usr/lib/python3.7/site-packages/pandas/core/apply.py", line 248, in apply_standard
self.apply_series_generator()
File "/usr/lib/python3.7/site-packages/pandas/core/apply.py", line 277, in apply_series_generator
results[i] = self.f(v)
File "/home/lgatto/tmp/moFF/moff_mbr.py", line 515, in <lambda>
lambda x: combine_model(x, model_save, model_err, args.w_comb),axis=1)
File "/home/lgatto/tmp/moFF/moff_mbr.py", line 80, in combine_model
app_sum = app_sum + (model[ii].predict(x[ii])[0][0])
File "/usr/lib/python3.7/site-packages/sklearn/linear_model/base.py", line 213, in predict
return self._decision_function(X)
File "/usr/lib/python3.7/site-packages/sklearn/linear_model/base.py", line 196, in _decision_function
X = check_array(X, accept_sparse=['csr', 'csc', 'coo'])
File "/usr/lib/python3.7/site-packages/sklearn/utils/validation.py", line 540, in check_array
"if it contains a single sample.".format(array))
ValueError: ('Expected 2D array, got scalar array instead:\narray=1522.86.\nReshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.', 'occurred at index 0')
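The traceback ends in scikit-learn's `predict`, which requires a 2-D array of shape `(n_samples, n_features)`; passing a bare scalar like `1522.86` fails the `check_array` validation. The reshape the error message suggests looks like this (illustration of the message only; where such a fix would belong in `moff_mbr.py` is for the developers to decide):

```python
import numpy as np

x = 1522.86                      # the scalar value the model received
X = np.array(x).reshape(1, -1)   # one sample with a single feature
print(X.shape)  # (1, 1)

# model.predict(X) would now receive the 2-D input scikit-learn expects.
```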
I know this issue.
Which version of scikit-learn are you using?
Try to update to version 0.20.0 with:
conda update scikit-learn
>>> import sklearn
/usr/lib/python3.7/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
>>> sklearn.__version__
'0.20.0'
I installed it with my package manager.
Hi, I did some tests and I am not able to reproduce this error.
Can you tell me the version of python that you are using? Run conda list in your environment.
What happens if you downgrade to python 3.6 instead of py3.7? Or just make a virtual env in conda with python 3.6.
Here's what I have
$ python --version
Python 3.7.1
$ ./bin/miniconda2/bin/conda list
# packages in environment at /home/lgatto/bin/miniconda2:
#
# Name Version Build Channel
argparse 1.4.0 py27_0 bioconda
asn1crypto 0.24.0 py27_0
backports.functools-lru-cache 1.5 <pip>
blas 1.0 mkl
brain-isotopic-distribution 1.4.0 <pip>
ca-certificates 2018.10.15 ha4d7672_0 conda-forge
certifi 2018.10.15 py27_1000 conda-forge
cffi 1.11.5 py27he75722e_1
chardet 3.0.4 py27_1
conda 4.5.11 py27_1000 conda-forge
conda-env 2.6.0 1
configparser 3.5.0 <pip>
cryptography 2.3.1 py27hc365091_0
cycler 0.10.0 <pip>
Cython 0.29 <pip>
decorator 4.3.0 py_0 conda-forge
enum34 1.1.6 py27_1
functools32 3.2.3.2 py_3 conda-forge
futures 3.2.0 py27_0
gettext 0.19.8.1 h5e8e0c9_1 conda-forge
idna 2.7 py27_0
intel-openmp 2019.0 118
ipaddress 1.0.22 py27_0
ipython_genutils 0.2.0 py_1 conda-forge
jsonschema 2.6.0 py27_1002 conda-forge
jupyter_core 4.4.0 py_0 conda-forge
kiwisolver 1.0.1 <pip>
libedit 3.1.20170329 h6b74fdf_2
libffi 3.2.1 hd88cf55_4
libgcc-ng 8.2.0 hdf63c60_1
libgfortran-ng 7.2.0 hdf63c60_3 conda-forge
libstdcxx-ng 8.2.0 hdf63c60_1
lxml 4.2.5 <pip>
matplotlib 2.2.3 <pip>
mkl 2019.0 118
mkl_fft 1.0.6 py27_0 conda-forge
mkl_random 1.0.1 py27_0 conda-forge
moff 1.2.1 py27_1 bioconda
mono 5.14.0.177 hfc679d8_0 conda-forge
nbformat 4.4.0 py_1 conda-forge
ncurses 6.1 hf484d3e_0
numpy 1.15.2 py27h1d66e8a_1
numpy-base 1.15.2 py27h81de0dd_1
openssl 1.0.2p h470a237_1 conda-forge
pandas 0.20.3 py27_1 conda-forge
pip 10.0.1 py27_0
pip 18.1 <pip>
plotly 3.3.0 py_0 conda-forge
pycosat 0.6.3 py27h14c3975_0
pycparser 2.18 py27_1
pymzml 0.7.10 py_1 bioconda
pyopenssl 18.0.0 py27_0
pyparsing 2.2.2 <pip>
pysocks 1.6.8 py27_0
pyteomics 3.5.1 <pip>
python 2.7.15 h1571d57_0
python-dateutil 2.7.3 py_0 conda-forge
pytz 2018.5 py_0 conda-forge
readline 7.0 h7b6447c_5
requests 2.19.1 py27_0
retrying 1.3.3 py_2 conda-forge
ruamel_yaml 0.15.46 py27h14c3975_0
scikit-learn 0.20.0 py27h4989274_1
scipy 1.1.0 py27hfa4b5c9_1
setuptools 40.2.0 py27_0
simplejson 3.16.1 py27h470a237_0 conda-forge
six 1.11.0 py27_1
sqlite 3.24.0 h84994c4_0
subprocess32 3.5.3 <pip>
tk 8.6.8 hbc83047_0
traitlets 4.3.2 py27_1000 conda-forge
urllib3 1.23 py27_0
wheel 0.31.1 py27_0
yaml 0.1.7 had09818_2
zlib 1.2.11 ha838bed_2
Could you point me to some documentation to create and run a virtual env in conda, as I'm not familiar with it and prefer not to mess up my installation, as I'm relying on it for other software at the moment.
@lgatto what strikes me in this output is that many packages come from the default channel and not conda-forge. Please make sure that conda-forge has the highest priority.
Whenever possible, I use my package manager to install python packages, but that's more out of convenience and familiarity. Would you advise to use conda install instead?
And by make sure that conda-forge has the highest priority, do you mean when installing? If so, then I suppose the following secures that:
./bin/miniconda2/bin/conda config --add channels conda-forge
Warning: 'conda-forge' already in 'channels' list, moving to the top
Sounds good :)
This is what you should do https://bioconda.github.io/#set-up-channels
Or you can try conda create -n moff -c conda-forge -c bioconda moff. Conda is great; it's worth spending some time with it and trying to manage all environments with conda.
Ok, I see the problem. Conda is using python 2.7, that's why you get the error.
Once miniconda is installed:
$ conda config --add channels bioconda
$ conda config --add channels conda-forge
$ conda create --yes -n moff python=3.6
$ source activate moff
$ conda install numpy scipy scikit-learn pandas simplejson pyteomics pymzml
$ conda install --yes -c conda-forge mono
$ conda update pymzml
$ conda install --yes -c conda-forge brain-isotopic-distribution
If everything goes fine, open a python command line: the version should be 3.6. In this environment you can run moFF with the filtering. Let me know if this works for you.
Are you sure about the activate moff on its own? Found it - it's source activate moff on Linux.
Here is where I got to. I managed to install everything but pyteomics:
$ ~/bin/miniconda2/bin/conda install pyteomics
Solving environment: done
## Package Plan ##
environment location: /home/lgatto/bin/miniconda2
added / updated specs:
- pyteomics
The following NEW packages will be INSTALLED:
backports: 1.0-py_2 conda-forge
backports.functools_lru_cache: 1.5-py_1 conda-forge
backports_abc: 0.5-py_1 conda-forge
cycler: 0.10.0-py_1 conda-forge
dbus: 1.13.0-h3a4f0e9_0 conda-forge
expat: 2.2.5-hfc679d8_2 conda-forge
fontconfig: 2.13.1-h65d0f4c_0 conda-forge
freetype: 2.9.1-h6debe1e_4 conda-forge
glib: 2.56.2-h464dc38_1 conda-forge
gst-plugins-base: 1.12.5-hde13a9d_0 conda-forge
gstreamer: 1.12.5-h5856ed1_0 conda-forge
icu: 58.2-hfc679d8_0 conda-forge
jpeg: 9c-h470a237_1 conda-forge
kiwisolver: 1.0.1-py27h2d50403_2 conda-forge
libiconv: 1.15-h470a237_3 conda-forge
libpng: 1.6.35-ha92aebf_2 conda-forge
libuuid: 2.32.1-h470a237_2 conda-forge
libxcb: 1.13-h470a237_2 conda-forge
libxml2: 2.9.8-h422b904_5 conda-forge
libxslt: 1.1.32-h88dbc4e_2 conda-forge
lxml: 4.2.5-py27hc9114bc_0 conda-forge
matplotlib: 2.2.3-py27h8e2386c_0 conda-forge
pcre: 8.41-hfc679d8_3 conda-forge
pthread-stubs: 0.4-h470a237_1 conda-forge
pyparsing: 2.3.0-py_0 conda-forge
pyqt: 5.6.0-py27h8210e8a_7 conda-forge
pyteomics: 3.5.1-py_2 bioconda
qt: 5.6.2-hf70d934_9 conda-forge
singledispatch: 3.4.0.3-py27_1000 conda-forge
sip: 4.18.1-py27hfc679d8_0 conda-forge
sqlalchemy: 1.2.13-py27h470a237_0 conda-forge
subprocess32: 3.5.3-py27h470a237_0 conda-forge
tornado: 5.1.1-py27h470a237_0 conda-forge
xorg-libxau: 1.0.8-h470a237_6 conda-forge
xorg-libxdmcp: 1.1.2-h470a237_7 conda-forge
xz: 5.2.4-h470a237_1 conda-forge
Proceed ([y]/n)? y
Preparing transaction: done
Verifying transaction: done
Executing transaction: failed
ERROR conda.core.link:_execute(502): An error occurred while installing package 'bioconda::pyteomics-3.5.1-py_2'.
IOError(13, 'Permission denied')
Attempting to roll back.
Rolling back transaction: done
IOError(13, 'Permission denied')
I still went on and tried moFF:
$ python moff_all.py --config_file absence_peak_data/configuration_iRT.ini
/usr/lib/python3.7/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning:
the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
moff_all.py:35: DeprecationWarning:
The SafeConfigParser class has been renamed to ConfigParser in Python 3.2. This alias will be removed in future versions. Use ConfigParser directly instead.
Matching between run module (mbr)
MBR Output folder in : /home/lgatto/tmp/moFF/absence_peak_data/mbr_output
B002421_Ap_22cm_iRT_PRC-Hans_equimolar_100fmol.txt
B002419_Ap_22cm_iRT_PRC-Hans_equimolar_100fmol_inYeast.txt
B002417_Ap_22cm_iRT_PRC-Hans_equimolar_100fmol.txt
B002413_Ap_22cm_Yeast_171215184201.txt
Reading file: absence_peak_data/B002421_Ap_22cm_iRT_PRC-Hans_equimolar_100fmol.txt
Reading file: absence_peak_data/B002419_Ap_22cm_iRT_PRC-Hans_equimolar_100fmol_inYeast.txt
Reading file: absence_peak_data/B002417_Ap_22cm_iRT_PRC-Hans_equimolar_100fmol.txt
Reading file: absence_peak_data/B002413_Ap_22cm_Yeast_171215184201.txt
Read input --> done
Traceback (most recent call last):
File "moff_all.py", line 201, in <module>
res_state, output_list_loc = moff_mbr.run_mbr(args)
File "/home/lgatto/tmp/moFF/moff_mbr.py", line 515, in run_mbr
lambda x: combine_model(x, model_save, model_err, args.w_comb),axis=1)
File "/usr/lib/python3.7/site-packages/pandas/core/frame.py", line 6014, in apply
return op.get_result()
File "/usr/lib/python3.7/site-packages/pandas/core/apply.py", line 142, in get_result
return self.apply_standard()
File "/usr/lib/python3.7/site-packages/pandas/core/apply.py", line 248, in apply_standard
self.apply_series_generator()
File "/usr/lib/python3.7/site-packages/pandas/core/apply.py", line 277, in apply_series_generator
results[i] = self.f(v)
File "/home/lgatto/tmp/moFF/moff_mbr.py", line 515, in <lambda>
lambda x: combine_model(x, model_save, model_err, args.w_comb),axis=1)
File "/home/lgatto/tmp/moFF/moff_mbr.py", line 80, in combine_model
app_sum = app_sum + (model[ii].predict(x[ii])[0][0])
File "/usr/lib/python3.7/site-packages/sklearn/linear_model/base.py", line 213, in predict
return self._decision_function(X)
File "/usr/lib/python3.7/site-packages/sklearn/linear_model/base.py", line 196, in _decision_function
X = check_array(X, accept_sparse=['csr', 'csc', 'coo'])
File "/usr/lib/python3.7/site-packages/sklearn/utils/validation.py", line 540, in check_array
"if it contains a single sample.".format(array))
ValueError: ('Expected 2D array, got scalar array instead:\narray=1522.86.\nReshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.', 'occurred at index 0')
Not sure if this error is related to the previous pyteomics
installation error.
It is the same error as before, but pyteomics is not what causes this error.
About the pyteomics error, I really don't have any clue why it is not possible to install it. @bgruening, do you have any ideas about that?
If you run source activate moff and then conda list, which version of python do you have? 3.6.xx?
$ ~/bin/miniconda2/bin/activate moff
$ ~/bin/miniconda2/bin/conda list
# packages in environment at /home/lgatto/bin/miniconda2:
#
# Name Version Build Channel
argparse 1.4.0 py27_0 bioconda
asn1crypto 0.24.0 py27_0
backports.functools-lru-cache 1.5 <pip>
blas 1.0 mkl
brain-isotopic-distribution 1.4.0 py27_0 conda-forge
ca-certificates 2018.10.15 ha4d7672_0 conda-forge
certifi 2018.10.15 py27_1000 conda-forge
cffi 1.11.5 py27he75722e_1
chardet 3.0.4 py27_1
conda 4.5.11 py27_1000 conda-forge
conda-env 2.6.0 1
configparser 3.5.0 <pip>
cryptography 2.3.1 py27hc365091_0
cycler 0.10.0 <pip>
Cython 0.29 <pip>
decorator 4.3.0 py_0 conda-forge
enum34 1.1.6 py27_1
functools32 3.2.3.2 py_3 conda-forge
futures 3.2.0 py27_0
gettext 0.19.8.1 h5e8e0c9_1 conda-forge
idna 2.7 py27_0
intel-openmp 2019.0 118
ipaddress 1.0.22 py27_0
ipython_genutils 0.2.0 py_1 conda-forge
jsonschema 2.6.0 py27_1002 conda-forge
jupyter_core 4.4.0 py_0 conda-forge
kiwisolver 1.0.1 <pip>
libedit 3.1.20170329 h6b74fdf_2
libffi 3.2.1 hd88cf55_4
libgcc-ng 8.2.0 hdf63c60_1
libgfortran-ng 7.2.0 hdf63c60_3 conda-forge
libstdcxx-ng 8.2.0 hdf63c60_1
lxml 4.2.5 <pip>
matplotlib 2.2.3 <pip>
mkl 2019.0 118
mkl_fft 1.0.6 py27_0 conda-forge
mkl_random 1.0.1 py27_0 conda-forge
moff 1.2.1 py27_1 bioconda
mono 5.14.0.177 hfc679d8_0 conda-forge
nbformat 4.4.0 py_1 conda-forge
ncurses 6.1 hf484d3e_0
numpy 1.15.2 py27h1d66e8a_1
numpy-base 1.15.2 py27h81de0dd_1
openssl 1.0.2p h470a237_1 conda-forge
pandas 0.20.3 py27_1 conda-forge
pip 10.0.1 py27_0
pip 18.1 <pip>
plotly 3.3.0 py_0 conda-forge
pycosat 0.6.3 py27h14c3975_0
pycparser 2.18 py27_1
pymzml 2.0.5 py_0 bioconda
pyopenssl 18.0.0 py27_0
pyparsing 2.2.2 <pip>
pysocks 1.6.8 py27_0
pyteomics 3.5.1 <pip>
python 2.7.15 h1571d57_0
python-dateutil 2.7.3 py_0 conda-forge
pytz 2018.5 py_0 conda-forge
readline 7.0 h7b6447c_5
requests 2.19.1 py27_0
retrying 1.3.3 py_2 conda-forge
ruamel_yaml 0.15.46 py27h14c3975_0
scikit-learn 0.20.0 py27h4989274_1
scipy 1.1.0 py27hfa4b5c9_1
setuptools 40.2.0 py27_0
simplejson 3.16.1 py27h470a237_0 conda-forge
six 1.11.0 py27_1
sqlite 3.24.0 h84994c4_0
tk 8.6.8 hbc83047_0
traitlets 4.3.2 py27_1000 conda-forge
urllib3 1.23 py27_0
wheel 0.31.1 py27_0
yaml 0.1.7 had09818_2
zlib 1.2.11 ha838bed_2
As for python, I thought that using conda create --yes -n moff python=3.6
would force that environment to use 3.6. How could I check this? If I just type
$ python --version
Python 3.7.1
or
$ ~/bin/miniconda2/bin/python --version
Python 2.7.15 :: Anaconda, Inc.
neither of which matches, but I don't know if that's correct.
Well, there is something weird in the env that conda has created. You are right, we force the moff env to have python 3.6.
Here you can find some resources on how to manage the environment.
After source activate moff you should see something like (moff) $
in your prompt, and the python version inside the environment should be 3.6.
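A quick way to confirm which interpreter an activated environment actually uses is to ask Python itself; in an activated conda env the path should point inside that environment:

```python
import sys

# The interpreter path reveals which installation is actually active;
# in an activated conda env it should live under .../envs/moff/bin/python.
print(sys.executable)
print("%d.%d" % (sys.version_info.major, sys.version_info.minor))
```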
Thank you - I'll double check that I am using the right environment when installing and report back.
Ok, I have sorted it out. My environment wasn't activated properly and I ended up installing packages in both moff and base, hence the previous errors.
Things look good so far
python moff_all.py --config_file absence_peak_data/configuration_iRT.ini
Please install pynumpress: pip install pynumpress
No module named 'brainpy._c.composition'
Matching between run module (mbr)
MBR Output folder in : /home/lgatto/tmp/moFF/absence_peak_data/mbr_output
B002421_Ap_22cm_iRT_PRC-Hans_equimolar_100fmol.txt
B002419_Ap_22cm_iRT_PRC-Hans_equimolar_100fmol_inYeast.txt
B002417_Ap_22cm_iRT_PRC-Hans_equimolar_100fmol.txt
B002413_Ap_22cm_Yeast_171215184201.txt
Reading file: absence_peak_data/B002421_Ap_22cm_iRT_PRC-Hans_equimolar_100fmol.txt
Reading file: absence_peak_data/B002419_Ap_22cm_iRT_PRC-Hans_equimolar_100fmol_inYeast.txt
Reading file: absence_peak_data/B002417_Ap_22cm_iRT_PRC-Hans_equimolar_100fmol.txt
Reading file: absence_peak_data/B002413_Ap_22cm_Yeast_171215184201.txt
Read input --> done
matched features 8296 MS2 features 128
matched features 2449 MS2 features 5815
matched features 8321 MS2 features 106
matched features 1272 MS2 features 7084
Apex module...
Starting Apex for absence_peak_data/mbr_output/B002421_Ap_22cm_iRT_PRC-Hans_equimolar_100fmol_match.txt ...
moff Input file: absence_peak_data/mbr_output/B002421_Ap_22cm_iRT_PRC-Hans_equimolar_100fmol_match.txt XIC_tol 5.0 XIC_win 4.0000 moff_rtWin_peak 1.0000
RAW file from folder : absence_peak_data/iRt_peptide_dataset/
Output file in : absence_peak_data/output
Apex module has detected mbr peptides
starting estimation of quality measures..
quality measures estimation using 13 MS2 ident. peptides randomly sampled
MAD retention time along all isotope count 12.000000
mean 0.877151
std 1.868001
min 0.000000
25% 0.079480
50% 0.306373
75% 0.663477
max 6.714187
Name: RT_drift, dtype: float64
Estimated distribition ratio exp. int. left isotope vs. monoisotopic isotope count 1.00000
mean 0.68308
std NaN
min 0.68308
25% 0.68308
50% 0.68308
75% 0.68308
max 0.68308
Name: delta_log_int, dtype: float64
quality threhsold estimated : MAD_retetion_time 0.8922599999999874 Ratio Int. FakeIsotope/1estIsotope: 0.6830798256998425
starting apex quantification of MS2 peptides..
end apex quantification of MS2 peptides..
starting quantification with matched peaks using the quality filtering...
initial # matched peaks: (8296, 15)
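The `MAD retention time` summary in the log above suggests the quality threshold is derived from the median absolute deviation of the RT drift. A library-free sketch of a MAD computation (an assumption about the statistic being reported, not moFF's actual code):

```python
from statistics import median

def mad(values):
    """Median absolute deviation: median of |x - median(x)|."""
    m = median(values)
    return median(abs(v - m) for v in values)

# Toy RT-drift values (minutes); the real values come from the
# randomly sampled MS2-identified peptides mentioned in the log.
rt_drift = [0.08, 0.31, 0.66, 0.0, 6.71, 0.25, 0.40]
print(round(mad(rt_drift), 3))  # 0.23
```

Note how the single outlier (6.71) barely moves the MAD, which is why it is a robust choice for setting a filtering threshold.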
Thank you again for your help and patience. In addition to getting moff
running, which is great on its own, I have learned about conda environments and how to manage them, which I'm pretty happy about.
After installing all dependencies, the following command (based on the documentation) finished instantly without producing any output
I imagine I must have done something wrong, but in the absence of any warning or error, I can't debug further - any help would be appreciated. Also, and this might be related to the above, I noticed that if the configuration file doesn't exist, for example running
the application stops without any error or warning.