daichengxin opened this issue 2 weeks ago
I have analysed this dataset many times. Never had issues.
You can trace the memory consumption while it's running.
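For example, a small polling script can log the resident set size of the ProteomicsLFQ process over time. A minimal sketch for Linux, reading `/proc` directly (the 5-second interval is arbitrary; pass the PID of the running process as the first argument):

```python
import sys
import time

def rss_kib(pid: int) -> int:
    """Return the current resident set size of `pid` in KiB (Linux /proc)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # value is reported in kB
    return 0

if __name__ == "__main__":
    pid = int(sys.argv[1])  # PID of the running ProteomicsLFQ process
    while True:
        try:
            mib = rss_kib(pid) / 1024
        except FileNotFoundError:
            break  # process has exited
        print(f"{time.strftime('%H:%M:%S')}  RSS = {mib:.0f} MiB")
        time.sleep(5)
```

Plotting the logged values usually shows whether memory grows steadily (a leak or accumulation) or jumps at one specific step.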
Me too, but the latest release, quantms 1.3.0, uses OpenMS 3.2.0. We have other ongoing problems with this version and the mzTab export in ProteinQuantifier. @timosachsenberg Can you help us here?
The log does not indicate that it is the export. Is there a way we can find out where/when this regression was introduced? One option would be `git bisect` over the OpenMS history between the last known-good release and 3.2.0; see the sketch below.
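A minimal test script for `git bisect run` (exit 0 = good, 1 = bad, 125 = skip): it rebuilds the current revision and runs ProteomicsLFQ on the two problematic files under a memory cap. The build target, binary path, input arguments, and the 32 GB limit are all assumptions to be replaced with the exact call from the quantms log:

```python
#!/usr/bin/env python3
# Test script for `git bisect run`: exit 0 = good, 1 = bad, 125 = skip revision.
import resource
import subprocess
import sys

MEM_LIMIT = 32 * 1024**3  # 32 GB address-space cap; a healthy run should stay well below

def cap_memory():
    resource.setrlimit(resource.RLIMIT_AS, (MEM_LIMIT, MEM_LIMIT))

# Rebuild the current bisect revision; skip revisions that do not compile.
build = subprocess.run(["cmake", "--build", "build", "--target", "ProteomicsLFQ"])
if build.returncode != 0:
    sys.exit(125)

# Hypothetical invocation: substitute the exact ProteomicsLFQ command from the log.
proc = subprocess.run(
    ["build/bin/ProteomicsLFQ", "-in", "file1.mzML", "file2.mzML", "-out", "out.mzTab"],
    preexec_fn=cap_memory,
)
sys.exit(0 if proc.returncode == 0 else 1)
```

Then `git bisect start`, mark the bad and good releases, and run `git bisect run python bisect_test.py`; git narrows the regression down to a single commit.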
I retried OpenMS 3.2.0 and traced the memory usage. It does exceed the available memory. Why is it running out of memory? The mzML files are only 10 GB in total. I haven't encountered this before either.
Thanks for checking. This is really suspicious. Can you reproduce this with, e.g., one or two files?
I can reproduce this with two files, but a single file works. Test files: https://www.dropbox.com/scl/fi/jgbw0pvnm18cga1kwgy54/proteomicslfq.zip?rlkey=6igoyec9ffztk9p8f4uriukct&st=osldx9cp&dl=0
I can confirm that it uses 400 GB for two small files during feature extraction. My first guess would be that something inside, e.g., the OpenSWATH code might have changed.
Likely related to a different conversion using ThermoRawFileParser (TRFP). The file works with ProteoWizard msconvert.
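If the converter is the suspect, it may help to compare the TRFP and msconvert outputs of the same raw file directly, e.g. spectrum counts, MS levels, and peaks per MS1 spectrum (profile vs. centroided data is a common cause of memory blow-ups during feature finding). A minimal sketch with pyopenms; the file names are placeholders:

```python
from pyopenms import MSExperiment, MzMLFile

def summarize(path: str) -> None:
    # Load the mzML file and report basic spectrum statistics.
    exp = MSExperiment()
    MzMLFile().load(path, exp)
    ms1 = [s for s in exp if s.getMSLevel() == 1]
    peaks = sum(s.size() for s in ms1)
    print(f"{path}: {exp.getNrSpectra()} spectra, {len(ms1)} MS1, "
          f"{peaks} MS1 peaks (avg {peaks / max(len(ms1), 1):.0f} per MS1 spectrum)")

# Placeholder names: the same raw file converted two ways.
summarize("file1_trfp.mzML")
summarize("file1_msconvert.mzML")
```

A much higher average peak count per MS1 spectrum in one of the files would indicate profile (non-centroided) data, which inflates memory in the feature finder.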
Description of the bug
The errors were reported when I ran the PXD001819 LFQ dataset (about 10 GB of mzML files). It looks like it's running out of memory, but the available memory is 120 GB, so I'm not sure whether this is normal or not.
Command used and terminal output
Relevant files
log file: proteomicslfq.log
System information
quantms 1.3.0