sneumann / xcms

This is the git repository matching the Bioconductor package xcms: LC/MS and GC/MS Data Analysis

chromatogram refine issues #636

Closed g0079 closed 2 years ago

g0079 commented 2 years ago

Hi all, I'm new to xcms. After working through the xcms documentation, I tried it on a batch of my own data. At the peak refinement step I get the error message below. My data were acquired in DDA mode on an Agilent 6545 QTOF. My R version is 4.2.1 and my xcms version is 3.18.0. I tried changing the parameters of `MergeNeighboringPeaksParam`; the error message changed, but an error is still thrown. Could you please help me find out what the problem is? Thank you very much.

```r
raw_file = fs::dir_ls("raw_data", recurse = T, glob = "*.mzML")
sample_name = str_extract(basename(raw_file), pattern = ".*(?=\\.)")
sample_group = c(rep("QC", 3), rep("OriginA", 4), rep("OriginB", 4),
                 rep("OriginC", 4), rep("OriginD", 4))
pd = data.frame(sample_name, sample_group)
raw_data = readMSData(raw_file, pdata = new("NAnnotatedDataFrame", pd),
                      mode = "onDisk")
cwp = CentWaveParam(ppm = 10, peakwidth = c(10, 40), noise = 1000)
caasData = findChromPeaks(raw_data, param = cwp)
mnpp = MergeNeighboringPeaksParam(expandRt = 4)
caasData = refineChromPeaks(caasData, param = mnpp)
```

```
Evaluating 702 peaks in file Sam3-2.mzML for merging ...
Stop worker failed with the error: wrong args for environment subassignment
Error: BiocParallel errors
  0 remote errors, element index: 19 unevaluated and other errors
  first remote error:
In addition: Warning messages:
1: In serialize(data, node$con) :
  'package:stats' may not be available when loading
2: In serialize(data, node$con) :
  'package:stats' may not be available when loading
3: In serialize(data, node$con) :
  'package:stats' may not be available when loading
4: In serialize(data, node$con) :
  'package:stats' may not be available when loading
5: In serialize(data, node$con) :
  'package:stats' may not be available when loading
```
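For reference, a minimal sketch of running the refinement step with an explicitly serial backend, which sidesteps worker failures like the one above. This assumes `refineChromPeaks()` forwards a `BPPARAM` argument to BiocParallel, as most xcms processing methods do; check `?refineChromPeaks` for your installed version.

```r
library(xcms)
library(BiocParallel)

## Run the peak-merging refinement without parallel workers. This avoids
## the "Stop worker failed" errors that the default SNOW backend can
## produce on Windows. caasData is the object returned by findChromPeaks().
mnpp <- MergeNeighboringPeaksParam(expandRt = 4)
caasData <- refineChromPeaks(caasData, param = mnpp,
                             BPPARAM = SerialParam())
```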

JunYang2021 commented 2 years ago

I've just encountered this issue. I think you might be using R on Windows, too. On Windows, R starts multiple worker processes to run the computation in parallel, and here one of those workers may have been closed or may no longer be reachable from the main session. The solution is to reduce the number of worker processes. You can get the number of physical cores with:

```r
parallel::detectCores(logical = FALSE)
```

Then register fewer workers than physical cores, for example:

```r
register(bpstart(SnowParam(2)))  ## start 2 workers for parallel processing
```

Then try your refinement again. If it still fails, you can disable parallel processing with `register(SerialParam())`, though it will be slower.

g0079 commented 2 years ago

> I've just encountered this issue. I think you might be using R in Windows, too. The reason is that R in Windows automatically opens multiple processes to finish the computation in parallel, but here one R process may be closed or no longer accessible from the main thread. The solution is reducing the number of computation processes. You can use `parallel::detectCores(logical = FALSE)` to get your physical core count, then set the number of processes (less than the total physical core count) with `register(bpstart(SnowParam(2)))`. Then you can try your refinement again. If it doesn't work, you can use `register(SerialParam())` to cancel parallel processing, but it would be slow.

It worked! In the end I used `register(SerialParam())` to process the data one file at a time, and no more errors were reported. I would never have guessed this was the cause. Can this be considered a bug? Thank you very much @JunYang2021!

JunYang2021 commented 2 years ago

I found this solution in issue https://github.com/sneumann/xcms/issues/627. It was said there that parallel processing on Windows is more easily corrupted. Maybe on Linux this problem would not happen. @g0079

jorainer commented 2 years ago

Great! Thanks @JunYang2021 for answering and providing a solution!

I'm closing the issue now - feel free to re-open if needed.