zamboni-lab / SLAW

Scalable and self-optimizing processing workflow for untargeted LC-MS
GNU General Public License v2.0

Java error #4

Closed dwalke04 closed 2 years ago

dwalke04 commented 2 years ago

Hello @adelabriere SLAW seems to be working well for OpenMS. However, when I try to run different peak picking algorithms (in this case ADAP), I seem to be running into issues with Java. I received this error on both a Mac and a PC, both using Docker. The error is reprinted below, and I've attached the parameters and debug files.

Thanks!

OpenJDK Runtime Environment (build 12.0.2+10)
OpenJDK 64-Bit Server VM (build 12.0.2+10, mixed mode)
Can't load log handler "java.util.logging.FileHandler"
java.nio.file.NoSuchFileException: log/mzmine.log.lck
java.nio.file.NoSuchFileException: log/mzmine.log.lck
    at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
    at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
    at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
    at java.base/sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:182)
    at java.base/java.nio.channels.FileChannel.open(FileChannel.java:292)
    at java.base/java.nio.channels.FileChannel.open(FileChannel.java:345)
    at java.logging/java.util.logging.FileHandler.openFiles(FileHandler.java:511)
    at java.logging/java.util.logging.FileHandler.<init>(FileHandler.java:278)
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:500)
    at java.base/java.lang.reflect.ReflectAccess.newInstance(ReflectAccess.java:166)
    at java.base/jdk.internal.reflect.ReflectionFactory.newInstance(ReflectionFactory.java:404)
    at java.base/java.lang.Class.newInstance(Class.java:590)
    at java.logging/java.util.logging.LogManager.createLoggerHandlers(LogManager.java:1000)
    at java.logging/java.util.logging.LogManager$4.run(LogManager.java:970)
    at java.logging/java.util.logging.LogManager$4.run(LogManager.java:966)
    at java.base/java.security.AccessController.doPrivileged(AccessController.java:310)
    at java.logging/java.util.logging.LogManager.loadLoggerHandlers(LogManager.java:966)
    at java.logging/java.util.logging.LogManager.initializeGlobalHandlers(LogManager.java:2417)
    at java.logging/java.util.logging.LogManager$RootLogger.accessCheckedHandlers(LogManager.java:2511)
    at java.logging/java.util.logging.Logger.getHandlers(Logger.java:2089)
    at java.logging/java.util.logging.Logger.log(Logger.java:976)
    at java.logging/java.util.logging.Logger.doLog(Logger.java:1006)
    at java.logging/java.util.logging.Logger.log(Logger.java:1029)
    at java.logging/java.util.logging.Logger.info(Logger.java:1802)
    at net.sf.mzmine.main.MZmineCore.main(MZmineCore.java:82)

SLAW_Debug_Log_12Nov.txt
parameters.txt

adelabriere commented 2 years ago

Hi @dwalke04, I did not manage to reproduce your issue on a Windows computer. Moreover, the Java processing actually seems to have finished correctly, as the tables were exported:

2021-11-11|02:52:20|INFO: Peak picking finished 0 files not processed on a total of 80

The Java message only seems to come from the logging setup (MZmine failing to create its log file) and is harmless here. The real error comes later, and I did not manage to reproduce it:

  File "wrapper_docker.py", line 223, in <module>
    exp.filter_peaktables(filtering_string)
  File "/pylcmsprocessing/model/experiment.py", line 674, in filter_peaktables
    executor.map(partial(fp.par_peaktable_filtering,peak_filter=peak_filter),all_peaktables)
  File "/usr/lib/python3.8/multiprocessing/pool.py", line 364, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "/usr/lib/python3.8/multiprocessing/pool.py", line 771, in get
    raise self._value
KeyError: 'intensity'
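
This KeyError is what the filtering step raises when the table it reads does not contain an 'intensity' column, for example if a file contains only headers. A minimal sketch of that failure mode (pandas, with a made-up file name and threshold, not the actual SLAW filtering code):

```python
import pandas as pd

# A header-only or malformed export yields a DataFrame without an 'intensity' column.
peaktable = pd.read_csv("ADAP/peaktables/example.csv")  # hypothetical file name

# Any filtering on intensity then fails exactly as in the traceback above.
filtered = peaktable[peaktable["intensity"] > 1000]  # KeyError: 'intensity' if the column is missing
```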

It seems to indicate that a peaktable ended up in the wrong format: the tables are converted from the MZmine output to a more generic format that you can find in ADAP/peaktables. Could you check what is in the ADAP/peaktables folder, see whether there is an empty peaktable, and copy the header of one table here? Could you also tell me roughly how big each table is in MB, and whether some have a really different size (for instance with a quick listing like the one below)? ADAP is the least used peak picker in our lab: it is the slowest, the hardest to optimize because of its high number of parameters, and also the most memory consuming. Maybe for now use CENTWAVE or openMS until I figure out what is wrong with this one.
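
Something along these lines would print the size and header of every converted table at once (run from the SLAW output directory; the path and extension are assumptions on my side, adjust them if needed):

```python
import glob
import os

# Assumed location of the converted peaktables inside the SLAW output folder.
for path in sorted(glob.glob("ADAP/peaktables/*.csv")):
    size_kb = os.path.getsize(path) / 1024
    with open(path) as handle:
        header = handle.readline().strip()
    print(f"{path}\t{size_kb:.1f} kB\t{header}")
```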

Thanks, Alexis

dwalke04 commented 2 years ago

Hi @adelabriere, Thank you for this additional information. I checked the ADAP peak table folder: all of the peak table files except one were empty and 1 byte in size. The one remaining peak table file was 67 bytes and only contained the table headers. I've attached the file with the headers and another blank file. The issue seems to be that the peak tables are not being written to the files. The merged MSMS files were output to the ADAP folder. Please let me know if I can provide anything else to help resolve this issue.

If we are not able to run ADAP peak detection, can we still perform the optimization step? I was under the impression that it would evaluate all three peak picking routines.

HRE_HFX_211014_HRE0037_C18pos_024.csv HRE_HFX_211014_HRE0037_C18pos_025.csv

adelabriere commented 2 years ago

That is not the case, but maybe it will be added in the future; it seems like a reasonable idea and I'll maybe add it at some point. However, this bug seems pretty serious. Would you be okay with sending me a subset of mzML files (5 should be enough) and the parameters file by email, so that I can investigate and see if I manage to reproduce it?

dwalke04 commented 2 years ago

Thank you! I just emailed you some test files and the parameters file.

adelabriere commented 2 years ago

I can confirm that I reproduced the bug with the latest version of the Docker image, but not with the release version from one week ago. It is surprising, since I did not update anything related to MZmine. So the issue is on my side, and I am investigating.

dwalke04 commented 2 years ago

Thank you!

adelabriere commented 2 years ago

Dear @dwalke04

The bug was caused by a very off initial parametrization of ADAP: no peaks at all were detected, so it was not a SLAW problem. I modified min_scan, rt_wavelet and peakwidth to better fit the data.

ADAP is the hardest algorithm to tune, so in cases like this you may still have to play with its parameters yourself.

I changed the default ADAP parameters to be more permissive on adelabriere/slaw:dev to try to avoid this situation.

I sent you an updated parameters.txt by email.