lina-usc / pylossless

🧠 EEG Processing pipeline that annotates continuous data
https://pylossless.readthedocs.io/en/latest/
MIT License

Lossless Logging Localization #79

Closed · Andesha closed this issue 1 year ago

Andesha commented 1 year ago

In the old version there was a fairly robust logging system for each stage that was being executed.

I believe this was previously just information printed to stdout and redirected to a file via a SLURM option...

What we should do instead is some of the following:

scott-huberty commented 1 year ago

> Figure out what MNE is writing to stdout and whether we need it for logging
>
> • We can capture this somehow, I think

MNE exposes a logger object, mne.utils.logger. The verbosity level can also be set by the user or the program (e.g. INFO, DEBUG): https://mne.tools/stable/logging.html

We could just use mne.utils.logger in our pipeline too (i.e. logger.info(), logger.debug()). You'll see in pylossless/pipeline.py that I have already imported this logger.

But if you want to suggest a different tool (for example, logging from the standard library), I'm open to that too.
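If we went the standard-library route, a minimal sketch might look like this (the logger name, stage names, and `run_step` helper are illustrative, not the actual pylossless API):

```python
import logging

# Hypothetical pipeline logger; names here are assumptions for illustration.
logger = logging.getLogger("pylossless")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(asctime)s [%(levelname)s] %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)


def run_step(name, func):
    """Log the start and end of one pipeline stage, then return its result."""
    logger.info("Starting stage: %s", name)
    result = func()
    logger.info("Finished stage: %s ✅", name)
    return result


run_step("filtering", lambda: None)
```

The same `logger.info()` / `logger.debug()` calls would work unchanged if we swapped in mne.utils.logger, since it is also a standard `logging.Logger` instance.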

scott-huberty commented 1 year ago

Bonus points if we can include some emojis like :white_check_mark: for major steps of the pipeline ; )

Andesha commented 1 year ago

> Bonus points if we can include some emojis like :white_check_mark: for major steps of the pipeline ; )

I don't know what that would do on the cluster, but let's try :laughing:

> We could just use mne.utils.logger in our pipeline too (i.e. logger.info(), logger.debug()). You'll see in pylossless/pipeline.py that I have already imported this logger.

I had forgotten (or not noticed?), but that sounds just right!

christian-oreilly commented 1 year ago

Yes, indeed, it should definitely rely on and extend MNE's logging utils... I thought we discussed this in a separate issue, but I could not find it. Maybe we just discussed it and never did anything with it.

Note @Andesha that we did not implement anything specifically for SLURM execution. The pipeline can be used in a SLURM environment more or less easily by calling the Python script from Bash, but I had in mind (although it is not high on my priority list) a somewhat more user-friendly integration (i.e., adding the SLURM conf to our YAML project conf and launching the SLURM job directly through Python).
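A rough sketch of that idea, assuming a hypothetical SLURM section in the YAML project conf (the keys, the `build_sbatch_cmd` helper, and the script name below are made up for illustration, not an actual pylossless schema):

```python
# Hypothetical SLURM section as it might be parsed from the YAML project conf.
slurm_conf = {
    "time": "02:00:00",
    "mem": "8G",
    "cpus_per_task": 4,
    "account": "def-lab",
}


def build_sbatch_cmd(conf, script="run_pipeline.sh"):
    """Translate the SLURM conf section into an sbatch command line."""
    cmd = ["sbatch"]
    for key, value in conf.items():
        # YAML-style keys use underscores; sbatch long options use hyphens.
        cmd.append(f"--{key.replace('_', '-')}={value}")
    cmd.append(script)
    return cmd


cmd = build_sbatch_cmd(slurm_conf)
# On a cluster, the job could then be submitted with:
# subprocess.run(cmd, check=True)
```

Building the argument list in Python (rather than templating a Bash script) keeps the SLURM options in one place next to the rest of the project conf.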