cbrnr / mnelab

MNELAB – a GUI for MNE

Synergy with TimeView? #137

Closed lxkain closed 7 months ago

lxkain commented 4 years ago

Hello,

I am one of the authors and contributors of

https://github.com/TimeViewers/timeview

It seems like our software and your software have some overlapping goals. I'm wondering if it would make sense to have a conversation about this?

Alex

cbrnr commented 4 years ago

Hi @lxkain! Nice project! It seems like TimeView focuses on audio-like signals (or at least the interface and many functions are tailored towards such time series). In contrast, MNELAB implements a GUI on top of MNE-Python, which is an EEG/MEG processing toolbox. However, I saw that TimeView supports EDF, which is often used to store EEG/MEG and other biosignals, and also OpenXDF (see below).

Do you have specific ideas where our projects could benefit from each other?

For example, one of the weaknesses of MNELAB (at least IMO) is that it uses matplotlib-based visualizations instead of native Qt ones. TimeView uses pyqtgraph to visualize time series, which I like a lot. This is one area where I could imagine TimeView being useful for MNELAB in the future.
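For a sense of what that looks like, here is a minimal pyqtgraph sketch (not MNELAB code; the random data is just a stand-in):

import numpy as np
import pyqtgraph as pg

# pyqtgraph renders into a native Qt scene, so panning and zooming
# stay responsive even for long signals
app = pg.mkQApp()
win = pg.plot(np.random.randn(100_000), title="native Qt plotting")
win.setLabel("bottom", "sample")
pg.exec()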

In other areas I think our projects might be rather different even though superficially they might look the same. For instance, both MNELAB and TimeView support XDF files, but these are different formats with the same name: MNELAB supports XDF, whereas TimeView supports OpenXDF - which I have never heard of before. Or maybe I'm wrong and you do want to focus on biosignals?

I'm pinging @tstenner, @cboulay, and @chkothe just in case they didn't know about the OpenXDF format either (if this is relevant for us feel free to open a new issue in one of our XDF repositories).

tstenner commented 4 years ago

Both OpenXDF and XDF claim to have the same goals and features, but they are entirely different formats.

cbrnr commented 4 years ago

I wouldn't say the same goals, but related ones. Otherwise, agreed.

lxkain commented 4 years ago

TimeView has traditionally focused on audio as well as polysomnography signals (thus the .edf support). All the functionality in the processing menu is provided by custom Python code that is easy to write, so it can easily be extended with new functionality. The interface itself does not have audio functionality baked in; for example, it offers neither playback nor recording.

I agree with you that the strength of TimeView is that it uses the pyqtgraph library, which leads to very fast graphical updates.

As far as input/output formats go, I feel this is an issue orthogonal to the interface presentation and functionality. Would you be willing to watch the video at https://vimeo.com/245480108 to see what I mean by functionality? I find the possibility of presenting and editing labels an important feature in my work, and I imagine others would find it useful too.

As for I/O, that should probably also be taken care of by custom Python plug-ins that convert any useful file format to the native format (a light layer on top of NumPy).
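As a rough sketch of such a plug-in (hypothetical; it assumes the Wave(samples, fs) track type shown later in this thread, and uses WAV via SciPy only as an example source format):

import numpy as np
from scipy.io import wavfile

from timeview.api import Wave  # Wave(samples, fs), as used further below

def wav_to_track(path):
    # hypothetical converter plug-in: any readable format -> native track
    fs, x = wavfile.read(path)  # samples as a NumPy array plus sampling rate
    return Wave(x.astype(np.float64), fs)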

Funding for our project has run out, and we are hoping to join our code with the right group. Since we will be doing new work in NIRS in addition to PSG and audio, we would like to position ourselves closer to MNE.tools.

cbrnr commented 4 years ago

I watched the video - again, great work!

MNELAB aims to provide a graphical wrapper to MNE-Python. More specifically, the two major use cases of MNELAB are:

  1. Supplement MNE-Python, because some tasks are much quicker to do in a GUI than with Python code (for example, opening and browsing a file to view its contents and associated metadata).
  2. Make MNE-Python more accessible to users who are not comfortable writing Python code. Because the GUI follows the same workflow, users can learn the corresponding Python commands as they use it (a history feature logs every action and records the corresponding MNE-Python command; see the sketch right after this list).
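
For instance, for the first use case the logged commands might look like this (a minimal sketch using standard MNE-Python calls; the file name is made up):

import mne

# open a file and inspect its contents and metadata
raw = mne.io.read_raw("recording.edf", preload=True)
print(raw.info)  # channel names, sampling rate, measurement date, ...
raw.plot()       # browse the signals interactively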

I did not want to implement new functionality, but rather stick to the MNE way of processing neurophysiological data (this is not quite true, because MNELAB does contain some additional features which are not part of MNE-Python yet). This allows MNELAB to quickly integrate a large amount of functionality without too much work. Sure, some things are not really ideal (for example, the time viewer is just a rather clunky Matplotlib figure), but I don't have funding to work on this full-time (in fact, this is really a side project mostly developed in my spare time).

Regarding presenting and editing labels, MNE-Python (and thus MNELAB) uses its Matplotlib-based viewer to create, edit, and show annotations. My guess is that the two labeling mechanisms work completely differently under the hood, so it would take some effort to make them compatible.
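On the MNE-Python side, annotations are plain onset/duration/description triples (a minimal sketch; the file name and values are made up):

import mne

# annotations are onset/duration/description triples (times in seconds)
annotations = mne.Annotations(onset=[1.0, 4.5],
                              duration=[0.5, 2.0],
                              description=["blink", "bad_segment"])
raw = mne.io.read_raw("recording.edf")
raw.set_annotations(annotations)
raw.plot()  # the viewer shows these annotations and lets you edit them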

Here are some questions that came to my mind when trying to use TimeView as an EEG viewer:

  1. EEG/MEG viewers typically show individual channels in separate non-overlapping regions. I was wondering if you can do that in TimeView automatically. Suppose you want to view a 64-channel EDF file: is there a way to automatically create separate panels for all channels?
  2. Is there an option to set the background of a panel to e.g. white?
  3. Does TimeView support irregularly sampled time signals (i.e. timestamp-based signals)? This would be nice for viewing XDF files (see the sketch after this list for what such data looks like).
  4. Does TimeView support channels with different sampling rates (GDF and I think even EDF files can contain such data)?
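
To clarify question 3: XDF stores a timestamp for every sample, so a stream is a pair of arrays rather than samples plus a fixed rate (a minimal sketch using the pyxdf reader; the file name is made up):

import pyxdf

# every stream carries explicit per-sample timestamps
streams, header = pyxdf.load_xdf("recording.xdf")
for stream in streams:
    t = stream["time_stamps"]  # one timestamp per sample, possibly irregular
    x = stream["time_series"]  # shape (n_samples, n_channels) for numeric streams
    print(stream["info"]["name"][0], len(t))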

You might want to involve MNE-Python core developers in this discussion. You could ask in our Gitter channel https://gitter.im/mne-tools/mne-python?source=orgpage and maybe link to this discussion if anyone wants to make a comment.

lxkain commented 4 years ago

Thank you for explaining further. It seems like there is tighter integration between MNELAB and MNE-Python than I at first realized. As to your questions:

(1) It would be very easy to implement. In fact, all of the UI has an API; for example, one can write:

import numpy as np
from pathlib import Path

from timeview.api import Track, Wave, TimeView

# read from disk
wav = Track.read(Path(__file__).with_name('speech-mwm.wav'))
lab = Track.read(Path(__file__).with_name('speech-mwm.lab'))

# create ourselves
fs = 16000
x = np.zeros(2 * fs, dtype=np.float64)
x[1 * fs] = 1
syn = Wave(x, fs)

app = TimeView()
app.add_view(wav, 0, y_min=-10_000, y_max=10_000)
app.add_view(lab, 0)
app.add_view(wav, 1, renderer_name='Spectrogram')  # linked
app.add_view(lab, 1)  # linked
app.add_view(syn, 2)
app.add_view(syn, 2, renderer_name='Spectrogram', y_max=4000)  # linked

app.start()
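
Concretely, answering question 1 with that API might look something like this (a hypothetical sketch: it assumes the pyedflib reader for EDF files and the Wave/add_view calls above; TimeView may end up exposing this differently):

import numpy as np
import pyedflib

from timeview.api import TimeView, Wave

# one panel per channel of a 64-channel (or any) EDF file
f = pyedflib.EdfReader("recording.edf")
app = TimeView()
for i in range(f.signals_in_file):
    x = f.readSignal(i).astype(np.float64)
    fs = f.getSampleFrequency(i)
    app.add_view(Wave(x, fs), i)  # panel index i, so each channel gets its own panel
f.close()
app.start()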

(2) We have not implemented this, but it would be easy to add under the rendering options.

(3) Yes, we call those "TimeValue" tracks under the hood, although now that I think about it, we haven't yet exposed that at the UI level, because we haven't had to deal with those kinds of files ourselves until now.

(4) Yes.

If we get to the point where we have more resources to work on this, we would be happy to set some common goals, or something like that. Meanwhile, I am glad that I am aware of your project now!

cbrnr commented 7 months ago

Meanwhile, we have created our own PyQtGraph-based EEG browser (https://github.com/mne-tools/mne-qt-browser). I still think it would be great to collaborate, but this should happen in that repository, so I'm closing this issue here. Please feel free to open an issue over there!
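
For reference, the Qt browser can be used from MNE-Python like this (a minimal sketch; the file name is made up):

import mne

mne.viz.set_browser_backend("qt")  # requires the mne-qt-browser package
raw = mne.io.read_raw("recording.edf")
raw.plot()  # opens the PyQtGraph-based browser instead of the Matplotlib one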