drammock opened 2 years ago
+1 I recently started working on this project. I will come back soon with a couple of questions/recommendations for the raw data browser. Feel free to contact me for anything!
@Grifunf I'm going to answer your forum question here, let's keep discussion about implementation details on GitHub from now on. Here's your question:
I have started working on this project as part of my diploma thesis. I aim to enhance both amplitude and time scaling settings, as well as the monitor sensitivity mentioned before. My only concern for the time being is the values of the dropboxes, which you claim to be standardized. I can agree for the time-related ones, but what about the amplitude dropboxes? I saw the screenshots for EEG, but what about the other types of channels? My approach would be to use the already-used Inverted Norm and populate the dropboxes with standard items around that norm. If not, I guess it would require some kind of dictionary with the commonly used items for every type, as well as a valid source for all this information. Anyways, please feel free to contact me for any information needed.
MEG isn't used much for diagnostic purposes, so it's unlikely that there are "standard" sensitivity scalings out there for MEG channels. Moreover, it will start to get hectic if we add separate dropboxes for every channel type. Some ideas of how to do it:
The `scalings` dict will accept string values formatted like `"10 uV/mm"` and then calculate the corresponding float value behind-the-scenes (taking into account display DPI). This should generalize fairly well to all channel types (and let you control them independently).

Hello, and thank you for the prompt reply!
First of all, I'm fairly new to the MNE and encephalography, so any remarks or corrections are more than welcome. I would like to share more details about the small progress I've made so far and the direction I'm in, and have your feedback on them.
For starters, I have distinguished 2 tasks:
I had started working on the Amplitude Settings Dialog. The initial idea was to be able to scale 1) all channels, 2) a specific channel type, and 3) a specific channel, either from the default dropbox items or via manual input (that is why I chose dropboxes over dropdown menus). Here is a screenshot of the idea. As you can see, the example file I was working on has a lot of channel types (of course, all names, units and values are placeholders and can be changed according to our needs). My initial thinking was to make the functionality able to support all those different types.
So here are my first 2 questions. I am sorry for the long post, please take your time to respond. Thanks in advance.
- Do only the EEG channels matter? I got confused by your comment on MEG.
My understanding is that the "sensitivity" and "timebase" scaling are used for diagnostic applications of EEG (trying to detect epileptic events, sleep abnormalities, etc), where it is critically important for the clinician to see the same "zoom level" every time they look at a recording. MEG is not normally used for disease diagnosis (only for research), therefore "sensitivity" and "timebase" are not normally used, and standardized sensitivity values for MEG data are not in common use as far as I know.
The STIM channel type is probably useless and can easily be removed, but in my example file there are more MEG than EEG channels.
The number of individual channels doesn't matter, it's the number of channel types we need to worry about. I agree that scaling STIM channels is usually not necessary/helpful.
Also, I've read in the documentation that there are even more types, so it makes sense to be able to support them all.
I disagree, for the reasons stated above. Standardized sensitivity values are part of clinical diagnostic workflows, but are not widely used in research, and only some channel types are used diagnostically. So to me it doesn't make sense to choose sensitivity values for MAG, GRAD, FNIRS, etc. channel types when there aren't pre-established standard sensitivity values already in clinical use for those data types.
So this is why I think that we should consider approaches that either (1) are flexible in what sensitivities are allowed (i.e., making the `scalings` dict accept strings like `'10 uV/mm'` for EEG or `'10 fT/mm'` for magnetometers), or (2) only support the standardized EEG values for sensitivity, and scale all other channel types according to the conversion factor implied by the chosen EEG sensitivity.
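If option (1) is pursued, a minimal sketch of parsing such a sensitivity string into a data-units-per-pixel factor could look like this (the function name, regex, and prefix table are illustrative assumptions, not existing MNE API):

```python
import re

# Hypothetical helper: convert a sensitivity string like "10 uV/mm" into a
# data-units-per-pixel factor, given the display DPI.
_PREFIXES = {"": 1.0, "m": 1e-3, "u": 1e-6, "n": 1e-9, "p": 1e-12, "f": 1e-15}

def sensitivity_to_units_per_px(sensitivity, dpi=96.0):
    """Parse e.g. '10 uV/mm' -> data units (here volts) per screen pixel."""
    match = re.fullmatch(r"\s*([\d.]+)\s*([munpf]?)([A-Za-z]+)/mm\s*", sensitivity)
    if match is None:
        raise ValueError(f"Cannot parse sensitivity: {sensitivity!r}")
    value, prefix, _unit = match.groups()
    units_per_mm = float(value) * _PREFIXES[prefix]
    px_per_mm = dpi / 25.4           # 1 inch = 25.4 mm
    return units_per_mm / px_per_mm  # data units per pixel

print(sensitivity_to_units_per_px("10 uV/mm", dpi=96))  # ~2.6e-6 V/px
```

The 25.4 mm-per-inch constant is what ties the scaling to physical display size, which is why knowing the true DPI matters for this feature.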
@Grifunf we discussed this issue today in our developer's meeting. Here is a recommended course of action from that discussion. Each of these could be a separate PR, though steps 1 and 2 might be fairly easy to do all at once in one PR.
Notes: supporting strings like `'10 uV/mm'` in the `scalings` dict will probably happen more-or-less for free by the time we reach step 3 or 4.

You have been really helpful, thanks again. For starters I will work with your guidelines. I would only like to bring up a couple of issues of my own:
- Using only the `scalings` dict is a good idea; one question however is how this will interact with the existing global `scale_factor` and the Zoom In/Out buttons, but I guess they can work together. Example: if the user chooses 10 uV sensitivity and then doubles via the buttons, it should go to 5 uV and scale accordingly, even if it has a different overall factor from the rest of the channels.
- On this note, although this should probably be another issue, I am using the ch_types_ordered and the inv_norm snippets for the same purposes as the ScaleBars. They also exist in other places in the file. Could we possibly add those to the `mne` object so they are accessible from elsewhere and to avoid repetition?

I haven't followed this discussion very closely, but I still wanted to add my thoughts. I see two issues with the current plan:
That said, I'm fine with adding sensitivity-based scalings, but I'd prefer to have them for both backends.
- It looks like this feature will be implemented only for the Qt-based browser. Up until now, we've tried to avoid having the two backends diverge, so I think it would be good to involve more people in the decision if we want to do that.
There is no reason not to add it to the MPL browser too. In general I'm happy if contributors do one or the other and add an issue to remind us to restore parity later. Asking to do both at the same time can be a deterrent, and anyway the Qt browser is in a separate repo, so separate PRs are necessary anyway.
- The main motivation for implementing new scalings seems to be clinical applications.
"clinical" can mean diagnostic practice, or it can mean research on diseased/disordered populations. So one use case is "clinicians (who are used to certain ways of viewing data) doing research (not diagnosis)".
Legally, we already disclaim all warranties via our license, and doctors are not allowed to use it anyway in their clinical practice, but still. Should we include a warning, at least in the docs?
I've seen explicit disclaimers in the licenses of other software (like 3D slicer) saying things like "we neither recommend nor endorse using this for diagnosis". Recent conversations I've had convinced me that we don't need that because (at least in the US) the main constraints (1) only apply if you're selling the medical device (software is a "device" by FDA definitions) and (2) mostly are about what claims you can make when marketing the device. As long as we say things like "useful for clinical applications" and don't say "you can use this to diagnose epilepsy" then we're fine.
- Time scaling feels like something that shouldn't be different across multiple channels/types. It would probably be a design nightmare too, but I think it has no realistic use case, and could possibly be a nuisance for the user. Please correct me if I'm wrong.
agreed, time scaling should be one setting for all channel types.
- Using only the `scalings` dict is a good idea; one question however is how this will interact with the existing global `scale_factor` and the Zoom In/Out buttons, but I guess they can work together. Example: if the user chooses 10 uV sensitivity and then doubles via the buttons, it should go to 5 uV and scale accordingly, even if it has a different overall factor from the rest of the channels.
We need to think about whether it would make sense to change the behavior of the +/- buttons (or disable them completely?) when using predefined sensitivity settings.
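As a hypothetical sketch of one option: instead of disabling the +/- buttons, they could operate directly on the selected sensitivity, preserving the "10 uV -> 5 uV" behavior described above (class and method names are illustrative, not actual browser code):

```python
# Hypothetical sketch: the zoom buttons halve/double the active sensitivity
# rather than multiplying a unitless scale factor. Names are illustrative.
class SensitivityState:
    def __init__(self, sensitivity_uv_per_mm=10.0):
        self.sensitivity = sensitivity_uv_per_mm  # e.g. 10 uV/mm

    def zoom_in(self):
        # Zooming in means taller traces, i.e. fewer uV per mm on screen.
        self.sensitivity /= 2

    def zoom_out(self):
        self.sensitivity *= 2

state = SensitivityState(10.0)
state.zoom_in()
print(state.sensitivity)  # 5.0, matching the example above
```

A design like this keeps the displayed sensitivity value meaningful after zooming, instead of silently drifting away from the dropdown selection.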
- I will probably exclude the STIM channels from this.
STIM channels will be affected by timebase, and they already can be affected by the `scalings` dict. But yeah, I don't think we need any sensitivity dropdown to scale stim channels.
- On this note, although this should probably be another issue, I am using the ch_types_ordered and the inv_norm snippets for the same purposes as the ScaleBars. They also exist in other places in the file. Could we possibly add those to the mne object so they are accessible from elsewhere and to avoid repetition?
If those variables are needed then sure, you can add them there. Just know that `fig.mne` is currently undocumented and not very well organized; things might move around (or be made private) at some point. That shouldn't deter you though; end-users shouldn't need access to those variables, and it's OK for module-internal code to access private variables.
FWIW, I just noticed that EDFbrowser lets users override the detected DPI like this:
Maybe something to consider adding to our browsers as well?
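For context, the reason a DPI override matters is that all of the mm-based sensitivity/timebase math reduces to a pixels-per-millimetre factor derived from the DPI, and operating systems often misreport the true value. A quick illustration (hypothetical helper, not EDFbrowser or MNE code):

```python
# Sketch: the pixels needed to draw one physical millimetre depend on the
# true DPI. If the OS misreports DPI, every mm-based scaling is off.
def px_per_mm(dpi):
    return dpi / 25.4  # 25.4 mm per inch

print(px_per_mm(96))   # ~3.78 px/mm (OS-reported DPI)
print(px_per_mm(110))  # ~4.33 px/mm (user-measured/overridden DPI)
```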
Hello again @drammock. I'm sorry for having to ask here, but I'm kinda lost on how to modify in place specific types, from within the pg_figure module. The dicts you mentioned seem to only affect the command-line function on the startup of the browser, and the ScaleBars. I'm also wondering if apply_function is of any relevance. Sorry if my question is answered elsewhere or doesn't make sense; I would appreciate any kind of advice.
I'm kinda lost on how to modify in place specific types, from within the pg_figure module.
It's hard to know for sure what you're asking here. It would be much easier to answer if you could open a work-in-progress pull request on the mne_qt_browser repository, and point to specific line(s) of code where you're trying and failing to do something.
The dicts you mentioned seem to only affect the command line function on the startup of the browser, and the Scalebars.
Do you mean the `scalings` dict? You are right, modifying that won't have any effect once the plot is created.
I'm also wondering if apply_function is of any relevance.
I don't think so; we don't want to modify the underlying data, we just want to change how much of it is displayed (time base) and how zoomed in it is vertically (sensitivity).
The best advice I can give is to work on the steps I outlined previously, one at a time, in order, and open a work-in-progress draft pull request as soon as possible. It is much much easier for us to help you when looking at and commenting directly on the code, than when discussing problems abstractly here.
Hello again! So I have opened the first draft pull request and I hope I will be able to complete it soon. I have a couple of questions for the time being:
`array[array * scale > threshold] = np.nan`

I need to use the index of the array in order to use the correct scale factor. For example:

`arr[arr[i] * scale[i] > threshold] = np.nan`

I tried some for-loop and enumerate iterations but it has become incredibly slow. Do you have any advice on how to tackle this? Thanks in advance!
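One way to avoid the Python loop entirely is NumPy broadcasting: reshape the per-channel scale vector so it broadcasts across the time axis, and build a single boolean mask. A small self-contained sketch (array shapes and values are illustrative):

```python
import numpy as np

# Sketch of vectorizing the per-channel thresholding without a Python loop.
# `data` is (n_channels, n_times); `scale` holds one factor per channel.
data = np.array([[1.0, 2.0, 3.0],
                 [1.0, 2.0, 3.0]])
scale = np.array([1.0, 2.0])   # per-channel scale factors
threshold = 3.5

# scale[:, np.newaxis] has shape (n_channels, 1), so it broadcasts over the
# time axis, replacing the enumerate loop with one vectorized comparison.
mask = data * scale[:, np.newaxis] > threshold
data[mask] = np.nan

print(data)
# [[ 1.  2.  3.]
#  [ 1. nan nan]]
```

The mask is computed once for the whole array in C-level NumPy code, which is typically orders of magnitude faster than iterating over channels in Python.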
I'm actually in the middle of reviewing that right this moment. Stand by for feedback on the open PR.
From the forum: https://mne.discourse.group/t/scaling-sensitivity-uv-mm/5079
The idea is that the raw data browser should have a way to take into account the DPI of the display when setting the signal scaling (both amplitude scaling, and when setting the duration of signal shown onscreen). This is apparently important for clinical practice, where certain waveform patterns are diagnostically useful, and visual recognition of those patterns is dependent on knowing the scaling (e.g., "if I see such-and-such pattern when viewing at 7mm/uV and 30mm/sec, I know it was a seizure").
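To make the arithmetic concrete, here is a small sketch of how a mm/sec timebase plus the display DPI determines how many seconds of signal fit on screen (hypothetical helper, not existing browser code):

```python
# Sketch: given a timebase in mm/sec and the physical window width (derived
# from pixel width and DPI), compute the seconds of signal shown onscreen.
def seconds_on_screen(window_px, dpi, timebase_mm_per_s=30.0):
    window_mm = window_px / dpi * 25.4  # 25.4 mm per inch
    return window_mm / timebase_mm_per_s

print(seconds_on_screen(window_px=1200, dpi=96))  # ~10.58 s at 30 mm/s
```

This is why both the timebase dropdown and the DPI value are needed: the same 1200-pixel-wide window shows different amounts of signal on physically different displays.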
The linked forum issue has details about which scaling values are diagnostically useful.