nipy / PySurfer

Cortical neuroimaging visualization in Python
https://pysurfer.github.io/
BSD 3-Clause "New" or "Revised" License
239 stars · 98 forks

visual artifacts when using threshold in add_data #303

Open rmasiso opened 3 years ago

rmasiso commented 3 years ago

I am getting some artifacts when I try to use the threshold argument when visualizing a numpy array in pysurfer.

I was unable to install pysurfer directly on a Mac running macOS Sierra 10.12.6 because of an issue with vtk. I was able to run pysurfer once I used the environment file from the latest MNE GitHub repo: https://github.com/mne-tools/mne-python (see bottom for the environment.yml contents).

I created the conda env below (which ended up with Python 3.9.6) with the following line:

conda env create -f environment.yml

I then opened a jupyter notebook and ran the following two calls on an fsaverage6 brain:

brain.add_data(L, min=L_min, max=L_max, thresh=None, colormap='hot', alpha=1,
               time=None, colorbar=True, hemi='lh', mid=None, center=None,
               transparent=False, verbose=None)

brain.add_data(R, min=R_min, max=R_max, thresh=R_min, colormap='hot', alpha=1,
               time=None, colorbar=True, hemi='rh', mid=None, center=None,
               transparent=False, verbose=None, vector_alpha=1, vertices=R_verts)

The first call (left hemisphere) does not use the thresh argument, while the second call (right hemisphere) does. L and R are numpy arrays of length 40962 (since I'm using fsaverage6). As you can see from the picture below, I get some interesting artifacts that I can't seem to remove. I've tried specifying the vertices to render (as in the code above), and I've also tried turning the NaN vertices into 0s to rule out a NaN problem, but that's not it. The left side of the image below doesn't have the artifacts, but it renders the NaNs in the default red color, which is what I wanted to remove by using thresh. While using thresh does remove the default red color, it produces these aliasing-like artifacts.
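For future readers, one workaround for the default-red NaN rendering that avoids thresh= entirely is to pre-mask the array before handing it to add_data: either blank sub-threshold vertices to NaN, or pass only the suprathreshold vertices via the vertices argument. A minimal numpy sketch (the overlay R here is random stand-in data, not the real analysis output):

```python
import numpy as np

n_verts = 40962  # fsaverage6 vertices per hemisphere
rng = np.random.default_rng(0)
R = rng.uniform(0.0, 1.0, n_verts)  # hypothetical stand-in for the real overlay
R_min = 0.3

# Option 1: blank sub-threshold vertices before plotting, instead of thresh=
R_masked = np.where(R >= R_min, R, np.nan)

# Option 2: keep only suprathreshold vertices, for the sparse form that
# add_data's `vertices` argument expects (data and vertex indices together)
keep = np.flatnonzero(R >= R_min)
R_sparse = R[keep]

print(R_masked.shape, R_sparse.size == keep.size)
```

Whether the renderer then draws the masked vertices cleanly still depends on the backend's blending, so treat this as a pre-processing sketch rather than a guaranteed fix.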

[screenshot: pysurfer_aliasing — split view with artifacts on the thresholded right hemisphere]

And a close-up:

[screenshot: close-up of the artifacts]

Any help would be appreciated! Thank you so much!

environment.yml file below

name: mne
channels:
- conda-forge
dependencies:
- python>=3.8
- pip
- numpy
- scipy
- matplotlib
- numba
- pandas
- xlrd
- scikit-learn
- h5py
- pillow
- statsmodels
- jupyter
- joblib
- psutil
- numexpr
- imageio
- tqdm
- spyder-kernels>=1.10.0
- imageio-ffmpeg>=0.4.1
- vtk>=9.0.1
- pyvista>=0.30
- pyvistaqt>=0.4
- qdarkstyle
- darkdetect
- mayavi
- PySurfer
- dipy
- nibabel
- nilearn
- python-picard
- pyqt!=5.15.3
- mne
- mffpy>=0.5.7
- ipywidgets
- pip:
  - ipyvtklink
larsoner commented 2 years ago

These blending issues are tricky to fix. You could try the PyVista-based backend via mne.viz.Brain instead; there, alpha blending is done on the CPU and a single surface is used, so it shouldn't have these problems.

rmasiso commented 2 years ago

> These blending issues are tricky to fix. You could try the PyVista-based backend via mne.viz.Brain instead; there, alpha blending is done on the CPU and a single surface is used, so it shouldn't have these problems.

Thanks so much for the quick reply!!

I did end up doing it that way, with the MNE PyVista backend, but I was having trouble customizing certain things, like the locations of the colorbars (see image below). I also really liked the fine control over color values, like here. But I can probably do that with pyvista too, right? I guess I got used to some of the pysurfer and mayavi functionality.
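On the color-control point, one way to get a comparable level of control with the PyVista-based backend is to build a custom RGBA lookup table with matplotlib and pass that as the colormap. This is a sketch under the assumption that the backend accepts an N x 4 RGBA array (or the matplotlib colormap object itself) for its colormap parameter; only the table construction is shown:

```python
import numpy as np
from matplotlib.colors import LinearSegmentedColormap

# Build a 256 x 4 RGBA lookup table from a few control colors.
# "hot_like" and the color stops are illustrative choices, not fixed names.
cmap = LinearSegmentedColormap.from_list(
    "hot_like", ["black", "red", "yellow", "white"])
lut = cmap(np.linspace(0.0, 1.0, 256))  # shape (256, 4), values in [0, 1]

print(lut.shape)
```

The table can then be handed to the plotting call in place of the 'hot' string, which gives the same kind of per-color control the classic pysurfer/mayavi workflow allowed.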

This is the code for visualizing the same things through mne (if it is helpful for any future researchers):

Brain = mne.viz.get_brain_class()
brain = Brain('fsaverage6', hemi='split', surf='5_inflated',
              subjects_dir=subjects_dir, size=(800, 800),
              views=['lat', 'med'], background='white', show_toolbar=True)

brain.add_data(L, fmin=L_min, fmax=L_max, thresh=None, colormap='hot', alpha=1,
               time=None, colorbar=True, hemi='lh', fmid=None, center=None,
               transparent=False, verbose=None)

brain.add_data(R, fmin=R_min, fmax=R_max, thresh=None, colormap='hot', alpha=1,
               time=None, colorbar=True, hemi='rh', fmid=None, center=None,
               transparent=False, verbose=None, vector_alpha=1, vertices=R_verts)

visualization (with colorbar placement issue):

[screenshot: split view showing the colorbar placement issue]

Do you know of any easy ways (or code examples you have encountered) to redo some of the classic example gallery pysurfer visualizations like the RGB color and plotting activation foci from MNI coordinates?

For example, I would have expected the pysurfer wrapper for brain.add_foci to work as it does in this example.

But when I use:

seed_coords = (-45, -67, 36)
brain.add_foci(seed_coords, map_surface=None, hemi='lh')

instead of getting the marker on the angular gyrus, I get this (it looks like it might be close on one axis, but incorrectly scaled on another): [screenshot: mis-placed focus marker]

I understand if you can't answer these questions. I might go to the MNE support channels, but I figured I might as well check with you too!

Thanks again for your quick response!

larsoner commented 2 years ago

Using add_foci with millimeter coordinates (rather than a vertex number) on the inflated surface is strange. It will probably match / be correct if you use pial, for example, or if you switch to a vertex number.
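The vertex-number route can be sketched like this: find the vertex on the anatomical surface nearest to the seed coordinate, then plot by index so the inflated view stays consistent. In practice the vertex coordinates would come from nibabel.freesurfer.read_geometry('lh.white') and the seed is in surface-RAS millimeters; here a random coordinate array stands in for the real surface:

```python
import numpy as np

# Stand-in for real surface geometry; in practice:
#   coords, faces = nibabel.freesurfer.read_geometry('lh.white')
rng = np.random.default_rng(0)
coords = rng.uniform(-70.0, 70.0, size=(40962, 3))  # hypothetical vertex coords (mm)
seed = np.array([-45.0, -67.0, 36.0])               # seed in surface-RAS mm

# Nearest vertex by Euclidean distance
vertex_id = int(np.argmin(np.linalg.norm(coords - seed, axis=1)))

# Then plot by vertex index rather than by coordinate, e.g. something like:
#   brain.add_foci([vertex_id], coords_as_verts=True, hemi='lh')
print(vertex_id)
```

The add_foci call in the comment is a sketch of the suggested switch, not a tested invocation against either backend.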

But yes, please use https://mne.discourse.group for these usage questions. These do seem like use cases we want to support over in MNE.

rmasiso commented 2 years ago

> Using add_foci with millimeter coordinates (rather than a vertex number) on the inflated surface is strange. It will probably match / be correct if you use pial, for example, or if you switch to a vertex number.
>
> But yes, please use https://mne.discourse.group for these usage questions. These do seem like use cases we want to support over in MNE.

Thanks again! I've moved over this discussion to a new thread at mne.discourse.group.